category (stringclasses, 191 values) | search_query (stringclasses, 434 values) | search_type (stringclasses, 2 values) | search_engine_input (stringclasses, 748 values) | url (stringlengths 22–468) | title (stringlengths 1–77) | text_raw (stringlengths 1.17k–459k) | text_window (stringlengths 545–2.63k) | stance (stringclasses, 2 values)
---|---|---|---|---|---|---|---|---|
Selenology | Are lunar cycles linked to human behavior and health? | yes_statement | "lunar" "cycles" have an impact on "human" "behavior" and "health".. "human" "behavior" and "health" are influenced by "lunar" "cycles". | https://www.eviemagazine.com/post/science-how-the-moon-can-affect-health-lunar-cycles-sleep-period-mood | The Science On How The Moon Can Affect Health, And How Lunar ... | The Science On How The Moon Can Affect Health, And How Lunar Cycles Can Influence Your Sleep, Period, And Mood
Believe it or not, the full moon actually does have an effect on your sleep! And that's not all – the moon cycles might influence your period and mood as well. Here’s the science behind the moon and how it can play a role in how we feel.
You, too, can blame the moon without believing in astrology – and you don’t have to feel weird about it! I mean, our bodies are one with nature after all. If you think about it, the sun gives us life and energy, and it stimulates our circadian rhythm. So maybe it's time we all recognize the role the lunar cycles play in our wellbeing as well! Here are some of the ways that the moon can affect your health.
It’s Harder To Sleep on a Full Moon
I’ve noticed that I have a harder time falling asleep on a full moon – I toss and turn in bed all night. As it turns out, studies show that people take longer to doze off and have poorer sleep quality in general during a full moon – even if their curtains completely block out the light. Researchers found that in the days before and after a full moon, people had 30% less deep sleep. In one survey, volunteers even reported poorer sleep during this period.
"This paper showed that it's possible to detect a correlation between the human sleep cycle and lunar phases, which strongly suggests to me that there is some kind of synchronization," says one neuroscientist Kristin Tessmar-Raible. "And the question now is what is the mechanism behind this?" According to biologist Christian Cajochen, our circadian rhythm is likely influenced by the lunar cycle.
Perhaps instead of wasting our time in bed trying to sleep during a full moon, we can just embrace the wakefulness it brings us and use it to write, read, or meditate.
Can the Moon Affect Human Behavior and Moods?
Maybe! It’s unclear, but there are some studies that might support this theory. David Avery, a psychiatrist, met one engineer in the psych ward who admitted that he would experience extreme mood swings. The man had an unpredictable and erratic sleep schedule, ranging from being completely unable to doze off to sleeping 12 hours a night. Avery studied his patient's patterns, and he was surprised to find that the engineer's struggles synced with the rise and fall of the oceans – which are heavily dictated by the moon. "There seemed to be high tides occurring during the night when the sleep duration was short," he explained.
The sun and moon influence the behavior of animals, plants, and humans.
Interestingly, another psychiatrist named Thomas Wehr found that the manic and depressive cycles in his patients with bipolar disorder correlated with the moon's gravitational cycles. So, the lack of sleep, possible hormonal changes, and the gravitational pull of the moon – in theory – could all influence people’s moods.
The Moon and Your Menstrual Cycle
A menstrual cycle, on average, lasts 28 to 29 days – that’s about how long the moon cycle is! The moon has a 29.5-day waxing and waning cycle, and many cultures have linked periods and this celestial body together.
There are many women today who have experienced this seemingly-mystical synchronization.
In Science Advances, researchers concluded from long-term data (32 years) that, at some point in their lives, the women's periods synced with the moon's light and gravity cycles. Participants who had cycles lasting longer than 27 days were "intermittently synchronized with cycles that affect the intensity of moonlight," as written in the statement.
In their conclusion, the researchers said that “menstrual cycles also aligned with the tropical month (the 27.32 days it takes the moon to pass twice through the same equinox point) 13.1% of the time in women 35 years and younger and 17.7% of the time in women over 35, suggesting that menstruation is also affected by shifts in the moon’s gravimetric forces."
While some scientists still believe that the link between periods and lunar cycles is nonexistent, there are many women today who have experienced this seemingly-mystical synchronization, and it shouldn't be brushed off so easily.
A more recent study published in 2022 recognizes that the sun and moon influence the behavior of animals, plants, and humans. "All matter on Earth, both live and inert, experiences the effects of the gravitational forces of the Sun and Moon expressed in the form of tides," says the study author, Cristiano de Mello Gallep. "The periodic oscillations exhibit two daily cycles and are modulated monthly and annually by the motions of these two celestial bodies. All organisms on the planet have evolved in this context. What we sought to show in the article is that gravitational tides are a perceptible and potent force that has always shaped the rhythmic activities of these organisms."
Closing Thoughts
Did you know that the origin of the word lunatic comes from the Latin word lunaticus, which translates to "of the moon" and "moonstruck"? Even Aristotle believed that lunacy and madness were caused by the moon. In addition, the words "menstruation" and "moon" come from the Greek and Latin words mene (moon) and mensis (month). Maybe these are all myths – or maybe the ancients held knowledge on the moon that modern society has forgotten about. | The man had an unpredictable and erratic sleep schedule, at times being completely unable to doze off to sleeping 12 hours a night. Avery studied his patient's patterns, and he was surprised to find that the engineer's struggles synced with the rise and fall of the oceans – which are heavily dictated by the moon. "There seemed to be high tides occurring during the night when the sleep duration was short," he explained.
The sun and moon influence the behavior of animals, plants, and humans.
Interestingly, another psychiatrist named Thomas Wehr found that the manic and depressive cycles in his patients with bipolar disorder correlated with the moon's gravitational cycles. So, the lack of sleep, possible hormonal changes, and the gravitational pull of the moon – in theory – could all influence people’s moods.
The Moon and Your Menstrual Cycle
A menstrual cycle, on average, lasts 28 to 29 days – that’s about how long the moon cycle is! The moon has a 29.5-day waxing and waning cycle, and many cultures have linked periods and this celestial body together.
There are many women today who have experienced this seemingly-mystical synchronization.
In Science Advances, researchers concluded from long-term data (32 years) that, at some point in their lives, the women's periods synced with the moon's light and gravity cycles. Participants who had cycles lasting longer than 27 days were "intermittently synchronized with cycles that affect the intensity of moonlight," as written in the statement.
| yes |
Selenology | Are lunar cycles linked to human behavior and health? | yes_statement | "lunar" "cycles" have an impact on "human" "behavior" and "health".. "human" "behavior" and "health" are influenced by "lunar" "cycles". | https://capitaleap.org/full-moon-lunacy-fact-or-fiction/ | Full Moon Lunacy – Fact or Fiction? - Capital EAP | Full Moon Lunacy – Fact or Fiction?
What is a lunatic? The dictionary definition of a “lunatic” is an insane person (no longer in technical use; now considered offensive); a person whose actions and manner are marked by extreme eccentricity or recklessness. The word “lunacy,” however, has an interesting origin: it comes from the Latin word for moon, luna. How did the Latin word for moon come to mean someone is, in essence, crazy? There’s an ancient and interesting history there.
I’m sure at some point in your life you’ve heard someone say something along the lines of, “Tonight’s a Full Moon. Watch out!” or you’ve encountered people who blame strange occurrences and behaviors on a Full Moon. The Full Moon has been associated with strange behavior and happenings for hundreds of years. A superstitious belief existed in Medieval Europe that the phases of the moon could cause insanity or madness in certain individuals. People who displayed this madness were often deemed to be lunaticus from the Latin for “moon-struck” because their moods and behaviors were thought to be influenced by the changing phases of the moon. The effect of the moon on people’s mental health has also been reported in ancient writings from the Assyrians and Babylonians. In 19th-century England, lawyers used the “guilty by reason of the Full Moon” defense to claim that their “lunatic” clients could not be held accountable for acting under the Moon’s influence. The Full Moon matters here because a “moon-struck” individual’s erratic behavior was thought to peak while the Moon was full. There is also plenty of anecdotal evidence from individuals working in the mental health, law enforcement, and medical fields that clients and patients display more intense symptomatology and extreme behavior during the Full Moon.
There have been scientific studies of a phenomenon known as the “lunar effect”. The lunar effect is “a real or imaginary correlation between specific stages of the roughly 29.5-day lunar cycle and behavior and physiological changes in living beings on Earth, including humans”. Some of the purported reasons for the effect have been the amount of moonlight, the presence of water in the human body (proposers of this belief state that the moon controls tides, so why not the water of the human body? This is a pseudo-scientific claim that ignores the vast difference in mass between the ocean and the water in a human body), our evolution and our innate fear of darkness, the increase of positive ions during a Full Moon (also a pseudo-scientific claim), and the monthly cycle of human menstruation as it may relate to the lunar cycles. By the late 1980s there were dozens of studies that looked at the effect of the lunar cycle, including the Full Moon, on human behavior. However, none of these studies found a significant correlation between the phases of the moon and the way people behave.
What possible evidence exists that the Full Moon has an effect on the way we think and behave? Two studies found evidence that people with mental disorders such as schizophrenia exhibited a roughly 1.8% increase in violent or aggressive episodes during the Full Moon, but a more recent study found no such correlation when these patients were compared with individuals without schizophrenia. Another analysis of mental-health data found a significant effect of Moon phases, but only on patients with schizophrenia. Such effects are not necessarily related directly to the appearance of the Moon. Disclaimer: Please keep in mind that individuals with schizophrenia are usually not violent and that this is a dangerous and negative stereotype. A study into epilepsy found a significant negative correlation between the mean number of seizures per day and the fraction of the Moon that is illuminated, but this correlation disappeared when the local clarity of the night sky was controlled for, suggesting that it is the brightness of the night sky, and not the lunar phase per se, that influences the occurrence of epileptic seizures in people with advanced photosensitive epilepsy.
Police officers have also reported increased crime during a Full Moon. Senior police officers in Brighton, UK, announced in June 2007 that they were planning to deploy more officers over the summer to counter trouble they believe is linked to the lunar cycle. This followed research by the Sussex Police force that concluded there was a rise in violent crime when the Moon was full. Police in Ohio and Kentucky have blamed temporary rises in crime on the Full Moon. In January 2008, New Zealand’s Justice Minister Annette King suggested that a spate of stabbings in the country could have been caused by the lunar cycle. A reported correlation between Moon phase and the number of homicides in Dade County was found, through later analysis, not to be supported by the data and to have been the result of inappropriate and misleading statistical procedures.
There have also been some studies on how the lunar cycles affect sleep in humans. Researchers at the University of Basel in Switzerland set up a sleep study that looked for the effect of lunar cycles on human sleep quality, and made sure to control for variables such as increased light at night. They found that the lunar cycles did have an effect on quality of sleep: during a Full Moon the study participants’ deep sleep decreased by 30%, the time it took them to fall asleep increased by five minutes, and total sleep duration decreased by 20 minutes. Some researchers have criticized this study for its small sample size (33 participants) and lack of control for age and sex. However, sleep plays a huge role in an individual’s mental health, sleep deprivation can have many negative side effects on mental health, and it would be interesting to see other studies research how lunar cycles affect human sleep.
Scientific studies have found no concrete evidence to suggest that the cycles of the moon or the Full Moon have a significant effect on human behavior and mood. If anything, it is the concept of a confirmation bias (when an individual seeks out information that supports their beliefs, and ignores the evidence that debunks their beliefs) that continues to reinforce the idea that the Full Moon plays a role in mental health. However, in the spirit of Halloween, it doesn’t hurt to look up at the Full Moon, which this month is Sunday the 19th, and imagine that spooky and wonderful things are about to occur. From all of us at Capital EAP, have a happy and safe Halloween! | The effect of the moon on people’s mental health has also been reported in ancient writings from the Assyrians and Babylonians. In 19th-century England, lawyers used the “guilty by reason of the Full Moon” defense to claim that their “lunatic” clients could not be held accountable for acting under the Moon’s influence. The significance of the Full Moon, however, is that the height of a “moon-struck” individual’s erratic behavior is during the presence of the Full Moon. There is also plenty of anecdotal evidence from individuals working in the mental health field, law enforcement, and medical field that clients and patients display more intense symptomology and extreme behavior during the Full Moon.
There have been scientific studies of a phenomenon known as the “lunar effect”. The lunar effect is “a real or imaginary correlation between specific stages of the roughly 29.5-day lunar cycle and behavior and physiological changes in living beings on Earth, including humans”. Some of the possible purported reasons for the effect have been the amount of moonlight, the presence of water in the human body (proposers of this belief state that the moon controls tides, so why not the water of the human body? This is a pseudo-scientific claim that does not take into account the differences of mass of water in the ocean compared to humans), our evolution and our innate fear of darkness, the increase of positive ions during a Full Moon (this is also a pseudo-scientific claim), and the monthly cycle of human menstruation as it may relate to the lunar cycles. By the late 1980’s there were dozens of studies that looked at the lunar effect, including the Full Moon, on human behavior. However, none of these studies have found a significant correlation between the phases of the moon and the way people behave.
What possible evidence exists that the Full Moon has an effect on the way we think and behave? | no |
Selenology | Are lunar cycles linked to human behavior and health? | yes_statement | "lunar" "cycles" have an impact on "human" "behavior" and "health".. "human" "behavior" and "health" are influenced by "lunar" "cycles". | https://www.petplace.com/article/dogs/pet-behavior-training/lunatic-dogs-are-dogs-affected-by-lunar-cycles | Lunatic Dogs: Are Dogs Affected by Lunar Cycles? | Lunatic Dogs: Are Dogs Affected by Lunar Cycles?
Dogs and Lunar Cycles: What’s the Influence?
The belief that lunar cycles can and do influence aspects of behavior has existed since Roman times. Do lunar cycles impact dogs? Despite recent attempts to establish the validity of this concept, none – bar one – has produced any convincing evidence to support it. The one affirmative study concluded that schizophrenics are more troubled at full moon than at other times. The results of this study were statistically significant. Most human studies, however, have concluded that there is little or no effect of the phases of the moon on behavior, so positive findings on this subject are, at best, few and far between. But what of our dogs? More attuned to their environment, as they are, might they be influenced, however slightly, by the phases of the moon? “Maybe,” is the answer to that question, though no one has successfully demonstrated this influence.
Evidence Of Lunar Cycle Influence on Other Species
Some species do show a few signs of influence of the lunar cycle:
Belted sandfish have significantly higher levels of an estrogen-like substance in their bloodstream at new and full moon.
Night-migrating skylarks are most active when the moon is in its waxing gibbous stage.
Galapagos fur seals dive less and deeper on moonlit nights than at new moon and show loss in body weight.
The predatory behavior of some mites is significantly and strikingly depressed around the full moon.
Coho salmon parr and smolts, maintained in a fixed 12-hour light / 12-hour dark cycle, show rhythmic changes in growth pattern on a 14–15 day cycle, suggesting that the lunar cycle acts as a “timegiver” for the synchronization of growth rate rhythms.
One type of mollusk can derive directional cues from the magnetic field of the earth and will orientate differently according to the lunar phase.
Tides, which are affected by lunar cycles, affect the behavior of many ocean creatures.
Why Dogs Might Behave Differently During Lunar Phases
Changes in nocturnal illumination causing changes in sleep/wake cycles and/or behavior of the animal itself, either directly or by altering the behavior of the animal’s prey/competitors.
Changes in the Earth’s magnetic field affecting internal magnetoreceptors.
Evidence Dogs and Cats May Be Affected by Lunar Cycles
There is no concrete evidence to this effect; however, anecdotes abound and some seem to have a reasonable explanation. For example, a client’s cat apparently sprayed a lot of urine when there was a full moon. The cat’s owners were lobster fishermen and were thus always highly aware of the weather and tides. Why might a cat spray more on a moonlit night? Possibly because of increased activity of outdoor critters facilitated by the moonlight or because of the intruders’ increased visibility to the indoor cat. Was the cat a lunatic? I don’t think so. It might have behaved differently during a full moon but there was a reasonable explanation for its increased agitation at this time.
Presumably the same might hold true for dogs. Coyotes and wolves are often depicted howling at the moon. One might imagine that hunting would be better on a moonlit night and that there would thus be increased activity of these wild canids under such conditions. Though they have excellent night vision, neither dogs, wolves, nor coyotes can see when there is no light, but full moonlight creates virtually daylight visibility for them. Howling is a long-distance communication, with members of a group signaling their location to each other by this means. Dogs may howl on moonlit nights because they sense increased movements of other animals and feel a greater need to communicate their position. Others might anticipate the thrill of the chase and become generally more restless or agitated.
Even we humans can be driven to act by primordial instincts that we barely appreciate or recognize. Dogs probably have even greater genetically imbued subliminal agendas than we do. Perhaps dogs that howled on moonlit nights remained in better contact with their pack members, and this behavior somehow conferred a survival benefit. The precise benefit may have been linked to increased activity of prey on moonlit nights, hence a greater need for strategic communication on these potentially fruitful nights. | Lunatic Dogs: Are Dogs Affected by Lunar Cycles?
Dogs and Lunar Cycles: What’s the Influence?
The belief that lunar cycles can and do influence aspects of behavior has existed since Roman times. Do lunar cycles impact dogs? Despite recent attempts to establish the validity of this concept, none – bar one – has produced any convincing evidence to support it. The one affirmative study concluded that schizophrenics are more troubled at full moon than at other times. The results of this study were statistically significant. Most human studies, however, have concluded that there is little or no effect of the phases of the moon on behavior, so positive findings on this subject is, at best, few and far between. But what of our dogs? More attuned to their environment, as they are, might they be influenced, however slightly, by the phases of the moon? “Maybe,” is the answer to that question though no one has successfully demonstrated this influence.
Evidence Of Lunar Cycle Influence on Other Species
Some species do show a few signs of influence of the lunar cycle:
Belted sandfish have significantly higher levels of an estrogen-like substance in their bloodstream at new and full moon.
Night-migrating skylarks are most active when the moon is in its waxing gibbous stage.
Galapagos fur seals dive less and deeper on moonlit nights than at new moon and show loss in body weight.
The predatory behavior of some mites is significantly and strikingly depressed around the full moon.
Coho salmon parr and smolts, maintained in a fixed 12-hour light / 12-hour dark cycle, show rhythmic changes in growth pattern on a 14–15 day cycle, suggesting that the lunar cycle acts as a “timegiver” for the synchronization of growth rate rhythms.
One type of mollusk can derive directional cues from the magnetic field of the earth and will orientate differently according to the lunar phase.
Tides, which are affected by lunar cycles, affect the behavior of many ocean creatures.
| no |
Selenology | Are lunar cycles linked to human behavior and health? | no_statement | "lunar" "cycles" have no effect on "human" "behavior" and "health".. there is no correlation between "lunar" "cycles" and "human" "behavior" and "health". | https://www.nbcnews.com/health/body-odd/full-moon-doesnt-make-you-crazy-study-confirms-flna1c7291816 | The full moon doesn't make you crazy, study confirms | The full moon doesn't make you crazy, study confirms
A nearly full moon sets over waters of Cook Inlet and a children's whale slippery slide just before sunrise on Tuesday, at Elderberry Park in downtown Anchorage, Alaska. Anchorage's next full moon is Wednesday. (Dan Joling / AP)
Nov. 28, 2012, 1:54 AM UTC
By Cari Nierenberg
When there's a full moon (like the one Wednesday), there's a tendency to blame some people's strange behavior on it. But a new Canadian study dismisses this popular belief and suggests that people with psychological problems do not show up at hospital emergency rooms in greater numbers during a full moon.
Researchers found little evidence that the moon's lunar cycles were linked to an increased incidence of mental health concerns.
In other words, the moon's behavior seems to have no effect on human behavior on planet Earth. Sure, the word "lunatic" derives from the Latin word "luna" for "moon," but science has found little connection between the moon and madness.
Even so, that won't stop some of us from thinking that lunar cycles can influence psychological symptoms. By one estimate, 80 percent of nurses and 64 percent of doctors who work in the emergency department believe it affects patients' mental health.
In the study, which will appear in the journal General Hospital Psychiatry, researchers reviewed medical records from two hospitals in Montreal over a three-year period. They looked at nearly 800 patients who came to the emergency room for unexplained chest pains, meaning doctors aren't sure what caused their heart trouble.
Researchers studied unexplained chest pains because people with this complaint often suffer from many psychological difficulties, including panic attacks, anxiety and mood disorders, and suicidal thoughts.
They also investigated this topic because the research team was already conducting a study on panic attacks and unexplained chest pains. And the emergency department personnel would often make comments, such as "This would be a good night for research because it's a full moon," says study researcher William Foldes-Busque, PhD, an assistant professor of psychology at the University of Quebec in Montreal, Canada. So, experimenters knew some health professionals already had this perception in their heads, but they wanted to see if the idea had any truth to it.
After patients completed a mental health evaluation, scientists then analyzed data to find out if their psychological symptoms revealed any seasonal patterns or lunar phase influence. Researchers were able to determine which one of the moon's four phases -- new moon, first quarter, full moon, or last quarter -- was present on the day each patient came to the emergency room.
The study found that the lunar cycle had no influence on the occurrence of psychological problems, such as panic attacks, anxiety and mood disorders, or suicidal thoughts. The only exception was a 32% drop in the frequency of anxiety disorders during the moon's last quarter.
"We don't know for sure why this happened," says Foldes-Busque.
Other studies have looked at admissions to psychiatric hospitals, calls to crisis hotlines, or homicide rates, and also failed to turn up a link between the moon's illumination and behavior changes. But if you talk to health professionals or police officers, they may think there's more nuttiness and craziness during a full moon.
It's possible that people are more prone to notice -- and remember -- a full moon, so they may link any strange behaviors they see that day to it. And perhaps when people act odd during other times of the month, they're just considered weird -- no further explanations given.
Foldes-Busque says it's possible the moon affects mental health in other ways. "I've heard that the full moon may affect sleep, mostly because of increased luminosity," he says.
What's his advice for today's full moon? "Don't do anything special or change anything because of it." | Even so, that won't stop some of us from thinking that lunar cycles can influence psychological symptoms. By one estimate, 80 percent of nurses and 64 percent of doctors who work in the emergency department believe it affects patients' mental health.
In the study, which will appear in the journal General Hospital Psychiatry, researchers reviewed medical records from two hospitals in Montreal over a three-year period. They looked at nearly 800 patients who came to the emergency room for unexplained chest pains, meaning doctors aren't sure what caused their heart trouble.
Researchers studied unexplained chest pains because people with this complaint often suffer from many psychological difficulties, including panic attacks, anxiety and mood disorders, and suicidal thoughts.
They also investigated this topic because the research team was already conducting a study on panic attacks and unexplained chest pains. And the emergency department personnel would often make comments, such as "This would be a good night for research because it's a full moon," says study researcher William Foldes-Busque, PhD, an assistant professor of psychology at the University of Quebec in Montreal, Canada. So, experimenters knew some health professionals already had this perception in their heads, but they wanted to see if the idea had any truth to it.
After patients completed a mental health evaluation, scientists then analyzed data to find out if their psychological symptoms revealed any seasonal patterns or lunar phase influence. Researchers were able to determine which one of the moon's four phases -- new moon, first quarter, full moon, or last quarter -- was present on the day each patient came to the emergency room.
The study found that the lunar cycle had no influence on the occurrence of psychological problems, such as panic attacks, anxiety and mood disorders, or suicidal thoughts. The only exception was a 32% drop in the frequency of anxiety disorders during the moon's last quarter.
"We don't know for sure why this happened," says Foldes-Busque.
| no |
Selenology | Are lunar cycles linked to human behavior and health? | no_statement | "lunar" "cycles" have no effect on "human" "behavior" and "health".. there is no correlation between "lunar" "cycles" and "human" "behavior" and "health". | https://www.smw.ch/index.php/smw/article/download/2616/4138?inline=1 | Is it the moon? Effects of the lunar cycle on psychiatric admissions ... | Summary
BACKGROUND
There is an ongoing debate concerning the connection between lunar cycle and psychiatric illness.
AIMS OF THE STUDY
The purpose of the present study was to evaluate the rates of admission to and discharge from psychiatric inpatient treatment, as well as the length of stay, in relation to the lunar cycle, including 20 different categories of phases of the moon.
METHODS
The data of 17,966 cases of people treated in an inpatient setting were analysed. Routine clinical data and data about admission and discharge were used. The lunar calendar was obtained from the website of the US Naval Observatory and was used to calculate the dates of the full moon according to the geographic location of the clinics. The clinics are located in the Canton Grisons in Switzerland. The following phases of the moon throughout the lunar cycle were defined: (a) full moon, (b) quarter waxing moon, (c) new moon, and (d) quarter waning moon. In addition, we coded one day and two days preceding every lunar phase as well as the two days following the respective phases of the moon.
RESULTS
The lunar cycles showed no connection with either admission or discharge rates of psychiatric inpatients, nor was there a relationship with the length of stay.
Introduction
The belief that the moon influences human lives, emotions, and welfare is deeply anchored in human history, dating back to the ancient cultures of Assyria, Babylonia and Egypt [1, 2]. Medieval European mythology and superstition held that humans were transformed into werewolves or vampires under the influence of a full moon [3]. The antiquated and potentially offensive colloquial word “lunatic” derives from the Latin lunaticus (originally derived from Luna – moon), a term that originally referred mainly to epilepsy and madness, because those diseases were at one time thought to be caused by the moon [1].
To date, there is an ongoing debate concerning the connection between lunar cycle and psychiatric illness [4, 5]. The literature presents conflicting results, with the majority of studies showing no relationship between lunar cycle and either psychiatric admissions or emergency evaluations [6–9], psychiatric inpatient admissions [10], use of community psychiatry services [11, 12], violent behaviour [13–17], suicide [18, 19], or sleep disturbances [20, 21]. However, some studies do show relationships between the lunar cycle and various psychiatric phenomena. For example, in a study of 17 healthy individuals, Cajochen et al. [22] demonstrated under laboratory conditions that around the full moon, electroencephalogram (EEG) delta activity during non-REM sleep (an indicator of deep sleep) decreased by 30%, time to fall asleep increased by 5 minutes, and EEG-assessed total sleep duration was reduced by 20 minutes. These results presented a possible explanation for morning fatigue associated with a full moon [23]. In a prospective study involving 91 psychiatric inpatients, Calver et al. [4] observed an increase of violent and acute behavioural disturbances during the full moon among patients with severe forms of behavioural disturbance at admission. Another study, focused on gender differences regarding distress phone calls, showed that distress calls by women were more strongly linked to the lunar month than were those by men [24]. Family practitioners have also found a correlation between general practice consultation and the lunar cycle [25]. A large prospective case series of 2281 patients similarly showed an increase in frequency of outpatient psychiatric visits for non-affective psychosis during the full moon [26].
Parmar et al. [5] highlighted the importance of a clear operationalisation of the stages of the lunar cycle for scientific study; in their work, they observed significantly different outcomes regarding psychiatric emergency department presentations during different phases of the moon. Other authors have postulated associations, not only with the full moon, but also different lunar phases, showing sudden changes on the day of the full moon including crisis calls, suicide, and psychiatric admission rates as well as significant differences between the quarter waning and quarter waxing moon, including increases in homicides and crisis calls [27]. The authors presented positive findings regarding the relationship between lunar cycles and psychopathology, violence and admission to psychiatric institutions, highlighting the importance of using a more detailed approach than just the full moon. In addition, they suggest analyses should be stratified by gender and diagnostic categories.
Aims of the study
Based on the literature, the present study examined whether different stages of the lunar cycle have an impact on psychiatric patient admissions, discharges and length of stay in psychiatric clinics, as well as possible moderating influences of the lunar cycle on the relationship of these variables with psychiatric diagnosis and gender. We hypothesised that the phases of the lunar cycle would show no relationship to admission to or discharge from psychiatric inpatient treatment, nor have any bearing on the length of stay.
Materials and methods
The present study was conducted in the Cantons Grisons, a rural alpine area located in south-eastern Switzerland. The study’s catchment area is characterised by the following features: it is geographically the largest canton of Switzerland with 7105 square kilometres; there are approximately 200,000 inhabitants; it lacks any major cities with populations over 50,000; the predominant business sectors include the tourism industry, agriculture and forestry. The Canton Grisons is entirely mountainous, located in the Swiss Alps (41% of the population lives at 3000 feet above sea level or higher) [28–30]. The clinics studied were operated by the Psychiatrische Dienste Graubünden (PDGR; English: Psychiatric Services of Grisons). PDGR is a state-owned but independently run network of inpatient and outpatient services, including two inpatient psychiatric clinics (clinic Waldhaus and clinic Beverin) with a combined total of 15 wards and 230 beds, including general acute psychiatry wards as well as specialised wards for the treatment of substance use disorders, personality disorders, forensic populations and geriatric patients. Both clinics provide psychiatric and physical healthcare, offer community mental health services, are legally obliged to provide healthcare to all individuals from the Canton Grisons, and are part of a single-tier psychiatric care system. With the exception of a small, privately funded clinic for the treatment of work-related burnout, PDGR is the sole provider of psychiatric services in Canton Grisons. Therefore, all psychiatric emergency admissions are sent to PDGR, which has the Cantonal mandate for psychiatric healthcare delivery for the region.
Patients admitted at least once from January 2005 to December 2015 were included. Data were documented as part of routine treatment and were anonymised before analysis. Because the study relied exclusively on anonymised, routine clinical data we did not require approval from the local ethics committee. The current study was performed in accordance with all national and international legal regulations and with the Declaration of Helsinki (7th revision, 2013).
Sample population
The inclusion criterion for the study was admission to psychiatric inpatient treatment on one of the 15 wards from 2005 to 2015. Cases admitted before 2005 or discharged after 2015 were excluded. No further inclusion or exclusion criteria were defined, to ensure a naturalistic sample. The resulting dataset contained a total of 17,966 cases.
Measures
We used structured routine data for sociodemographic information, clinical information and data about admission and discharge. Diagnostic classification was based on the International Classification of Diseases, 10th revision (ICD-10) [31]. All diagnoses were made by a board-certified psychiatrist. The analyses included only the main diagnosis of the patient; comorbid diagnoses were not included.
The lunar calendar was obtained from the website of the US Naval Observatory [32] and was used to calculate the dates of the full moon according to the geographic location of the clinics (coordinates: 46°51′N 9°32′E). We chose the city of Chur (capital of the Canton Grisons) as our reference point. The two clinics are within a 20 km radius and therefore affected by the same lunar cycle. We defined the following phases of the moon throughout the lunar cycle: (a) full moon, (b) quarter waxing moon, (c) new moon, and (d) quarter waning moon. In addition, we coded one day and two days preceding every lunar phase as well as the two days following the respective phases of the moon. This adds up to a total of 20 different lunar phases. Admissions were defined as the day the patient physically entered one of the two clinics. Discharges were defined analogously as the day the patient physically left the clinic. The length of stay was consequently characterised as the time between these two days, including the day of admission and the day the person was discharged.
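To make the 20-category coding concrete, the sketch below shows one way such a scheme could be implemented. It is an illustration only, not the authors' code: the phase dates are placeholders standing in for the US Naval Observatory tables, and the function and label names are invented.

```python
from datetime import date
from typing import Optional

# Placeholder phase-event dates; in practice these come from US Naval Observatory tables.
PHASE_EVENTS = {
    date(2015, 1, 5): "full_moon",
    date(2015, 1, 13): "quarter_waning_moon",
    date(2015, 1, 20): "new_moon",
    date(2015, 1, 27): "quarter_waxing_moon",
}

def lunar_category(day: date) -> Optional[str]:
    """Map a calendar day to one of the 20 categories: each phase day,
    the one and two days preceding it, and the two days following it."""
    for event_day, phase in PHASE_EVENTS.items():
        offset = (day - event_day).days
        if offset == 0:
            return phase                      # e.g. "full_moon"
        if offset in (-2, -1, 1, 2):
            sign = "+" if offset > 0 else ""
            return f"{phase}{sign}{offset}"   # e.g. "full_moon-2", "full_moon+1"
    return None                               # day outside all 20 categories

print(lunar_category(date(2015, 1, 4)))   # full_moon-1
print(lunar_category(date(2015, 1, 7)))   # full_moon+2
```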
Statistical methods
The present study relied on a retrospective observational design, using a comparative cross-sectional analysis.
Descriptive analysis
We used a Welch t-test to test for differences between men and women in the continuous variables, assuming unequal variances of the tested variables. To analyse gender differences in the categorical variables we used chi-squared tests. A significance level of p <0.05 was assumed.
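For illustration only, the two kinds of descriptive comparisons could be run in Python as follows; the input file and column names are invented, and this is not the authors' SPSS syntax.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("cases.csv")  # hypothetical file with one row per case

# Welch t-test (unequal variances assumed) for a continuous variable such as age.
age_women = df.loc[df["gender"] == "female", "age"]
age_men = df.loc[df["gender"] == "male", "age"]
t_stat, p_age = stats.ttest_ind(age_women, age_men, equal_var=False)

# Chi-squared test for a categorical variable such as marital status.
table = pd.crosstab(df["gender"], df["marital_status"])
chi2, p_marital, dof, _ = stats.chi2_contingency(table)

print(f"Welch t-test for age: t = {t_stat:.2f}, p = {p_age:.3f}")
print(f"Chi-squared for marital status: chi2 = {chi2:.2f}, df = {dof}, p = {p_marital:.3f}")
```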
Admission and discharge
We constructed two statistical models to determine whether (1) the distribution of the number of admissions follows a hypothesised Poisson distribution independently of the lunar phases, and (2) the distribution of the number of discharges follows a hypothesised Poisson distribution independently of the lunar phases. Based on the mean and variance of the daily counts of admissions and discharges, we assessed whether a Poisson distribution was plausible. We checked the Poisson assumption with a quantile-quantile (q-q) plot, with departures from a straight line indicating a violation of the assumption. In order to test the models, we used a chi-square goodness-of-fit test of the hypothesis that admissions and discharges follow a Poisson distribution based on the natural variance, independently of the lunar phase on the day of admission or discharge.
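The flavour of this goodness-of-fit test can be sketched as follows. This is not the authors' code: the counts for the four exact phase days are taken from the Results section, the remaining 16 category counts and the day totals are invented, and expected counts are simply made proportional to the number of days in each category, as a constant-rate Poisson process implies.

```python
import numpy as np
from scipy import stats

# Observed admissions per lunar-phase category (20 categories).
# The first four values are the exact-phase-day counts reported in the Results;
# the remaining 16 are invented for illustration.
observed = np.array([626, 640, 618, 619] + [620] * 16)

# Number of calendar days falling into each category over the study period
# (invented here; equal by construction in this illustration).
days_per_category = np.array([136] * 20)

# Under a constant-rate Poisson process, expected counts are proportional
# to the number of days in each category.
expected = observed.sum() * days_per_category / days_per_category.sum()

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.3f}, df = {len(observed) - 1}, p = {p:.3f}")
```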
To test the strength of association, Goodman and Kruskal's tau was computed to determine whether the lunar phase at admission or at discharge was associated with the type of main diagnosis of the individual.
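Goodman and Kruskal's tau is not part of SciPy, but it can be computed directly from a contingency table. The helper below is my own sketch of the standard proportional-reduction-in-error formula, not the authors' code, and the example table is invented.

```python
import numpy as np

def goodman_kruskal_tau(table: np.ndarray) -> float:
    """Tau for predicting the column variable from the row variable of a
    contingency table (proportional reduction in prediction error)."""
    p = table / table.sum()
    col_marginals = p.sum(axis=0)
    row_marginals = p.sum(axis=1)
    error_marginal = 1.0 - np.sum(col_marginals ** 2)
    error_conditional = 1.0 - np.sum((p ** 2).sum(axis=1) / row_marginals)
    return (error_marginal - error_conditional) / error_marginal

# Invented example: rows = lunar phase category at admission, columns = diagnosis group.
example = np.array([[60.0, 80.0, 40.0],
                    [55.0, 85.0, 45.0],
                    [58.0, 82.0, 41.0]])
print(round(goodman_kruskal_tau(example), 3))
```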
Length of stay
The impact of the lunar phases at admission on the length of stay was analysed with a linear regression model adjusting for gender, age, marital status, treating clinic, citizenship, highest level of education and main diagnosis. The covariates, chosen based on the literature and experience, were variables that might influence the length of stay of inpatient psychiatric treatment. We tested the model assumptions, including an analysis of linearity as assessed by partial regression plots and a plot of studentised residuals against the predicted values, independence of residuals, using a Durbin-Watson statistic, and homoscedasticity, as assessed by visual inspection of a plot of studentised residuals versus unstandardised predicted values. There was no evidence of multicollinearity.
All statistical analyses were performed using IBM SPSS Statistics software, version 24 (SPSS, 2010).
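For readers who want to reproduce the length-of-stay model outside SPSS, a rough Python equivalent might look like the sketch below; the data file and column names are invented, and the C() terms simply mark categorical predictors.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("cases.csv")  # hypothetical file with one row per case

# Length of stay regressed on lunar phase at admission plus the covariates
# named in the text (gender, age, marital status, clinic, citizenship,
# education, main diagnosis).
model = smf.ols(
    "length_of_stay ~ C(lunar_phase) + C(gender) + age + C(marital_status)"
    " + C(clinic) + C(citizenship) + C(education) + C(main_diagnosis)",
    data=df,
).fit()

print(model.summary())                                # coefficients and standard errors
print("Durbin-Watson:", durbin_watson(model.resid))   # check independence of residuals
```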
Results
Sample population
In total, 17,966 cases were included. As presented in table 1, female participants were significantly older than the men and more likely to be of Swiss citizenship. Men were more likely than women to have finished vocational school, technical school and college or university.
Table 1 Sociodemographic patient characteristics.

| | Females | Males | p-value |
|---|---|---|---|
| Number of cases | 8852 | 9114 | |
| Age (years), mean (SD) | 47.0 (19.2) | 44.3 (17.4) | <0.001 |
| Marital status, n (%) | | | <0.001 |
| – Single | 3202 (36.2) | 4362 (47.9) | |
| – Married | 2505 (28.3) | 2250 (24.7) | |
| – Separated/divorced | 1716 (19.4) | 1746 (19.2) | |
| – Widowed | 956 (10.8) | 232 (2.5) | |
| – Unknown | 473 (5.3) | 524 (5.7) | |
| Citizenship, n (%) | | | <0.001 |
| – Switzerland | 7492 (84.6) | 7355 (80.7) | |
| – Other countries | 1068 (12.1) | 1446 (15.9) | |
| – Unknown | 292 (3.3) | 313 (3.4) | |
| Highest level of education (%) | | | <0.001 |
| – Elementary school | 1772 (20.0) | 1632 (17.9) | |
| – High school | 132 (1.5) | 133 (1.5) | |
| – Vocational school | 2901 (32.8) | 3563 (39.1) | |
| – Technical school | 851 (9.6) | 939 (10.3) | |
| – College or university | 172 (1.9) | 338 (3.7) | |
| – No school | 673 (7.6) | 645 (7.1) | |
| – Unknown | 2341 (26.5) | 1859 (20.4) | |

SD = standard deviation
The clinical variables of the population are presented in table 2. The average length of stay was 38 days, with no relevant gender difference. Admitting diagnosis varied significantly by gender, with affective disorders being the most common diagnosis for women, followed by substance use and schizophrenia spectrum disorders. In contrast, among men substance abuse was the most common diagnosis, followed by affective and schizophrenia spectrum disorders.
Table 2 Clinical patient characteristics.

| | Females | Males | p-value |
|---|---|---|---|
| Number of cases | 8852 | 9114 | |
| Length of stay (days), mean (SD) | 38.0 (55.4) | 38.6 (88.5) | 0.617 |
| Main diagnosis, n (%) | | | <0.001 |
| – Organic mental disorder | 465 (5.3) | 430 (4.7) | |
| – Substance use disorder | 1515 (17.1) | 3317 (36.4) | |
| – Schizophrenia spectrum disorder | 1420 (16.0) | 1506 (16.5) | |
| – Affective disorder | 2976 (33.6) | 1984 (21.8) | |
| – Neurotic, stress related, somatoform disorder | 1047 (11.8) | 878 (9.6) | |
| – Behavioural syndromes with physiological disturbances | 122 (1.4) | 13 (0.1) | |
| – Personality disorder | 714 (8.1) | 354 (3.9) | |
| – Mental retardation | 47 (0.5) | 53 (0.6) | |
| – Disorders of psychological development | 4 (0.0) | 7 (0.1) | |
| – Childhood onset disorders | 59 (0.7) | 75 (0.8) | |
| – Non-psychiatric disorder | 483 (5.5) | 497 (5.5) | |

SD = standard deviation
Admissions and discharges
Of the 17,966 cases included in the study, 626 were admitted on a full moon, 640 on a new moon, 618 on a quarter waxing moon, and 619 on a quarter waning moon. Regarding the discharges, 586 cases were discharged on a full moon, 580 on a new moon, 619 on a quarter waxing moon, and 577 on a quarter waning moon. The numbers of admissions and discharges stratified by lunar phase are shown in figure 1.
Figure 1 Number of admissions and discharges stratified by lunar phases.
A chi-square goodness-of-fit test was used to determine whether admissions and discharges deviated from the natural variation expected under a Poisson distribution because of a possible influence of the moon. For the number of admissions, the minimum expected frequency was 616. The chi-square goodness-of-fit test indicated that the number of admitted participants was not statistically significantly different from the expected Poisson distribution of admissions (χ2 = 22.913, degrees of freedom [df] = 19, p = 0.241). For discharges, the minimum expected frequency was 607.4. The chi-square goodness-of-fit test indicated that the number of discharged participants was not statistically significantly different from the expected Poisson distribution of discharges (χ2 = 15.208, df = 19, p = 0.709).
In figure 2 the admissions according to the lunar phases are presented stratified by main diagnosis (represented by ICD-10 code groups). Goodman and Kruskal's tau was computed to determine whether the lunar phase at admission or at discharge was associated with the type of main diagnosis of the individual. Goodman and Kruskal's tau was 0.015, a statistically nonsignificant reduction in the proportion of prediction errors when the category of main diagnosis was used as a predictor of the lunar phase at admission (p = 0.141). For the lunar phases at discharge the result was likewise nonsignificant (Goodman and Kruskal's tau = 0.015, p = 0.495).
Figure 2 Number of admissions and discharges stratified by lunar phases and diagnosis.
There was no significant association between lunar cycle and length of stay. Regression coefficients and standard errors can be found in table 3.
Table 3 Multiple linear regression analysis: adjusted impact of the moon phases at admission on the length of stay.
Discussion
The results of this 10-year naturalistic observational study show that lunar cycles have no connection with the number of admissions or discharges of psychiatric inpatients. The large sample size and naturalistic study design, the inclusion of two psychiatric clinics, the addition of all diagnostic categories, and the availability of data over an extended observation period all enhance the generalisability of our findings. Furthermore, the two clinics were legally obliged to provide care to all individuals from their catchment area (Canton Grisons), which reduced the role of a possible admission bias (e.g., patients with certain mental illnesses or violent behaviour being preferentially admitted to adjacent hospitals) that could potentially be related to the phases of the moon [33]. Although the quality of data from prospective studies is generally superior to that from observational studies, evidence suggests that routine data can be of a quality suitable for these types of analyses [34]. The quality of data from the participating clinics was warranted by Swiss board-certified psychiatrists, centralised data management and systematic quality control.
Our findings with respect to admissions related to full moon are in line with those of Gorvin and Roberts [10], who did not find a higher number of psychiatric hospital admission during full moon. However, our results extend these findings, showing that not only does full moon have no association with admissions, but also all the other phases during a lunar cycle, such as quarter waning moon, new moon, and quarter waxing moon fail to show any relationship with admission rates to psychiatric inpatient treatment. On the contrary, the degree of consistency in admission rates across the various lunar phases is striking. These results stand in contrast to the statements of Parmar et al. [5], who proposed a more detailed analysis of phases of the moon, possibly demonstrating an impact on patients of lunar cycles. Also, we were not able to show any differences related to the gender of the participants, which contrasts with the results of Kollerstrom and Steffert [24]. Despite several authors describing possible connections between lunar cycle and different diagnostic categories, our analysis did not show variations regarding admission rates if stratified by diagnosis [22, 27, 35]. To our knowledge, no previous study has analysed the impact of the lunar phases on discharge rates from psychiatric inpatient treatment. Analogously to the results from inpatient admission, no connection with lunar phases was present in the discharge data. Furthermore, the analysis stratified by gender or diagnostic category did not result in any significant findings. An additional analysis developed a regression model examining the influence of various factors, including lunar phases, on length of stay in a psychiatric inpatient hospital. However, the moon did not show a relationship to the length of stay.
Despite the widely-held popular belief that the moon effects peoples’ mental health and subsequently psychiatric treatment, our study was unable to support any connection between any phase of the lunar cycle and either admission or discharge rates, nor with length of stay, at psychiatric inpatient clinics. The belief that the moon affects our lives and especially our emotions has existed for thousands of years and appears to be part of a collective, lay wisdom. Such beliefs seem largely impervious to the fact that a great deal of research, including the present study, has failed to support them. Research results as presented in this study need to be used for de-stigmatisation of mental illness in society, which continues to be a major issue and is also driven by the use of language as discussed in the introduction section. The reasons for the persistence of such beliefs may not be found in a rational understanding but more in a primal, emotional desire to believe that we are not solely responsible for our own behaviours, rather that some superior force also influences our actions and feelings.
Limitations
The data from the two clinics that are part of the PDGR network represent non-profit, government owned hospitals serving catchment areas ranging from rural to (minor) urban areas. They are therefore representative of a part of the Swiss healthcare system. However, our findings might not be representative of settings such as university hospitals or private clinics, which might be able to choose which patients to treat. Since routine data were used in the current study, some clinically desirable information was not available for analyses, including information on the patients’ attitudes concerning lunar cycles, medication, and course of treatment. Data concerning educational attainment was missing for almost 25% of the sample, a factor which could arguably vary with beliefs about the effects of the moon on mental health. The study had a retrospective design, limiting the information to correlations between the moon and the analysed variables and precluding any causal inferences. Also, confounding variables not analysed could affect the relationship between the lunar cycles and the length of stay.
Conclusion
We believe that mental health providers should be aware of such popular beliefs while at the same time ensuring that they are aware of the scientific evidence supporting and refuting such beliefs, in order to ensure both that treatment remains evidence-based, and that patients are increasingly socialised in, and increasingly come to expect, an evidence-based approach to mental health care.
Acknowledgements
The authors would like to thank Mrs. Doris Rizzi, the secretary of the research department at Psychiatrische Dienste Graubünden, and Mr Romedo Meier, for their great help provided during data collection.
Notes
Disclosure statement
No financial support and no other potential conflict of interest relevant to this article was reported. | RESULTS
The lunar cycles showed no connection with either admission or discharge rates of psychiatric inpatients, nor was there a relationship with the length of stay.
Introduction
The belief that the moon influences human lives, emotions, and welfare is deeply anchored in human history, dating back to the ancient cultures of Assyria, Babylonia and Egypt [1, 2]. Medieval European mythology and superstition held that humans were transformed into werewolves or vampires under the influence of a full moon [3]. The antiquated and potentially offensive colloquial word “lunatic” derives from the Latin lunaticus (originally derived from Luna – moon), a term that originally referred mainly to epilepsy and madness, because those diseases were at one time thought to be caused by the moon [1].
To date, there is an ongoing debate concerning the connection between lunar cycle and psychiatric illness [4, 5]. The literature presents conflicting results, with the majority of studies showing no relationship between lunar cycle and either psychiatric admissions or emergency evaluations [6–9], psychiatric inpatient admissions [10], use of community psychiatry services [11, 12], violent behaviour [13–17], suicide [18, 19], or sleep disturbances [20, 21]. However, some studies do show relationships between the lunar cycle various psychiatric phenomena. For example, in a study of 17 healthy individuals Cajochen et al. [22] demonstrated under laboratory conditions that around the full moon, electroencephalogram (EEG) delta activity during non-REM sleep (an indicator of deep sleep) decreased by 30%, time to fall asleep increased by 5 minutes, and EEG-assessed total sleep duration was reduced by 20 minutes. These results presented a possible explanation for morning fatigue associated with a full moon [23]. In a prospective study involving 91 psychiatric inpatients, Calver et al. | no |
Meteoritics | Are meteorites hot when they hit the Earth? | yes_statement | "meteorites" are "hot" when they "hit" the earth.. when "meteorites" "hit" the earth, they are "hot". | https://science.nasa.gov/science-news/science-at-nasa/2001/ast27jul_1 | Meteorites Don't Pop Corn | Science Mission Directorate | Disclaimer: This page is kept for historical purposes, but the content is no longer actively updated. For more on NASA Science, visit https://science.nasa.gov.
Published:
Jul 27, 2001
Meteorites Don't Pop Corn
A fireball that dazzled Americans on July 23rd was a piece of a comet or an asteroid, scientists say. Contrary to reports, however, it probably didn't scorch any cornfields.
July 27, 2001: Every few weeks, somewhere on Earth, a fiery light streaks across the sky casting strange shadows and unleashing sonic booms. Astronomers call them fireballs or "bolides." They're unusually bright meteors caused by small asteroids that disintegrate in our planet's atmosphere. Often they explode high in the air like kilotons of TNT -- blasting tiny meteorites far and wide.
It happens all the time, say experts, but usually no one notices. We live on a big planet, after all, and very little of Earth's surface is inhabited by people. Most debris from space falls unseen over oceans or sparsely-populated land areas -- or during times when sky watchers simply aren't paying attention.
Above: Artist Duane Hilton created this rendition of the July 23rd fireball streaking over a Pennsylvania farmhouse.
Last Monday was different, however. On July 23rd hundreds of thousands of people were looking when, unexpected, a fireball appeared over the US east coast. It was 6:15 p.m. local time. The Sun hadn't set, but onlookers had no trouble seeing the fireball in broad daylight. Witnesses from Canada to Virginia agreed that the colorful fireball was brighter than a Full Moon, and some saw a smoky trail lingering long after it had passed.
"Contrary to some reports this was not a meteor shower," says Donald Yeomans, manager of NASA's Near Earth Object program at the Jet Propulsion Laboratory. Meteor showers happen when Earth passes through the debris trails of comets and countless thousands of cosmic dust specks burn up in Earth's atmosphere. At the heart of Monday's fireball, however, was a solitary object -- perhaps a small asteroid or a piece of a comet.
Hundreds of eyewitness reports collected by the American Meteor Society establish that the fireball was moving on an east-west trajectory that carried it directly over the state of Pennsylvania. "It was traveling perhaps 15 km/s (34,000 mph) or faster when it exploded in the atmosphere with the force of about 3 kilotons of TNT," says Bill Cooke, a member of the Space Environments team at the Marshall Space Flight Center. If this was a rocky asteroid, then it probably measured between 1 and 2 meters across and weighed 30 or so metric tons.
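As a back-of-the-envelope check (my own arithmetic, not a figure from the article), kinetic energy is ½mv² and one kiloton of TNT is defined as 4.184 × 10^12 joules, so a 30-tonne body at the quoted lower-bound speed of 15 km/s carries a bit under one kiloton; reaching roughly 3 kilotons requires a somewhat faster entry or a heavier body, which is consistent with the "or faster" and "30 or so" hedges in the estimates.

```python
# Back-of-the-envelope kinetic-energy check (illustrative, not from the article).
KT_TNT_IN_JOULES = 4.184e12  # definition of one kiloton of TNT

def impact_energy_kt(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy 0.5 * m * v**2, expressed in kilotons of TNT."""
    return 0.5 * mass_kg * speed_m_s ** 2 / KT_TNT_IN_JOULES

print(impact_energy_kt(30_000, 15_000))  # ~0.8 kt at 30 t and 15 km/s
print(impact_energy_kt(30_000, 30_000))  # ~3.2 kt if the entry speed were ~30 km/s
```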
"The pressure wave from the airburst shattered some windows in towns west of Williamsport," Cooke continued. "Breaking glass requires an overpressure of about 5 millibars (0.5 kPa), which means that those homes were within 100 km of the explosion."
No one knows if any sizable fragments of the object survived the blast. But if they did, the meteorites probably landed in the wooded, hilly terrain west of Williamsport -- perhaps in one of the many state parks of that area.
Left: Jim Richardson of the American Meteor Society created this July 23, 2001, fireball sighting map. Red stars denote witness locations; the tail on each star points in the direction that the fireball was spotted. Blue stars denote sonic booms. The green rectangle and arrows indicate the approximate trajectory of the fireball. [more]
Says Bob Young of the State Museum of Pennsylvania: "One of our planetarium staff was told that the little northern Pennsylvania town of Trout Run was destroyed by the meteor! The witness was about 100 miles away when she heard the tale from her hairdresser." Other reports credit the fireball for scorching a cornfield in Lycoming County, PA, and littering the countryside with burnt rocks.
In fact, says Yeomans, it's unlikely that any substantial meteorites reached the ground. Atmospheric friction would have reduced most of the fragments to dust. Even if fragments did survive, he added, they wouldn't burn cornfields because --despite their fiery appearance in the sky-- freshly-fallen meteorites are not hot.
Objects from space that enter Earth's atmosphere are -- like space itself -- very cold and they remain so even as they blaze a hot-looking trail toward the ground. "The outer layers are warmed by atmospheric friction, and little bits flake away as they descend," explains Yeomans. This is called ablation and it's a wonderful way to remove heat. (Some commercial heat shields use ablation to keep spacecraft cool when they re-enter Earth's atmosphere.) "Rocky asteroids are poor conductors of heat," Yeomans continued. "Their central regions remain cool even as the hot outer layers are ablated away."
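One way to see why the interior stays cool: heat can only conduct a few millimeters into rock during the few seconds of luminous flight. A minimal sketch, assuming a thermal diffusivity of about 1e-6 m^2/s for stony material (a typical handbook value, not a figure from this article):

import math

ALPHA_STONE = 1e-6   # m^2/s, assumed thermal diffusivity of typical rock

def heated_depth_mm(flight_seconds):
    """Characteristic thermal penetration depth sqrt(alpha * t), in millimeters."""
    return math.sqrt(ALPHA_STONE * flight_seconds) * 1000

for t in (5, 10, 30):
    print(f"{t:>2} s of heating -> ~{heated_depth_mm(t):.0f} mm of warmed rock")
# Only a few millimeters ever warm up, and much of that thin skin is ablated
# away, which is why a freshly fallen stone is cold inside.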
Right: This ablative heat shield protected a space probe in 1995 as it made a high-speed plunge from NASA's Galileo spacecraft into the atmosphere of Jupiter. Meteorites racing through Earth's atmosphere likewise shed heat via ablation. [more information]
Asteroids move faster than the speed of sound in Earth's atmosphere. As a result, the air pressure ahead of a fireball can substantially exceed the air pressure behind it. "The difference can be so great that it actually crushes the object," says Cooke. "This is probably what triggered the airburst over Pennsylvania."
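The crushing load Cooke describes can be estimated as the ram pressure on the leading face, roughly the air density times the speed squared. A sketch with assumed handbook air densities, compared against the 1-10 MPa crushing strength often assumed for stony meteoroids (both assumptions, not figures from the article):

SPEED = 15_000.0   # m/s, the entry speed quoted above
AIR_DENSITY = {40: 0.004, 30: 0.018, 20: 0.089}   # kg/m^3, rough values by altitude (km)

for altitude_km, rho in AIR_DENSITY.items():
    ram_mpa = rho * SPEED ** 2 / 1e6   # rho * v^2, in megapascals
    print(f"{altitude_km} km altitude: ram pressure ~{ram_mpa:.0f} MPa")
# A stone with a crushing strength of a few MPa fails somewhere around
# 20-35 km up, i.e. a high-altitude airburst like the one described above.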
Small fragments from such explosions lose much of their kinetic energy as they heat the atmosphere via friction. They quickly decelerate and become sub-sonic. Dusty debris from airbursts (and ablation) can linger in the atmosphere for weeks or months, carried around the globe by winds. Walnut- to baseball-sized fragments might hit the ground right away at a few hundred kilometers per hour.
"Small rocky meteorites found immediately after landing will not be hot to the touch," says Yeomans. They will not scorch the ground or start fires. On the other hand, notes Cooke, "if we got hit by something large enough to leave a crater, the fragments might be very hot indeed." A stony meteorite larger than 50 meters might be able to punch through the atmosphere and do such damage -- but that's far larger than the object that flew over Pennsylvania.
No one knows what kind of space debris caused the July 23rd fireball. It might have been a small piece of an icy comet, in which case it's unlikely that anything larger than dust grains survived. It might also have been a rocky asteroid -- the most likely candidate -- or perhaps a nickel-iron meteorite. "Iron objects are more likely to survive a descent to Earth," says Yeomans, "but they are rare."
It's possible that fragments will never be found, notes Cooke. "We still don't have a precise trajectory for this object," he explains. "And so much of the targeted area (in central Pennsylvania) is heavily forested -- searching for debris will be like looking for a needle in a haystack."
Or should that be a needle in a cornfield?
"I suppose it's possible that some ablative fragments fell into that field," says Cooke, "but it is strange that only a small area was affected. I doubt it's a good candidate impact site."
Editor's note: Did you see the July 23rd fireball? If so, please submit a report to the American Meteor Society. They can use your information to refine the trajectory of the meteor and possibly pinpoint the location of meteoritic debris. Also, the terms fireball and bolide are often confused -- even by professional astronomers. A fireball is a meteor at least as bright as the planet Venus (visual magnitude -3 or -4). A bolide is a fireball that explodes, often with sound effects.
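For scale, magnitudes convert to brightness ratios at a factor of 100 per 5 magnitudes. A small sketch, taking -12.7 as the apparent magnitude of the full Moon (a standard value, not stated in the article):

def brightness_ratio(mag_bright, mag_faint):
    """Flux ratio corresponding to a magnitude difference."""
    return 10 ** (0.4 * (mag_faint - mag_bright))

VENUS_THRESHOLD = -4.0   # the fireball threshold quoted in the editor's note
FULL_MOON = -12.7        # assumed standard value for the full Moon

print(f"~{brightness_ratio(FULL_MOON, VENUS_THRESHOLD):.0f}x")   # ~3000x

So a fireball described as brighter than the full Moon, like the July 23rd event, outshines the minimum fireball threshold by a factor of a few thousand.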
Speeding in Space: Scientists say the Pennsylvania fireball ripped through the atmosphere at 15 km/s. In this math activity, students will compare the fireball's speed to the speed of Earth's orbital motion around the Sun -- and practice their English to metric conversions!
[lesson plan] [activity sheet] [key]
Dino-Colors: It's a good idea to keep an eye on near-Earth asteroids -- just ask any dinosaur! Younger kids will enjoy coloring these pictures by Duane Hilton. [You call that an asteroid?] [Dinosaur Sky Watcher]
Web Links
Frequently Asked Questions about Fireballs - from the American Meteor Society
Meteorite leaves trail of fire, confusion -- (CNN) A streaking fireball or fireballs witnessed over much of the eastern United States seems to have disappeared without a trace, save perhaps for strange markings in a Pennsylvania cornfield.
Arctic Asteroid! -- Science@NASA A 200 metric ton rock from space streaked across the skies of western Canada on January 18, 2000 and scattered intriguing meteorites across a frozen lake.
In 1992 a fireball raced over the eastern US and dumped a 12-kg rocky meteorite in the trunk of a parked car in Peekskill, NY. At least 14 people captured videos of the meteor -- one of which is reproduced here as a 0.9 MB mpeg movie. [more] | no
Meteoritics | Are meteorites hot when they hit the Earth? | yes_statement | "meteorites" are "hot" when they "hit" the earth.. when "meteorites" "hit" the earth, they are "hot". | https://sites.wustl.edu/meteoritesite/items/thud/ | Not everything that falls from the sky is a meteorite | Some Meteorite ... | Not everything that falls from the sky is a meteorite
Most rocks that fall from the sky are not meteorites
I am fascinated by the numerous stories that I have been sent about rocks someone found that “weren’t there yesterday.” (Sorry, this is a long page because I have received LOTS of these stories.) Common alternative stories are “We heard a thud on the roof at night and found this rock the next morning” and “I saw a fireball and found this stone where I saw it land.” All the quotations below were sent to me by real people. Some of them describe real meteors and some of the rocks may be real meteorites. Meteorites have hit roofs, after all. (See accounts for Benld, Bloomington, and Park Forest, for example.) None of the rocks associated with the stories below that I have had the chance to examine myself have been meteorites, however. I do not know how these various rocks and metal pieces came to be where they were found, but I am rather certain that most did not come from outer space. I also include some newspaper and other media articles about suspected meteorites that turned out to be meteorwrongs.
The photos above were sent to me with the following story: “…the other day a friend of mine heard a big Bang on his roof it fell off and it was smoking hot he said he cooled it off with the hose. It was found in Vacaville Calif. yesterday is when it happens this is what it looks like? Can you give me any type of info just by the picture?” Yes, I can.
(1) The surface on the left does not look like a meteorite fusion crust to me. (2) The interior on the right does not look like the inside of an ordinary chondrite, the most common kind of meteorite. (3) The inquiry does not say whether or not the rock attracts a magnet, but I do not see any obvious metal. (4) Meteorites do not land "smoking hot." More about that below.
Stories
“i found a strange rock in front of my house that just seemed to have appeared overnight.”
“We heard a loud pop like gunshots. The next day my 2-year old grandson was out in the back yard and found the “rock”. It was stuck in the ground. We had to pull it out.”
“My husband heard a loud bang or thug like something hitting the ground in the garden. It was night and VERY dark so we looked, got spooked and went inside. About a week or so later he was in the garden and found this odd rock.”
“The only reason I’m reaching out, is because it popped out of the air about 2 1/2 feet above the ground right in front of my eyes!!..thats crazy!!! And in the middle of the parking lot!”
“…so at around 3 o’clock to about 4 o’clock am (early Sunday) right infront of me a rock with a green flame to it actually fell right in front of me.. I actually have what came flying through the sky because it landed right infront of me..”
“Today morning (5th December, 2020), at about 5 am, my brother was out for morning walk as usual. Suddenly, with a Vrroomm sound, something fell on the ground just a few metres away from him. The soil was thrown away. He looked at the place and found it smashed as if hit by something and had gone inside the ground. He dug the area and found a small pebble sized stone of reddish brown colour.”
This is the reddish-brown stone. Although it has a crust, I think that it is not a meteorite fusion crust because it does not look glassy as it would on a freshly fallen meteorite, because it is too thick, and because a meteorite fusion crust would not flake off like this on a fresh fall. Most importantly, however, no meteorite would have a reddish interior like this.
“About 27 or 28 years ago, while fixing a fence, I came across a fence post that was split in half, partially burnt, with a small one ounce rock imbedded near the base.”
“I found this rock at a site that I thought at first may have been a lightning strike, it was impacted into the earth with only about a 1/3 of it showing and the area around it was charred and still smoldering.”
“In the middle of April, I was out in my yard and found this rock. Which of course wasn’t there before.”
“The person who gave me the rock over 25 years ago indicated that he heard it hit the ground and picked it up.”
“Found in my front yard this morning, definitely not there yesterday ( i mowed).”
“I was outside today and I found this object in my yard. 15ft away from my house. So I picked the object up, and there was a hole the object created.”
“My dad was awoken one night by a loud crash of something hitting his wood pile.”
“It was night time when I heard a very distinct sound, when I went outside I expected to see something but found nothing. The next day I searched the area that I believed would have contained whatever hit my house. When I looked at the shingle from the mansard style roof I saw a mark and dent about the size of a Grapefruit. I searched the area in and around the bushes and sitting a top this evergreen bush was a rock?? My property has no visible rocks of this nature as the perimeter of my house is mulched with no rocks. Just grass and bushes. When I picked the rock up I knew right away that this hit my home as it landed in an area very close to where it hit the roof. I must have been cushioned by the bush it landed on after it projected from the roof. I searched every possible area around my home to make sure I wasn’t imaging this all. Nothing like this rock exists on my property.”
“I discovered a meteorite embedded in my driveway a few days ago.”
Hot rocks
Despite these many stories to the contrary, meteorites do not land “hot.” Explanation here. I suspect that some of these stories actually involve lightning strikes.
“When I was a young girl in about 1947, my father and I were out at night in a field by our home in Loves Park, Illinois identifying some of the constellations for a school project. Suddenly, there was a streak of light, followed by a thud in a field about 300 ft from where we were standing. We waited until the early morning hours to go out and retrieve it, realizing it would be extremely hot. I remember thinking the heavy metal was iron, not realizing meteorites made of iron are blackened. I brought it to school, so my classmates could see it.”
“My wife and I awoke around 12:14 Friday morning to an extremely bright flashing light outside of our window. We thought they were EMS lights at first. I looked out of our window and saw a small object in the middle of our paved road … It was flashing a white flame and smoking, I assumed someone had thrown some sort of fireworks out of a car. Upon closer inspection later Friday morning, I found a small rock on the road where I saw the light. The rock is very dense, has a black “crust” like it has been burnt, and is slightly magnetic. Do you think this could be a meteorite?”
“My grandfather saw it fall from the sky in the early 1920s. It caught the woods on fire. He and his brother went the next day and recovered it. It was buried in the ground about 6 feet and was too hot to touch. It weighs approx. 150 lbs.”
“I believe that my grandmother is in possession of a small meteorite. It fell to earth about 30 years ago, and she saw it hit the ground (literally 50 feet away from her). It was still glowing hot as she approached it. After letting it cool down, she took it home and put it on a shelf, were it has remained pretty much untouched for 30 years.”
“we are sure that we have seen it falling from the sky and looks like fire and then make a big hole. we tried to touch it but was very hot that was at 3 in the morning. please advise”
“I found a rock meteor looking several years ago, it was raining that night, we heard a loud boom and we ran outside this rock hit our house created a small hole outside the wall on our house. I touch this rock it was hot so i kept it for years.”
“My step-father told me that years ago (45 yrs now) a preacher that he knew gave it to him when he was 9 or 10. The man told him that he watched it fall from the sky and that it was so hot he had to wait to touch it.”
“My dad said they watched this meteor come within the earth’s atmosphere and exploded. The sound off of it was louder than a sonic boom. This thing busted apart into small pieces and began falling from the sky. Much of it rained down throughout the entire Silver Valley area and had every field on fire. When they saw it raining down, they too ducked into the storm cellar. They could hear pieces of it hitting the ground and the steel cellar door like hail. My dad thought the world was coming to an end!! A piece of this stuff went right through the roof of my dads’ uncle’s front porch. After they quit hearing the thuds on the ground, they all climbed out of the cellar to see the fields on fire, the front porch on fire, and the barn on fire. They all grabbed buckets and started dipping water from the well to put the front porch out, but lost the barn. Dad went out the next day and found this rock laying everywhere, still too hot to touch.”
“i believe this is a meteor(rite) i found in the afternoon on my deck on 04/24/07. i photographed it immediately….i’d like to research if it is a true meteor…. that actually burned into my deck im sending a few of my pics for u to help”
“This rock has been passed down in my husband’s family since between 1944-1948. It flew out of the sky and hit a wheat field that then caught on fire. The fire was put out and later the rock discovered at the sight where the rock was located …” “Several nights ago my friend experienced several booms and the ground shaking. He felt impacts around his trailer that causes the the roof to push in and the walls of his trailer to buckle. When he went out side he said he found smoking holes in the ground that were still sizzling from the heat.”
“This rock was in an area of sparse grass and I would have seen it while mowing the lawn or actually just walking across the yard… I think it “arrived” sometime over the last week or so…”
“It was night time when I heard a very distinct sound, when I went outside I expected to see something but found nothing… When I picked the rock up I knew right away that this hit my home as it landed in an area very close to where it hit the roof.”
“A rock crashed through our basketball hoop a steep angle from above and burst upon impact.”
Meteor showers
People have been watching meteor showers, such as the Perseids in early August, for thousands of years. Space.com: “Perseid meteoroids (which is what they’re called while in space) are fast. They enter Earth’s atmosphere (and are then called meteors) at roughly 133,200 mph (60 kilometers per second) relative to the planet. Most are the size of sand grains; a few are as big as peas or marbles. Almost none hit the ground, but if one does, it’s called a meteorite.” For all meteor showers, the meteors you see are made almost exclusively by sand- to pea-sized objects that ablate away in the atmosphere and never hit the ground. If you see a meteor during a meteor shower, you are not going to find a meteorite.
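A quick energy budget shows why shower meteoroids are consumed: at 60 km/s a grain carries hundreds of times more kinetic energy per kilogram than is needed to vaporize it. A sketch, assuming roughly 8 MJ/kg to ablate stony material (an assumed typical value, not a figure from this page):

SPEED = 60_000.0          # m/s, the Perseid entry speed quoted above
HEAT_OF_ABLATION = 8e6    # J/kg to vaporize stony material (assumed value)

ke_per_kg = 0.5 * SPEED ** 2          # ~1.8e9 J/kg
print(f"Kinetic energy: {ke_per_kg / 1e6:.0f} MJ/kg")
print(f"Energy to vaporize: {HEAT_OF_ABLATION / 1e6:.0f} MJ/kg")
print(f"Margin: ~{ke_per_kg / HEAT_OF_ABLATION:.0f}x")
# Even if only a small fraction of the kinetic energy goes into the grain
# itself, the grain is completely consumed, so shower meteors drop no meteorites.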
“During the meteorite shower in early August, I believe what seems to be a meteorite fell in the yard of my daughter’s home. It apparently was smoldering all night and the odor from the heated rock was a stench. It hit the surface of the ground and the plants that were beneath it, the roots were scorched and destroyed. It took several buckets of water to stop the smoke and cool it off.”
“A meteor struck my driveway last Wednesday night/Thursday morning, and I have the fragments, what is the probability that it’s from the comet Swift-Tuttle associated with the Perseid Meteor Shower?”
“I was up watching the shower, then as I was going to bed I heard a loud whoosh, and a thud as something hit the earth. There were no neighbourhood boys who could have thrown objects. The next morning I went into the garden to see what fell there, and saw nothing, but a hole in the soft soil about eight inches deep. This rock was at the bottom of the hole.”
“It landed about 1 am on August 13 2007! My daughters … and I were hoping to see some meteors and were hoping to see something spectacular! We were outside watching the meteor shower and saw one come straight over us and it exploded right over our heads! It seemed very close! Then we saw a spiraling object smoking toward us in a corkscrew pattern and I ducked!! I felt silly cause what could I do to protect myself if it were to hit me? Lol… We could also smell a bunt metallic smell shortly after the object fell!! Anyway we found the object today stuck into the dirt. It landed about 4 feet from where [my daughter] was standing and about 8 feet from where I was standing! The object is about egg size, black and rocklike. I have pictures of the hole and have the stone with the dirt embedded on one side.”
“When the Hale Bopp Comet passed by us a few years ago I collected several quarter sized fragments that made their way to earth. These fragments were still smoking when I discovered them. I was driving past my grandparent’s home when I noticed several small fires. These fires were started in the pine straw around the shrubs. In the pine straw is where I found the quarter sized fragments. My home is about a half mile from my grandparent’s home and I also noticed several “pot marks” on my wood deck. These “pot marks” were clearly impact points of fragments but when the fragments hit the hard wood surface they shattered to much smaller fragments and mostly powder. This event happened about mid-day.”
“This elderly man’s father was returning home around midnight on horseback sometime around the year 1890. Suddenly the sky was lighted as a ball of fire with sparks shooting off of it passed overhead. Though an excellent rider, the man was thrown from his horse. The object made, as he described, a loud whining, roaring sound and it was gone in a moment of time. He felt that the object had struck the earth nearby but it was not reported or found.”
“I found this large rock (grapefruit size) a few inches from my street on our side yard (about 2 feet wide). It was in August or early Sept. My husband and I put it away in our chifferobe where it has been for 7 mos. This week I have been digging in my small back yard to plant flowers and each day I have found a handful of smaller pieces to this big rock we found in Aug. I don’t know what a fusion crust is but if you pick it up it leaves a residue of a black sooty substance. I have many pictures but they are not good. I only have a cheap digital camera and I’m a lousy photographer as well.”
Sorry, this is not a meteorite. Almost certainly, it is a hematite concretion – and a very nice one.
“A man was working in his garden in the evening of the said date and heard a hissing sound as a rock landed less than four feet from him. He quickly went over to investigate and found a warm black rock freshly embedded in the long grass.”
“She said it had fallen through the roof in 1964 and she showed me where her roof had been repaired.”
“I’m contacting you regarding a meteorite that was found by my Great Grandmother. It fell through the roof of their home and landed on their rug, which would have been in the late 1800s, in England. The rock is the size of a tangerine, has a shiny fusion crust and weighs 132 grams”
“I have experienced rocks falling on my house. I know that these rocks are coming from Heaven, just need for it to be confirmed.”
“I was at a friend’s house a few weeks back when an object shown in the attached pictures crashed in his yard.”
“My wife found it several years ago in our back yard. It was buried most of the way into the ground with about 4″ protruding from the ground. Enough that it would have caught the mower blade. It had fell sometime since the yard was last mowed because it wasn’t there before. It appears to have fallen straight down and impacted the earth with such velocity that it cleared away dirt and grass around it and It was so deep that it had to be dug up and removed.”
Holes in the ground
Another common story is that a meteorite landed and made a big hole or crater, or that the suspected just-fallen meteorite was dug out of the ground from a depth of several feet. It does not work that way. Meteorites the size of those in the stories here land at terminal velocity, not faster than about 500 km/hour (300 miles per hour or 440 feet/second) – the same speed as they would have if dropped from a high-flying airplane. Such a rock can punch a hole in a roof or make a divot in the lawn, but it is not going to bury itself. (For comparison, the muzzle velocity of a 30-06 rifle is in the range of 2500-2800 feet/second.)
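For anyone who wants to check the terminal-velocity claim, here is a minimal sketch, assuming a roughly spherical stone of density 3500 kg/m^3, a drag coefficient of 0.5, and sea-level air (all assumed values, not figures from this page):

import math

RHO_STONE, RHO_AIR, C_D, G = 3500.0, 1.2, 0.5, 9.81   # all assumed values

def terminal_velocity_kmh(mass_kg):
    """Terminal velocity of a stony sphere falling through sea-level air, in km/h."""
    radius = (3 * mass_kg / (4 * math.pi * RHO_STONE)) ** (1 / 3)
    area = math.pi * radius ** 2
    v = math.sqrt(2 * mass_kg * G / (RHO_AIR * C_D * area))
    return v * 3.6

for m in (0.1, 1.0, 10.0):
    print(f"{m:>4} kg stone: ~{terminal_velocity_kmh(m):.0f} km/h")
# Roughly 200-400 km/h: fast enough to punch through shingles or dent a car,
# nowhere near fast enough to blast a crater or bury itself several feet deep.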
“In the evening around 4 yrs ago this stone could of killed me. I was only 5 foot away approximately when I heard a noise like a thump in the ground. I was having a cigarette & remember I had a few alcoholic drinks at the time. First I thought someone threw something at me. When I turned around I noticed a 2-3 foot deep hole in the soil like tar. I didn’t start to inspect it then as I was drunk & worried that someone was trying to injure me by throwing things at me.”
“In August of the year 1957 I was on summer holidays with my parents in the family house of my mother which is located in a village named Stemnitsa in south Greece. The altitude of the village from sea level is 1100 meters. A night of August I got up from my bed to go to the toilet. As I passed in front of a window, I suddenly saw in the dark sky a bright object approaching the ground. It was a flaming ball on the size of the basketball with a bright tail to follow. As the ball was coming down it was getting smaller size. I thought that eventually it would be completely lost in the dark sky, but was not.It fell aflame about 100 meters away from our house. I noted in my mind the part that fell. I went to sleep again and in the morning, even before the sunrise, I took my father (after he narrated the incident) to go with me to look after for the sky object because I was somewhat young then for such adventures. I was only 12 years old. We found that field and because of a bush which was half-burnt we understood the point that it had fallen. We dig in a blackened hole and in a depth of about 30 cm we found the rock that you can see in the pictures which was still warm enough.”
“My grandmother gave me this rock about 30 years ago. She grew up in White County, Indiana and told me that they woke up one morning and found a crater in the yard. My grandfather dug this up out of the crater.”
“I recently found this going through some of my mother’s things she left me after passing away, she told me she dug this up out of her flower garden this object fell from the sky around 1962… it penetrated the awning of our home one evening , the next morning she noticed the hole in the awning and dug this out of the hole it created in her flower bed…”
“One Night, I saw a flame blue and yellow coming from the sky very very fast and them landed in the beach, I hear a big noise and I went to see what happened it was about 23 hours I was closing my shop near the beach, I went to the beach and I saw a big hole I started to dig and I did not find anything I went home and I was thinking all night about it. The next day I went to the same place about 6 AM and I started to dig and after 2 hours I found one small black stone I was digging more and after 30 minutes I found other Black Stone square, small one side is concave.”
The photo below was sent to me with the following text: “We recently had a meteorite impact our yard blasting a 10 feet wide hole, which tapered down about 15 feet deep. We first thought it was an object from an aircraft. Perhaps even a weapon that fell off and was not armed not detonate. We used our bulldozer to carefully excavate and upon using a metal detector, we carefully removed the object within a mass of mud. When we washed away the mud it eventually revealed something we had not even considered. The meteorite is a solid metallic blob that weighs approx 8.2 pounds. A magnet seems to have NO attraction to any part of it.”
Where do I start? (1) From its appearance, and that in better photos sent later to a colleague of mine, I suspect that the object is an iron-oxide concretion. Concretions are dense, sometimes mistaken for metal, and those consisting of hematite will not attract a magnet. (2) Thus far, no meteorite consisting solely of iron oxide has been recognized. Such rocks are common, however, in the vicinity of where this rock was found in Missouri. (3) The rock does not have a fusion crust; a freshly fallen meteorite would have a fusion crust. (4) As mentioned above, a meteorite of this size does not have enough energy to make a crater 10 feet wide. Freshly fallen meteorites are found on the ground. (5) If the rock was excavated from a depth of 15 feet, then it has been there a long time.
“I have a son that is now 19 the other day he came home and was telling me this story about when he was little He and a friend saw this really bright fiery light fall from the sky and hit the ground. They were all excited and told a bunch of people in a restaurant but everyone ignored them. The next day they went to look to see if they could find out what they saw fall from the sky and in a cemetery there was a round whole with a bunch of pieces of rock in it. He said it was a meteorite of some kind. And now probably 10-12 years later He and a different friend happened to be in the same cemetery and he was telling the story and remembered exactly where his find was there was a circle of dirt on the ground, no grass had grown there and he bet the kid that if they dug they would find some of these strange looking rocks or whatever they were, and sure enough just below the surface the odd material was there. He did bring one or two of them home with him. Could this really have been a meteorite and who would I contact that might be interested in this. He also said the day they first found it, the tree it came through, on its landing, had the branches broken off.”
From Egypt: “I have a Meteorite ,it fell down beside my home in my land and almost to kill us .
The killer rock, with an end cut off (right). The fusion crust on a freshly fallen meteorite would not be this red and sawing it would not leave a reddish smear on the sawn face. As I emphasize elsewhere, stony meteorites are not long and thin. This thing looks like another piece of hematite. When sawn with a good saw, hematite can be surprisingly shiny and metallic looking.
Two sides of an object from Bangladesh: "…there was a loud noise and a bright light on the roof of the house. A few hours later I went to the roof and saw this object. The place where it hits is broken." From the circular edge on the left this is clearly a man-made object. There is no fusion crust and meteorites are not shaped like this.
“About 35 yrs ago my friend was sitting outside next to his cabin … and a meteor came down the size of an ostrich egg and went into the lake 20 ft from where he sat, and the next morn he waided in and dug it out, and sent it somewhere and they said it was one.”
“All I can do is swear up and down that it fell from the sky onto my windshield when i was stopped and there was nobody, car, traffic, roadwork, anywhere that could have gotten it there.”
This photo was sent to me with the following story: “…a flaming ball of fire landed at my foot. It was hot. I let it cool off and picked it up. It was the size not much bigger than a marble.” OK, but it’s not a meteorite. A freshly fallen meteorite would have a fusion crust and this rock does not. Also, meteorites do not have vesicles like this.
“Unfortunately, I no longer have the actual meteorite, but I know that I had one. It hit a cherry tree in the backyard of the house across the street one night when I was a kid. Sounded like an explosion! The next day, the tree looked like it had been hit by lightning, though there had been no lightning that night. I think there may have been a meteor shower, but I’m not certain. The tree was split in half. I found the meteorite some yards away, embedded in the ground! I know it had not been there previously, it would have been quite obvious. If I’m recalling correctly, the grass around the impact site was somewhat scorched, as well. It was about the size of a baseball or just slightly smaller, but not spherical like one, seemed to be sort of metallic, and heavy for its size.”
“My husband went out to mow our three acres of land. It had rained really hard for a few days, and he told me to come out and see this big strange rock in the grass. We have no idea how it could have got there as it weighs almost 25 pounds. It seems alot more heavier. It could not have fallen off a passing truck as it’s at least 100 feet from the road. … there are no rocks in our clay soil. I know you are bothered by a lot of dumb calls but we really don’t know what to do. It had to fall from the sky in our minds. The only other way would be for someone to carry it and place it where we found it. My husband and I are retired and do not play pranks. And we live out in the country where neighbors are few and far between.”
“An old friend saw a burning mass fall in his yard up north in 1984 and last summer I was able to go to the spot and search. I found this greyish, porous rock which is severely burnt and melted at one end.”
“One night, a rock, black and burnt looking (with what appears to be reddish gray dust, also has like glitter under the black ,you can bearly see )came crashing down onto the hood of my van . It hit so hard it dented, cutting the hood. When it hit it also broke my windshield wiper and cracked my windshield. This was at 10:30 at night. There were no kids around . No vehicles. I went back with a flashlight and retrieved this stone. I am sure this is the rock that hit my van because there is a small amount of my paint from the hood on it.”
“Eight years ago when I was in fourth grade I was sitting in my backyard on my wooden skateboard ramp, alone, when I heard a relatively loud thud on the ramp about five feet away from me. I turned quickly to see a rock bounce into my yard. I picked up the rock in curiosity. At first I thought my neighbor had thrown it at me, so I scanned to see if he was around but he was nowhere to be seen. I later questioned him on the subject and he didn’t know what I was talking about.”
“It was the summer of 2013. I was sitting on a porch with my friend and there was a faint THUNK as a small rock came down from the air and landed next to us.”
“Last June, I heard what seemed to be a bunch of gravel being tossed across the roof of our log cabin and travel trailer metal roofs.”
“A couple of years ago I heard what sounded like bits of rock spray across our metal roof, didn’t think much about it at first, next morning I thought that maybe what I heard were bits of meteorites that possibly fell on my roof.”
“On May 24,1997 a rock weighing 58 grams hit the screen door of my back patio piercing through it and putting a chip in the plate glass. It made quite a noise.”
“yesterday while in my backyard these came hurling out of the sky with such velocity I thought I was being shot at by bullets because you could hear the whistle sound as they were coming down, one hit a metal cage in my backyard and it sounded like a boulder hit it when I couldn’t even see what actually hit it, one of them hit me in the arm and left a welt it barely grazed my arm with a sweatshirt on but still made a good mark and bruise, I found these two odd looking soft metal pieces in my backyard where I was standing, they are not magnetic they are light in weight. I was told they could be pieces from an airplane but I just don’t know I just know it was one of the scariest days of my life yesterday not knowing what it actually was so I’m glad I found these but I sure would like to know what they are.”
Airplane parts, space debris, or other chunks of man-made metal
“As I looked out my window, I saw a rock sitting on my deck. I went outside to look at it and at first, I didn’t think it was anything to out of the ordinary. Not until a few minutes later, when I looked back at my house to discover this rock pierced through the siding of my house, into a layer of my insulation and then it must of bounced off and landed on my deck. That’s when I became suspicious.”
“I was driving home one night, still in Virginia, about 9 p.m. est on a long straight road that has nothing but farmer’s fields on either side. From the top of my windshield I saw this big (well, it looked very large) burning thing falling to the ground, very fast. It burned all the way near to the ground, at what was about 30 yards from the ground it went out for a second then lite back up and appeared to burn out about 10-13 feet from the groun d. It was not only very close to me, maybe 1/2 mile, but it also had smaller pieces coming off of it that were also burning. It had a burning trail, or tail that was long, very long streaming from it… I was awe struck. It lite an entire section of the field up. That’s how I knew it landed in the field. I saw the plowed corn stalks where the farmers had disked the old corn under it’s light. It lit a huge section up. I never heard it hit and since it went out just before it landed I did not see an impact. Not to mention I almost wrecked my jeep watching it!”
Another “thwomp” story
“I have this rock that fell in my driveway today. My first thought from about 20 feet away was that it was some sort of compacted fecal matter that fell from an air craft. If it is that, I am sorry that I have wasted your time. The sound it made when it fell can only be described as a loud thwomp. Roughly 5 seconds after I heard the sound, I opened my front door. (I happened to be standing next to said front door, picking my son up.) There was nobody outside. After taking a picture of it from roughly 3 feet away from it, I put my son to bed, returned and took another picture from one foot away. Those are in the other email I sent you. The rock is in two main pieces, and one small piece. Altogether it is 12.80″ long, 4’5″ wide and 3.5″ deep. I don’t have a magnet to test it with. I don’t own a single magnet. It is heavy; heavier than my 25 pound toddler. If this IS some sort of meteorite or other matter that you have an educative interest in, it is all yours. I am honestly not at all interested in doing a chemical analysis and sawing the rock up or any other such actions. I am not interested in selling it, either. I just want to make sure, before I throw it out, that it is not something of scientific, geological or educational value. I cannot go out and buy an expensive digital camera with which to take pictures of it, so the quality I am sending is the best I have. I have amended my earlier aversion to touching it (ziplocs are great gloves), and will bring it into my house to take a couple better pictures for you. Past that, I am sticking it in a plastic bag and putting it outside until I hear from you.”
The rock. It does not look like a meteorite. It is obviously weathered on the exterior. When a meteorite comes through the atmosphere, it loses most of its exterior material to ablation. The remaining material is not rusty and weathered.
“7 years ago I saw a meteor falling above me. I slowed to a stop as it’s fire-ball got larger until it ‘poffed’ into a non-moving white flash about 75 feet in diameter at about 500 feet above the ground. After about 5 seconds I heard a ‘thud’. ‘Next’ I thought, ‘there is a meteorite on my ranch’ “
“about 9:00 PM as my wife Judy and myself were sitting in our living room with the front windows open and we heard a loud pitch sound and a impact that sounded like a light bulb being shot out ? We went out front and checked the lights on the front of our garage and looking around we found splattered rocks very close to the garage door on our cement driveway. I took my time and picked all of them up and put them in a jar.”
“I am writing this for my mother. She has a rock which has all the characteristics mentioned in your article on meteorites. She was working in the yard …laid down her rake….went back later for it…..and this rock was laying there.”
“In the early 1900″ my great grandparents were awakened early one morning around 3.00am. by something falling into their front yard when day lite came my great grandfather dug this meteorite up, it was still smoking.”
“My Grandfather saw this meteorite fall from the sky about 50 years ago. He Found it in a cornfield the next morning. My Father has it now and I don’t think he knows how rare and valuable it is. I believe it is a lunar meteorite and it weighs at least 20 pounds.”
“In 2004 winter some late; I push my son to take a walk nearby the park, suddenly in is away from me not to 6 meter about street lights, is pounded broken by a ball, this ball rolls to my foot nearby, I pick conveniently it, thought originally is some people is playing a ball game hits not carefully. But has a look all around unexpectedly continually a person’s shadow not to have Difficult inadequate is the space falls; Has turned head inspects the street light, it is from place above is pounded, also has a look in the hand this ball under the light sparkling. Never sees the so perfect universe rugby. Examines many related meteorites the pictures and the news, but I pick this as if too too is perfect, but I believe firmly it indeed am come from the outer space; Because it almost projects on me; The following is I to its description: It has a light mirror surface fusion outer covering, in the spherical surface is inlaying the near hundred seven colors (yellow, green, purple, orange color, brown, gray, white) the gem; One side has is likely has been roasted by the fire, has the colored round corona. The son one side is not that smooth in addition. It not by magnet attraction; After also had the hit street light the spot to lack a small angle; The fusion outer covering interior is likely the green glass material quality, has many bubbles, it may be diaphanous. The penetration photosource is likely the viridis eyeball. It is likely a grain of glass moonie. sorry cannot send for you the sample, because I do not want to destroy it.”
from the Peoria Journal Star
Rock that smashed window likely from recycling center
Friday, April 6, 2007 By: Fitzgerald M. Doubet of the Journal Star
BLOOMINGTON – The alleged meteorite that crashed through a Bloomington couple’s home last month now appears to have a more earthly origin. Robert “Skip” Nelson, a professor of geology at Illinois State University, along with his colleagues, originally believed the metallic rock that landed March 5 in David and Dee Riddle’s home at 25 Partner Place to be a meteorite. Upon further examination, his theory has changed.
“It appears that it was a piece of metal, steel actually, that had been embedded in a log probably as a growing tree,” Nelson said. “The log was put into a wood chipper, an industrial wood chipper. Inside the chipper the hammers were revolving at a significant velocity, and when they hit this piece of metal in there, it kicked it out the top of the chipper with a velocity in excess of 200 miles an hour.”
Nelson believes the piece of steel – about the size and shape of a deck of cards – inadvertently wound up in the wood chipper at Twin City Wood Recycling on Oakland Avenue.
“They found that it came crashing through the house, and traveling that velocity, the first thought was that it was a meteor,” Nelson said. “It was coming down at a 60 degree angle. When it was shot out, or ejected from the chipper, it traveled over 300 meters, more than 900 feet, two city blocks. It was really moving and had a trajectory like you fired it out of a mortar.”
John Wollrab, owner of Twin City Wood Recycling, said foreign objects making their way through the chipper is not common.
“It happens from time to time, but we try to prevent that,” Wollrab said.
The Riddles plan to keep the shiny black piece of steel around as a memento of their experience.
“It was fun. I’m a little disappointed, though,” Dee Riddle said. “It would have been a lot more fun if it had been even something from outer space, maybe not even a meteorite. We have had fun with it, and we will keep it just as a conversation piece.”
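Nelson's reconstruction is easy to sanity-check with the vacuum projectile-range formula. A sketch, assuming a steep, mortar-like 60-degree launch at the quoted 200 mph and ignoring drag, so the result is an upper bound:

import math

V = 200 * 0.44704          # the quoted 200 mph, in m/s
ANGLE = math.radians(60)   # assumed steep, mortar-like launch angle
G = 9.81

vacuum_range = V ** 2 * math.sin(2 * ANGLE) / G
print(f"Vacuum range at 60 degrees: ~{vacuum_range:.0f} m")   # ~700 m
# Air drag would trim this considerably, but a 300 m (two-block) flight for a
# dense steel chunk launched at 200+ mph is entirely plausible.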
Mystery object from sky identified as woodchipper part
By: The Associated Press 18 July 2007, 4:27 p.m. ET
BAYONNE, N.J. (AP) — A hunk of metal that crashed through the roof of a home had NASA and Federal Aviation Administration officials scratching their heads.
It didn’t look “very space-y,” said Henry Kline, a spokesman for NASA’s Jet Propulsion Laboratory in Pasadena, Calif. ” It’s obviously made for something … But we wouldn’t know what to do with it.”
It didn’t appear to be an airplane part either, the FAA said.
Finally, FAA spokesman Jim Peters said Wednesday, a colleague in his office solved the mystery: It was part of a commercial woodchipper. The same part from another woodchipper’s grinder had caused similar confusion last year, he said. [The Bloomington, IL, story above.]
How it got on a Bayonne roof was anyone guess, but Peters had a theory. The grinder piece moves very fast and, apparently, it can launch into the air if something goes wrong.
The man who lives in the house was watching television Tuesday when he heard a crash and saw a cloud of dust. In the next room, he found the hunk of gray metal, 3 1/2 inches by 5 inches, with two hexagonal holes in it.
The part was being returned to Bayonne Police on Wednesday, Peters said.
“It belongs to somebody,” Police Director Mark Smith said.
Object that fell through roof of Dallas home was part of a tree-mulching machine, police say
Associated Press February 27, 2009
DALLAS – Police say a 6-pound chunk of metal that crashed through the roof of a Dallas home was part of a machine that was grinding up an unwanted tree nearby.
Sgt. Gil Cerda says: “Mystery solved.” So much for the theory it could have been a piece of debris from this month’s collision of Russian and U.S. satellites.
Cerda says the metal chunk was a grinding tip of a mulching machine being used by a tree disposal service crew. No one was hurt when it went flying Tuesday.
Senior Cpl. Janice Crowther said no charges will be filed against the business because it was an accident.
The satellite debris theory also came up when a fireball streaked across the Texas sky Feb. 15. That turned out to be a meteorite. It also surfaced last week when a piece of metal crashed through a New Jersey warehouse. That was another errant piece of a mulching machine.
‘Meteorite’ that struck a French woman was just a regular Earth rock that either fell from a roof, a plane’s wheels or was thrown by BURGLARS to see if anyone was home, experts claim
The unnamed victim was enjoying a coffee with a friend on the terrace of her home in Schirmeck, north-eastern France, when she ‘felt a shock in the ribs’.
It followed a bang on the roof above her, leading to the assumption that a space rock may have smashed into it before falling off and hitting the woman.
…
“I happened to google about things that fall from a the sky. I will attach a couple photos of the item that came through the roof of our garage, hit a bag of cardboard boxes, broke a 2 ft piece of 2×4 & broke the drywall attached below that. Once it hit the concrete floor it rolled over ten feet to the along with drywall chips to the big garage closed door. We are in the path of the Indianapolis airport, however, I spoke to two people from FAA, they didn’t ask for pics or seemed concern. The Greenwood, IN police had referred them, and again they didn’t investigate as well. While reading your article I found the wood chipper incidents interesting [above], as we have a city facility behind us that process wood debris for mulch.“
Left: The hole in the roof. Right: The "item." It is hexagonal and abraded, so it is man-made.
“I believe it is a meteorite because my window was broken and it left a huge circular area where it busted the glass out then the rest of the window preceded to fall into my back seat. It is about the size of an eye.”
“My husband found this rock on our back porch next to a long skid across a concrete pad and a dent in the house.”
“While I was looking outside of my bathroom window on the second floor. I had an object to strike my copper roof which, I just happened to witness and heard as it hit. My first thought was, ‘What the heck was that?'”
A story from Ireland
“Forty years ago, I was bringing in turf from a bog with my father and I heard a loud ‘woshhhh’ sound. I looked up and saw (what looked like) a big silver ball with a big long trail of light behind it. Suddenly there came splinters off it and it went out. There was a trail of (what looked like) smoke for a few seconds after it disappeared and there was a loud thud. It frightened me, but my father said it was just a ‘shooting star and if you get it, it will bring you luck forever’. It fell quite near to us, in a small grazing field. I went over to where it was and picked it up. It was on top of the ground and there was nothing else in the field. I brought it home and left it on a flower bed at home and there it sat for 40 years. I knew nothing about meteorites then but since, with media/internet and school I have learned more about them. It’s been on my mind to get it looked at and would love to get it checked out.”
The rock. This one actually looks like it might be a meteorite. I never heard from the fellow again.
“we saw a round ball type of glowing multiple colors (Red, Blue, Green but predominantly red) with a tail of many colors, making a whishing sound (whoosh). It was heading towards an open field. It was found the next day in the morning as it glowed as the sun reflected on it in the open field.”
“I attached five photos of my rock for you to look at it and see what you think about this rock. It landed on the roof of my house back in 2001. it weighed about 3.5 Kg. a little magnetic and metallic on it as you see in one of photos. It has very light fusion crust but I still not sure, it is dark gray. I found it about a couple feet from the hole on second floor attic.”
“Hey I was looking up meteorites last night because a friend stopped by with one that crashed through his roof it’s real heavy and he left a piece here that broke off.”
Mysterious chunks of ice pelt Iowa town
CNN.com July 27, 2007
DUBUQUE, Iowa (AP) — Large chunks of ice, one of them reportedly about 50 pounds, fell from the sky in this northeast Iowa city, smashing through a woman's roof and tearing through nearby trees. Authorities were unsure of the ice's origin but have theorized the chunks either fell from an airplane or naturally accumulated high in the atmosphere — both rare occurrences.
“It sounded like a bomb!” 78-year-old Jan Kenkel said. She said she was standing in her kitchen when an ice chunk crashed through her roof at about 5:30 a.m. Thursday. “I jumped about a foot!”
…
“Occasionally, aircraft latrines discharge contents at altitude, resulting in chunks of descending ice. Airplanes also sometimes accumulate ice on their edges in certain atmospheric conditions, including high altitude and extreme moisture,” said Robert Grierson, the Dubuque Regional Airport manager and a pilot.
“Here is my story. I’m pretty sure this rock came from the sky which is very magnetic. It hit my house in a very strange spot. It hit my front window of my house that is facing a southerly direction. It was about eight feet off the ground and it ripped through a stainless steel screen.”
“Well in 1994 this thing fell out of the sky and hit an 18 wheeler and did major damage to the truck”
“I found this rock in my chicken pin about a month ago, April. It was laying on yellow grass ,was easily seen ,I check area for eggs each day, it wasn’t there the day before. The rock is a light brown and light black on one side. The rock didn’t have any dirt in it which is unusual in a chick pin, they like moving things around. Checked with magnet of my uniform badge and didn’t grab or maybe just a little if my imagination didn’t kick in.”
Tests show object isn’t meteorite
FREEHOLD TOWNSHIP – The flying object that came crashing through the roof of a township house in January was not a meteorite, as initially thought.
Not to worry. It appears man-made, not space invader-made, according to recent testing, information about which was released Friday.
“Basically, it’s a piece of stainless steel,” said Jeremy Delaney, a Rutgers University meteoriticist who became involved in analyzing the item Jan. 3, the day after it fell and when the homeowner notified township police. The rock-like item was silver and brown, lumpy but smooth. It was about 2-1/2 inches by 1-1/2 inches, weighing about 13 ounces.
Because the object had no specific distinguishing characteristics, “we can’t take it much further” to identify its source, Delaney said. Although it remains an unidentified flying object, Delaney speculated it was “space junk,” or spacecraft debris.
Srinivasan Nageswaran, whose family discovered the silver object after it crashed through the roof and into the upstairs bathroom of his home, was disappointed by the news.
“That’s the nature of science,” the 46-year-old information technology consultant said Friday. “If the conclusion from the test says it’s not a meteorite, then it’s not a meteorite. We have to move forward.”
“It’s still the world’s most popular metallic object that fell from the sky,” Nageswaran said. Debris falls daily.
About 11,000 items of space debris larger than about 4 inches are known to exist, according to the National Aeronautics and Space Administration. All told, according to NASA, tens of millions of space debris items probably exist.
Over the last 40 years, an average of one piece per day of known space debris has fallen to Earth, with no serious injuries or significant damage to property confirmed, according to the space agency. “Space junk is kind of a default answer,” Delaney said, explaining conventional aircraft would be eliminated as a source because the Federal Aviation Administration reported none in the area at the time of the crash.
Peter Elliott, a Colts Neck metallurgist involved in an early analysis of the object – and who thought it was a meteorite – suspected space debris when told of the test results.
The item seems to have come from space because of a triangle-like pattern, suggesting heat, Elliott said. An item falling from a conventional aircraft at a lower altitude would not have had the heat pattern, Elliott said.
About a week and a half ago, scientists viewed the item under a new, advanced electron microscope at the American Museum of Natural History in New York, then immediately analyzed the results, Delaney said. By the end of that day, the scientists from the museum and Rutgers concluded it was not a meteorite, Delaney said.
The item had chromium, a typical component of stainless steel, Delaney said. A meteorite would have been basically nickel and iron, Delaney said.
“This particular composition is not one we’ve ever seen (happening naturally),” Delaney said.
The delay in testing the item was a combination of arranging schedules of the Nageswaran family and those of scientists, as well as the availability of the microscope, Delaney said.
“It’s a new tool and it’s very much in demand,” Delaney said of the microscope.
On Jan. 2, the item crashed into the family’s home in the Colts Pride development along Route 537. It went through the roof, then into a second-floor bathroom, where it bounced off a tile floor and embedded into the wall, according to township police.
Early on, there seemed a sureness the object was a meteorite. Its shape, density, color and magnetism suggested meteorite, according to Rutgers.
“There was a sureness in the evidence that was available – the physical evidence,” Delaney said. “But we wanted to test it more thoroughly.”
Delaney said he was unaware of any continued analysis now that the item is determined not to be a meteorite.
“I was pretty comfortable from right when I first saw it (that it was a meteorite),” said Elliott, who was not involved in the recent testing. ” I wonder how many of the past ones (believed to be meteorites) were fully analyzed.”
On Jan. 27, the Rutgers University Geology Museum displayed the object as a meteorite at its open house.
“Oh, well, you win some, you lose some,” said Delaney, speaking of the display. “Now, we are in the position of saying, ‘Oops.’”
The public, now, has a glimpse of how scientific analysis works, Delaney said.
“New experimental evidence routinely causes scientists to change earlier hypotheses that were based on the best information available at that time,” Delaney said.
After the object crashed through the roof, various people reported objects falling from the sky. Delaney viewed up to 50 objects, with all turning out to be a “meteorwrong” – not a meteorite.
Of the 50, only one falling in the “same general area” on possibly the same day might be related debris, Delaney said. No more information was immediately available on the other object.
Aircraft debris would have fallen at the same time, while orbiting debris could have fallen over hours, Delaney said.
Had it been a meteorite, within the context of it crashing through a house, “it was probably worth several thousand dollars,” Delaney said.
And, now that it is likely man-made debris?
“Zero, regrettably,” Delaney said.
“I was stopped at a traffic light last December and this hit my back window of my truck and landed on my bed cover”
“The attached photos are of a rock found approximately 8 feet from a shattered glass table located under a wooden slat pergola on our pool deck…”
“I know you may get several emails from people but I thought it best to see of you could give me your expert opinion on what happened at my house yesterday. I came home from work around 4:00 p.m. and went to get the mail. On my way back to the house I noticed what looked like a bullet hole in my driveway. I looked around and noticed several of them.”
“I was sitting on my back patio (facing North) looking at the Big Dipper when all of a sudden I see the most brilliant/bright ball come flying over my house with a fire tail behind it. It was so incredible. I could actually here it “fizzing” as it came over the house. As I sat there in amazement, watching this, I heard a “thump” like sound as it was passing over my house. The next day I decided to go investigate where I thought I heard the sound. To my amazement, the pictures show what I found. The rock was in my front yard laying in bark dust.”
This photo was sent to me by a man from Montenegro who said that “the man who found this stone claims [it] to have fallen from the sky and made a smaller indentation in the ground at the place where he keeps the sheep.” It does not look like a meteorite; it looks like slag.
“I’ve had my local geology department look at a pebble that hit me in the head (felt like a rain drop). Please take me seriously. It is a bit funny. …It is magnetic, shiny black, heavy for its size, and did hit me in the head and stuck in my hair. I pulled it out five minutes later. I had got out of the shower 15 minutes previous, therefore when I found it, I thought; how the heck??”
“My sister’s Cadillac trunk lid received a nasty dent that nearly pierced the metal. The damage is slightly downward. Everyone assumes a nearby lawn mower threw a rock. A rock the size of a baseball was found close by. The rock has a thin layer of crust and responds to a magnet and is very heavy for its size. Any ideas ?”
From http://www.sott.net/articles/show/107513-Believe-it-or-leave-it-strange-stories-of-2005
Police in Newcastle, Australia, reported a spate of frozen chickens smashing into house roofs with great force. They suspected a prankster with a powerful catapult.
“Last month my mom was taking her garbage out and this “meteorite” “rock” fall out of the sky about 8 feet from her into the driveway… There were no people around so nobody threw it and she did see it fall and hit ground.”
“I think I might have found a meteorite. I’ve put in a link to a few pictures and will keep the email short. I found the rock next to my parked car in the apartment complex parking lot in Austin, TX. The parking space is paved with normal asphalt used for making roads. Here’s why I think this might be a meteorite; it had created a dent / small crater of the shape and size similar to the rock in such hard asphalt and of course there aren’t any other rocks like this around here.”
“One night while out in the yard with the dog, I heard the sound of a rock being whizzed through the trees and then a thud sound a short distance from where I was standing. It was too dark to see clearly so I just proceeded to go inside. The following day my husband is holding this rock in his hand telling me he thinks he found a meteorite in the backyard. He said it stood out like a sore thumb, because it was just laying on top of the ground in an area we had just cleared a few days before. After showing me where he found it, I told him about the incident the night before.”
A geologist colleague sent me this story
“One time a man calls me and says he went on vacation and upon his return he found a meteorite embedded in his house – so I check it out the still embedded rock and impact trajectory – seemed all wrong – I asked him do you have a lawn service? – yep, – and do they have a riding mower? – yep, and they cut the grass while you were away? – yep – mystery solved – and it was a limestone cobble!!.”
“anyway…the attached photos are of a rock that appeared ON my patio about two weeks before…I did not record the exact day, but I believe it was around the 11th, or 12th of June 2010. I also HEARD it hit my shop/apartment while trying to take an afternoon nap around 2-3pm est. My shop has metal (steel) siding. I didn’t rise to check what made the noise, but nobody was mowing there yard, and I’m far enough away from the road, that I’m very certain it wasn’t flung or ejected . When I did rise and stepped out the door, sat down in my chair, I looked to my right and there it was. It does NOT look like any local rocks.”
“i assure you it’s meteorite because the lad didn’t look for it .he watched it fell on the sand near where they were standing one night. he knows much about the desert .he knows the whereabouts and where it fell coz not far from the tent .the next days and till the other friends forgot about it he went and picked it up.it took him very little time coz he knew where it was lying ! he recognized it coz its sight becomes different while he was walking around it. i asked him to take more photos. he is in the desert. he lives there.”
“I was standing 10 feet from a car today at work when I heard a loud smash from a customers car. Window was smashed white stuff all over windshield and that rock. The rock was cold like ice and in pic you can see it’s even wet.”
“In the early 1950’s my husband’s father was brought to a small forest in NY by a local and he showed him where the night before a meteor had crashed. It was approximately half the size of a Volkswagen sitting in a small crater at the end of a triangle shaped path cut through the trees. The path through the tree was a fairly clean cut and burnt.”
“In the mid-90’s a rock fell out of the sky in the evening and lodged into the siding of my parent’s house in Palmer, Kansas. I feel it might be a meteorite simply because it fell from the sky and because of the angle that it stuck into the house.”
“As we approached our house my wife Jean pointed to the roof of our house and said look someone has broken into our house. The roof of our house is tiled and one tile was smashed. At that point I could not think of any reason what smashed it. I fetched my ladder and got a spare roof tile from the shed. I had to get it replaced before the rain did any damage to the ceiling. The tile was on the third row up from the gutter and I could work on it easily. After taking all the broken pieces out I could then look inside but no more than a meter. Just to the right of the hole was a rock, I took the rock out and I immediately thought that someone had thrown it, but that just does not happen here. The rock is about the size of a cricket ball.”
“My wife back in November heard a load bang outside our house. The next morning when we walked out our door we found a rock looking thing in our walkway. We then went to our security cameras to see if we could find anything. The only thing that showed up was Motion, but we could not see anything.”
“I found, what i believe to be a meteorite in my yard, we heard it in the middle of the night hit my house and it damaged my gutters”
“I am sending you this email because my daughter came across this unusual rock in a strange and unusual way. It was May 31, 2011 around 1:30 in the afternoon. She was outside the home here in Pleasanton Texas, when suddenly she was struck on the back by this rock. What was strange was when I asked her which direction she was facing when struck. She searched for anyone who may of thrown it but found nobody. The opposite direction of which she was facing was only brushy farm land for cattle and no houses for a mile or more. I know the rock did not of hit her directly before striking her back, because she was not injured at all. She said she felt only a thump, like if it had come from a close distance. I believe if it may of struck the earth before bouncing towards her back. The rock seems to have some sort of fusion crust, also the core is bubble like, a lighter shade of rust color with white pigments slightly throughout the core. It looks to be one of those rare lunar meteorites.” [The rock the photo actually appeared to be a piece of hematite.]
A story sent to me from England
“Many years ago a friend and his buddies, who were mischievous lads in a rural area, found an old yoke from a lawnmower. Pondering what to do with such a find, he came up with the idea of making a giant slingshot. They found some truly huge rubber bands and collected a stash of appropriate sized stones, and they managed to rig up the device in an abandoned field outside town. They disported themselves on a fine morning by launching their missiles idly into the air until they became bored, then wandered off in search of fresh amusement. The next day, there was a story in the local paper about a mysterious hail of rocks from the sky. No one was hurt, but several cars suffered some damage.”
“I have this rock that fell in my driveway today. My first thought from about 20 feet away was that it was some sort of compacted fecal matter that fell from an air craft. If it is that, I am sorry that I have wasted your time. The sound it made when it fell can only be described as a loud thwomp. Roughly 5 seconds after I heard the sound, I opened my front door. (I happened to be standing next to said front door, picking my son up.) There was nobody outside. After taking a picture of it from roughly 3 feet away from it, I put my son to bed, returned and took another picture from one foot away. Those are in the other email I sent you.”
“It was in the evening around 6:30 – 7:00pm that I heard a bang on my bedroom glass louver and bang on the floor as the rock landed on the bedroom floor. My immediate reaction was that someone threw a rock at the house. Upon inspection I discovered this rock the size of a match box, grey in colour, dense and rather unique. Am suspicious it might be a meteorite.”
“This rock fell from the sky and passed through a tree then sank about 6 feet into the ground.” It looks like metal but not iron or steel. The shape is not rounded enough to be a meteorite and there is no hint that the outside melted, so it is not a meteorite. A meteorite this size would not bury itself to a depth of 6 feet.
“I’ve attached some photos of two meteorites that struck and broke through the front deck of my house last week.”
“Three weeks ago, my family and I were spending the weekend in our beach house, pacific shore in Guatemala, it was mid-day and we all were around the pool, suddenly we heard a really strong bang in the air, kind of like gunshots and then this rock bounced hard and fast in the cement floor and went into the pool, it hit first the roof which has a 45 degree angle, in which i assume the banging noise was made, thank God it didn’t hit any of us. The noise was so hard that the maintenance guy came from the other side of the house sure that shots have been fired. My daughter took it out from the water and the first thing that got to my attention was that it was very dark (black) with little white specs, and a couple of dark red specs as well on it, and heavy for its size.”
“Well, I walk down there 1 day and nobody goes down there it’s fresh snow laid out. Well the pond had a big [redacted] hole in the middle and it cracked from side to side in web form the pond is about a half acre. Then I found a hole in the snow with this in it. … Now this is 100 percent for a fact, it did fall from the sky. … There is no way nobody put it there because there was a hole in the snow & there was no footprints around for miles. Only way is if someone dropped it out of a helicopter.”
“A colleague and I were in front of my store on Friday night/Saturday morning, and during the middle of our discussion we heard and felt something land right next to us in the parking lot. We both immediately reacted to it as though something was thrown at us, but there were no people or vehicles around. During our reaction, there seemed to be more pieces falling, and both of our reactions were akin to it “raining” objects from the sky. After the noise of falling objects stopped, we noticed the larger rock (of the three samples that I have gathered) laying adjacent to us. This rock was wedged in between the asphalt of the parking lot and the cement of the curb leading to the walkway in front of my store. I mentioned to my colleague that something had hit my knee (I was sitting at the time of the impact) and we immediately noticed another fragment on the walkway next to my stool. Finally, we spotted a third fragment to our left, also on the asphalt. After discussing what the rocks could be, we felt as though a single rock had impacted near us and then broke into fragments, explaining the “raining” effect that we had considered. Upon coming to this conclusion, we decided to investigate the general direction from where we had heard the initial impact, and we quickly found what looked to be a pair of impact trails. We took a photo of one of the trails (a linear pile of powdered debris) and that was the end of the night. Observations: After looking at the rocks immediately upon discovery, I felt as though they resembled cement or concrete.”
from APP.com
Toms River man learns mysterious rock that landed in his yard is not a meteorite
By HARTRIONO B. SASTROWARDOYO October 6, 2010
TOMS RIVER — The mysterious rock that almost conked township resident Salvadore D’Addario on the head a year ago is still that: a mystery.
Experts from the planetariums at Ocean County College, here, and Raritan Valley Community College in Branchburg said Wednesday it most definitely is not a meteorite.
“It’s molded. It doesn’t have the right shape, nor does it have burn marks,” said William P. McClain, a Raritan Valley planetarium instructor who has a background in geology and meteorology.
Shrugging his shoulders, McClain concluded, “It’s a piece of something.”
In July 2009, D’Addario had been cutting the grass in his backyard when a 3-pound rock of something landed near him.
“I looked around to see if someone was throwing rocks from across the lagoon, but there was no one there,” D’Addario said.
D’Addario said the object was warm to the touch — also an indication that it did not originate from beyond the Earth’s atmosphere.
“If it came from space, it would have been ice cold,” said Gloria A. Villalobos, director of Robert J. Novins Planetarium at Ocean County College.
That is, an object falling through the atmosphere eventually is slowed by atmospheric friction, quelling any fireballs — although the “slow speed” is relative. Linda Welzenbach, a Smithsonian Institution geologist and meteorite scientist, estimated that one meteorite that hit a Lorton, Va., doctor’s office in January had a terminal velocity of 200 mph. By contrast, a skydiver reaches a terminal velocity of about 120 mph.
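The terminal-velocity comparison above can be sanity-checked with the standard drag formula, v_t = sqrt(2mg / (rho * A * Cd)). The sketch below is illustrative only: the mass, radius, and drag coefficient are assumed values for a roughly fist-sized stony meteorite, not figures taken from the article.

```python
import math

def terminal_velocity(mass_kg, radius_m, drag_coeff=0.8, air_density=1.2):
    """v_t = sqrt(2*m*g / (rho * A * Cd)) for a falling sphere near the ground."""
    g = 9.81                               # m/s^2
    frontal_area = math.pi * radius_m**2   # m^2
    return math.sqrt(2 * mass_kg * g / (air_density * frontal_area * drag_coeff))

# Assumed: ~1 kg stony meteorite of ~4 cm radius (density ~3.5 g/mL).
v = terminal_velocity(mass_kg=1.0, radius_m=0.04)
print(f"~{v:.0f} m/s, or roughly {v * 2.23694:.0f} mph")
```

With these assumptions the answer comes out in the low hundreds of mph, the same ballpark as the 120 to 200 mph figures quoted above; a larger or denser stone falls correspondingly faster.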
Villalobos also showed Sal D’Addario and his wife, Arleen, some of the college’s meteorites. They all were rounded, had thumb-sized dimples from tumbling through the atmosphere and were attracted to a magnet, Villalobos pointed out.
By contrast, D’Addario’s rock was blocky, had no “crust,” as would be evident in an iron meteorite, and does not stick to a magnet. There is also an equally mysterious square-shaped impression, as if made by one end of an Allen wrench.
It also is probable that the material did not come from a spent rocket booster or satellite or fall off a plane. D’Addario’s rock is heavy for its size, possibly indicating a lead-based composition. It can be scratched by a key, which means it is a soft metal. The aerospace industry generally uses aluminum or titanium, lightweight but strong materials.
McClain and Villalobos suggested that D’Addario take the specimen to a geology laboratory, where scientists can take pieces of it and analyze its composition. D’Addario said he will follow through with the suggestion.
“You may not find out where it came from, but at least you can find out what it is,” said Marc Schneider, an Ocean County College planetarium volunteer who brings meteorites to various schools.
“I live in Tucson, AZ on the Air Force base, Tuesday morning I heard a loud thud hit my garage, I didn’t think much of it until I was about to walk my dog and found a rock sitting in my drive way. I brought it into the house. Later on when school was out my daughter came home and said it was a moon rock.”
“Day before yesterday around noon while having tea with a friend on my back patio, I heard a “pop” behind me, and almost instantly something smacked hard into the concrete just behind my chair and then went whizzing across the patio and into the grass. It came from the upside of a steep hill just behind my house and after looking at the ballistics, it appears to have skipped off of rocks and perhaps a gravel road before reaching my patio.”
“Yesterday walking to my son’s track meet i saw this rock fall from the sky (others saw it to) it hit the curb in the parking lot and broke in half.”
“Yesterday afternoon my wife and I were sitting in my backyard in Independence Kansas. We had our patio umbrellas up and heard a sound not unlike really fine sand raining down on the umbrellas and a nearly simultaneous thud on the roof which is directly beside the umbrellas. This rock rolled between us.”
A good crash story
About 2 weeks ago, the family and I decided to go out for gelato. I took the dog out to pee, and was standing about 5 feet from my car. It was evening, sunny, calm wind. Suddenly, a loud crash hit the sunroof of my car, splaying glass on the dog and I. Honestly, I thought it was a gunshot…after all, we live in New Orleans. After ducking and waiting for the next shot which did not come, I approached the car.
Now you have to understand at this point, that I have only had the car back in my possession for about a week since a tree fell through the sunroof during the “tropical depression” that blew through here.
Anyway, as I went to the car, my husband came out of the house with our disabled daughter to go to the gelato joint. I told him what happened and he climbed up to survey the damage. In the middle of the sunroof was a hole the size of a rock, and it hit with enough force to crack the edges of the sunroof…the rock had gone through and was lodged under the glass.
Now, do I believe that a meteorite hit my car, parked in the driveway, minding its own business just a week or so of having a new sunroof put it…no. But, do I think a kid (there are none around) could have lobbed the rock with that much force to shatter the glass…no.
So, here I am writing to you to ask…what the hell happened???
Attached are photos of a rock that l found on the aluminum roof of our porch. There is no explanation as to where it came from; nothing hanging over porch and no kids nearby, and birds don’t lay rocks.
“…my husband and I heard a loud crash today in our kitchen around 2- 3pm. We though something glass fell or broke. We went looking and couldn’t find anything. A couple hours later I opened the window and sitting there between the screen and the window on the window ledge was a rock. See pictures below. It had broken through the screen, hit the window(which fortunately did not break) and landed on the sill. At first I thought maybe someone threw the rock at the house from the back yard (we back to a golf course and there are a few pine trees between the house and the 7th tee) but we didn’t see anyone out there when we ran outside to see what had made the crashing sound we had heard, and there were no people detected on our ring camera.”
“I have spent the past 24 hours learning all about meteorites. There is no fusion crust and it only weighs 130 grams and measures 6 cm x6cm x7 cm and is 3cm high. We pretty much concluded it is an earth rock based on those factors!”
“Unfortunately that means that mischievous kids, probably young teens or preteens, thought it would be fun to break someone’s window!”
Here’s the rock and, yes, it does not have a fusion crust.
I received these photos with the following explanation: “…these are things that have actually fallen in my mothers yard. Making HUGE intentions in the growing and breaking her flower pots etc. not just things that were found.” I do not know whether they attract a magnet. There is no rust and they do not have the dark patina characteristic of iron meteorites. I suspect space debris or possibly airplane parts.
“I would like to get your opinion on these pictures, showing a piece of rock that destroyed a huge chimney. This meteorite weighs 700 grams and it’s not attracted by a magnet.”
The chimney, the damage, and the alleged 700-g culprit. The rock is not a meteorite – there is no fusion crust. But how could that little rock do all that damage?
These 2 photos were sent with the simple message, “These hit my garage door.” The rocks (left) do not look like meteorites but they did hit the garage door (right) hard enough to do some damage.
This photo was sent with the simple message “It fell through my dad’s windshield in the 70’s.” I’m sure it did, but it’s not a meteorite. It’s about the right size for throwing, though.
I’ve lost the story about this one, but a rather strange looking rock hit the wooden deck hard enough to make a hole in the deck and shatter the rock. Once again, there is no fusion crust.
“A little more than thirty years ago, I went to my vacation house in the Pocono mountains of Pennsylvania and discovered a puncture hole in my roof that went through the shingles and the ½’’ thick plywood sheathing and then through the ¼ ‘ wooden soffit below. The hole in the roof plywood was punctured inward, and the soffit was punctured outward, indicating whatever caused the damage came from above. Directly below the exit hole of the soffett lying on the ground was the rock shown in the photos.”
A story from England: “At some time around 3-4pm we were sat overlooking the estuary when we both heard a sound as if a fighter jet was rapidly approaching from behind and to our right. We both instinctively turned our heads, only to witness an object traveling at phenomenal speed cutting through the air in a diagonal trajectory, disappearing behind the trees in front of us and straight into the estuary. The sound was akin to air being cut as if by a whip, like white noise but with a slight whistling tone. The whole thing probably lasted only a couple of seconds.
We were very perplexed by what we saw but it didn’t cross our minds until later that the only plausible explanation would be a meteorite. With this in mind and with the knowledge that the estuary is tidal, we decided to return to try and recover the object. It was a few days before we were able to return to the site at the right time and conditions. The estuary at this point is light mud, sand and light stones around the perimeter. We had a search area of at least 200 x 50 metres. We started to lose hope when Lyndsey found a stone which was notably different to the rest in the area and had no weathering.”
This is the rock that the sender found after the event. I think that it is possible that they witnessed a meteorite fall, but I think that this rock is not the meteorite. There is no fusion crust and the surface texture is too rough for a freshly fallen meteorite.
From Canada: “Hello, a meteorite fall on my irrigation pond the January 28 2022. … It’s broke 40 cm thickness of ice and rift over 20m. The biggest hole are 1m x 1.5m. I found fragments of the same type exactly where all the holes was. … All pictures are the same type of meteorite. … Just need an analyse to confirm everything.”
On the right is a photo of one of the recovered rocks. It is not a meteorite. The surface texture is much too rough for a meteorite, there is no fusion crust, and there is a hint of layering. None of the other rocks looked like meteorites either. Perhaps a meteorite did make the hole in the ice, but none of the recovered rocks is a meteorite.
The person who sent this photo was moose hunting in northern Alberta in 2006. “I heard a loud impact on a tree and brush rustling toward me when this debris came out a couple meters from me and rolled away from me. As I walked up to it I could hear a sizzling sound. I put my hand down near it but was surprised that it was cold.”
The rock does not look like a freshly fallen meteorite. The crust is not a meteorite fusion crust and the rock is too reddish to be a freshly fallen meteorite. It looks like a weathered iron-oxide concretion that was formed by deposition of successive layers of hematite. The finder had it cut in two and the inside is solid, fine-grained reddish hematite. This is another case where the recovered rock may not be the rock that made the noise.
“A rare rock fell at my house and made a big whole in my ceiling. I would like to know if is a meteorite.” No, it is not. It is just a rock. There is no fusion crust.
I saved the best one until last – This is a real meteorite
On February 28, 2021, a bright fireball was seen by 1000+ observers over England, Ireland, and northern Europe. The Wilcock family of Winchcombe, Gloucestershire, England, heard a noise that evening (21:54 local time) but did not discover until the next morning that one of the stones had landed on their driveway. Full story here: Winchcombe. The meteorite, a rare carbonaceous chondrite (CM2), is only the 23rd meteorite to have been found in England and one of only 21 CM chondrites to have been observed as a fall.
Left: Hannah, Rob, and Cathryn Wilcock enjoying the splat in their driveway. Right: The splat; 319 g of material was recovered from the driveway and lawn. Carbonaceous chondrites are very friable (easy to break) and have the consistency of a charcoal briquette. Image credits: UK Meteorite Network
Left: A fragment of the Winchcombe meteorite (size=?). Note the small patch of adhering fusion crust. Image credit: Natural History Museum, London. Right: A 152-g stone found on March 6 in a field. Note that fusion crust covers most of what we can see. Image credit: finder Mira Ihasz.
Space.com: “Perseid meteoroids (which is what they’re called while in space) are fast. They enter Earth’s atmosphere (and are then called meteors) at roughly 133,200 mph (60 kilometers per second) relative to the planet. Most are the size of sand grains; a few are as big as peas or marbles. Almost none hit the ground, but if one does, it’s called a meteorite.” For all meteor showers, the meteors you see are made almost exclusively by sand- to pea-sized objects that ablate away in the atmosphere and never hit the ground. If you see a meteor during a meteor shower, you are not going to find a meteorite.
“During the meteorite shower in early August, I believe what seems to be a meteorite fell in the yard of my daughter’s home. It apparently was smoldering all night and the odor from the heated rock was a stench. It hit the surface of the ground and the plants that were beneath it, the roots were scorched and destroyed. It took several buckets of water to stop the smoke and cool it off.”
“A meteor struck my driveway last Wednesday night/Thursday morning, and I have the fragments, what is the probability that it’s from the comet Swift-Tuttle associated with the Perseid Meteor Shower?”
“I was up watching the shower, then as I was going to bed I heard a loud whoosh, and a thud as something hit the earth. There were no neighbourhood boys who could have thrown objects. The next morning I went into the garden to see what fell there, and saw nothing, but a hole in the soft soil about eight inches deep. This rock was at the bottom of the hole.”
“It landed about 1 am on August 13 2007! My daughters … and I were hoping to see some meteors and were hoping to see something spectacular! | yes |
Meteoritics | Are meteorites hot when they hit the Earth? | yes_statement | "meteorites" are "hot" when they "hit" the earth.. when "meteorites" "hit" the earth, they are "hot". | http://meteorite.unm.edu/meteorites/meteorite-museum/how-id-meteorite/ | Do You Think You May Have Found a Meteorite? | Meteorite MuseumMeteorite Museum
University of New Mexico
Do You Think You May Have Found a Meteorite?
Meteorites are pieces of asteroids and other bodies like the moon and Mars that travel through space and fall to the earth. They are rocks that are similar in many ways to Earth rocks, but it is exciting to find a piece of another planet here on Earth. Meteorites fall to Earth all the time and are distributed over the entire planet, so you could even find one in your own backyard!
When a meteor enters the Earth's atmosphere the resulting fireball produces light, due to the friction between its surface and the air. A smoke or dust trail is produced in the sky by the fireball caused by the removal of material from the surface of the meteorite. Because the fireballs are traveling at high speeds, they sometimes produce a sonic boom or whistling heard 30 miles or more from where the meteorite lands. Several booms may be succeeded by irregular sputtering sounds, comparable to an automobile backfiring.
Meteorites have several distinguishing characteristics that make them different from terrestrial (Earth) rocks. You can use this list to guide you through them. Usually, meteorites have all or most of these characteristics. Sometimes, detailed chemical analyses need to be done, but only on rocks that meet all these characteristics. Since detailed analyses take time and money, look for the easy characteristics first.
Meteor-rights!
Fusion crust
Meteorites which have fallen recently may have a black "ash-like" crust on their surface. When a meteorite falls through the Earth's atmosphere a very thin layer on the outer surface melts. This thin crust is called a fusion crust. It is often black and looks like an eggshell coating the rock. However, this crust weathers to a rusty brown color after several years of exposure on the Earth's surface and will eventually disappear altogether. In the image to the right, the fusion crust is the thin, black coating on the outside of the meteorite.
In desert areas, rocks often develop a shiny, black exterior called desert varnish. This develops due to microbial activity on the rock. Usually, but not always, you will be able to see the same kind of varnish on lots of rocks in the same area. This web page has some good examples of desert varnish.
Regmaglypts
The surface of a meteorite is generally very smooth and featureless, but often has shallow depressions and deep cavities resembling clearly visible thumbprints in wet clay or Play-Doh. Most iron meteorites, like the example at right, have well-developed regmaglypts all over their surface.
Ordinary chondrites and stony meteorites like the one at left have smooth surfaces or regmaglypts.
Iron-nickel metal
Most meteorites contain at least some iron metal (actually an alloy of iron and nickel). You can see the metal shining on a broken surface. Meteorites without metal in them are extremely rare and they need to have some of the other characteristics of meteorites to be able to identify them as meteorites. Iron meteorites have a dense, silvery appearing interior with no holes or crystals. Stony iron meteorites are about half metal, half crystals of green or orange olivine. Stony meteorites contain small flecks of metal that are evenly distributed throughout the meteorite. The metal in a meteorite has the unusual characteristic of containing up to 7% nickel. This is a definitive test of a meteorite, but requires a chemical analysis or acid etching to detect. See more about metal objects below.
Iron meteorites are made of dense and compact metal.
Stony-iron meteorites have metal and olivine crystals.
Metal is distributed evenly throughout ordinary chondrites.
Density
Unusual density is one of meteorites' more characteristic features. It's not enough to say your rock is heavy. Density is how heavy a rock is for its size or compared with other rocks. Iron meteorites are 3.5 times as heavy as ordinary Earth rocks of the same size, while stony meteorites are about 1.5 times as heavy. Lumps or fragments of man-made materials, ore rocks, slag (the byproduct of industrial processes) and the iron oxides magnetite and hematite, are also common all throughout the world and are frequently dense and metallic. So this test is helpful but not definitive.
To measure the density of your rock, you need to measure its weight and its volume. The weight is easy: weigh the rock on a balance or scale (either in grams or in ounces; 1 oz = 28 g). For the volume, get a household liquid measuring cup that is bigger than your rock and fill it halfway with water. Put the rock in and measure how high the water comes now. Subtract the first number from the second number to get the rock's volume. If your rock is too big to put in a measuring cup, then measure it with a ruler (make sure your measurement is in centimeters; 1 in = 2.54 cm). Measure the longest side and the shortest side, then one more length perpendicular to both sides. Calculate a rough volume by multiplying all three lengths together. When you multiply the three lengths together, you will get your answer in cm*cm*cm, or cm3. 1 cm3 = 1 milliliter = 1 mL.
The density is the weight divided by the volume. Compare your rock's density to Earth rocks:
Rock Type              Density (g/mL)   Density (oz/cup)
Granite                2.8              23
Sandstone              2.6              21
Basalt or lava rock    3.1              26
Hematite               5.1              42
Stony meteorite        3.5              29
Iron meteorite         8.0              66
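If you would rather let a short script do the arithmetic, the sketch below follows the procedure described above (weight divided by volume, with the ruler-and-box measurement as a rough fallback) and compares the result against the reference values in the table. The function names and the matching tolerance are my own choices, not part of the original page.

```python
REFERENCE_DENSITIES = {          # g/mL, from the table above
    "granite": 2.8,
    "sandstone": 2.6,
    "basalt or lava rock": 3.1,
    "hematite": 5.1,
    "stony meteorite": 3.5,
    "iron meteorite": 8.0,
}

def density_from_displacement(weight_g, volume_ml):
    """Density = weight / volume, using the measuring-cup (water displacement) method."""
    return weight_g / volume_ml

def density_from_box(weight_g, length_cm, width_cm, height_cm):
    """Rough density from the ruler method; the box volume overestimates, so this is a lower bound."""
    return weight_g / (length_cm * width_cm * height_cm)

def closest_matches(density, tolerance=0.4):
    """Return reference rock types whose density is within `tolerance` g/mL of the measurement."""
    return [name for name, ref in REFERENCE_DENSITIES.items() if abs(ref - density) <= tolerance]

# Example: a 350 g rock that displaces 100 mL of water -> 3.5 g/mL
print(closest_matches(density_from_displacement(350, 100)))   # ['basalt or lava rock', 'stony meteorite']
```

As the page stresses elsewhere, density is only suggestive: slag, magnetite, and hematite overlap these ranges, so a match is a reason to keep testing, not a conclusion.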
Magnetism
Most meteorites contain some iron-nickel metal and attract a magnet easily. You can use an ordinary refrigerator magnet to test this property. A magnet will stick to the meteorite if it contains much metal. Some meteorites, such as stony meteorites, contain only a small amount of metal, but will attract a magnet hanging on a string. Metal detectors can alert you to whether a rock contains metal, but not all metal is magnetic. For instance, aluminum sets off metal detectors but is not magnetic. So, if you find a rock with a metal detector, try the magnet test too.
In addition to meteorites containing iron, there are man-made and naturally-occurring materials that are magnetic and are easily confused with meteorites. Magnetite and hematite are common iron-bearing minerals that are often mistaken for meteorites. Both minerals can occur as large masses with smooth surfaces that are heavier than typical rocks, but have some features which resemble meteorites. Magnetite is very magnetic (hence its name) and hematite is mildly magnetic. Use the streak test below to distinguish these minerals.
Chondrules
The most common meteorites to fall on Earth are called chondrites. These are stony meteorites that contain small balls of stony material called chondrules that are about a millimeter (1/25 inch) across. You need to break open the meteorite to see the chondrules.
Meteor-Wrongs
It isn't always easy to identify a meteorite even using the properties discussed above, because some characteristics are shared by common terrestrial rocks and man-made materials. Let's look at some areas where confusion can arise.
Round (Spherical) Shape
Meteorites are almost never perfectly round or spherical and rarely are they aerodynamically shaped. They are usually very irregular in appearance and come in a variety of different shapes and sizes.
Bubbles or holes
Many people believe that meteorites have the appearance of being molten, perhaps having a frothy appearance or bubbles on their surfaces. However, this is not the case. The outer portion of a meteorite, the fusion crust, is either smooth or has the characteristic regmaglypts (thumb prints) described earlier. However, many terrestrial igneous rocks are porous and have holes in them. These holes or 'vesicles' were produced by bubbles of gas that formed in the magma as it was erupted. If you find a rock that is porous or contains vesicles it is a terrestrial rock.
Crystals
If there is quartz (a clear or milky white crystal) it is not a meteorite. Quartz is produced on the earth in evolved rocks at plate margins; in contrast, other planetary bodies like asteroids do not have these kinds of settings and do not produce large quartz crystals. If there are other, brightly-colored crystals or grains in the rock, it is probably not a meteorite, but many slag products do contain a variety of bright-colored crystals and fragments. If there is an easily visible crystal structure it might not be a meteorite. This is not conclusive because some of the rarer meteorites do have some crystal structure. However, most ordinary meteorites do not unless viewed under a microscope.
Hot or Radioactive
Most meteorites are cold when they hit the Earth's surface and do not start fires on the ground. Their trip through the atmosphere is short and the friction heat that burns up the outside does not have a chance to heat up the inside of the meteorite.
Meteorites are made of the same elements and minerals as terrestrial rocks and are not any more radioactive than terrestrial rocks, so you can't find them with a Geiger counter.
Streak
Streak is what the rock leaves behind, like a crayon. Common ceramic tile, such as a bathroom or kitchen tile, has a smooth glazed side and an unfinished dull side which is stuck to the wall when installed. Take the sample that you think is a meteorite and scratch it vigorously on the unglazed side of the tile. If it leaves a black or gray streak, the sample is almost certainly magnetite, and if it leaves a red-brown streak, it is almost certainly hematite. A meteorite, unless it is very heavily weathered, will not leave a streak on the tile. If you don’t have a ceramic tile, you can also use the inside of your toilet tank cover (the heavy rectangular lid on top of the tank) - it is heavy, so be careful.
Other kinds of metal
Human activity has produced objects made from pure iron for centuries, so it is possible to confuse lumps of man-made iron with meteoritic materials. Objects such as iron grinding balls often have a smooth rounded appearance and may be thought to be meteorites. Lumps of iron slag from smelting processes can also have some similarities to meteorites, so it is important to be careful. The major difference between iron produced by human activity and meteoritic iron is the presence of the element nickel. Iron metal in all meteorites contains at least some nickel, whereas man-made metal objects generally do not. In addition, the interior structure of iron meteorites is unique and unlike any man-made metal alloys. Special analysis and preparation techniques are required to examine the internal structure and composition of a suspect meteorite. The results of such tests are, however, completely definitive.
Meteorites in New Mexico
For information on hunting for meteorites in New Mexico, please visit this page.
Meteoritics | Are meteorites hot when they hit the Earth? | yes_statement | "meteorites" are "hot" when they "hit" the earth.. when "meteorites" "hit" the earth, they are "hot". | https://www.clemson.edu/public/geomuseum/meteorites.html | Meteorite Identification | Public | Clemson University, South Carolina | Meteorite Identification
Important Terms
Meteorites are “fragments of rock or iron from a meteoroid, asteroid, or possibly a comet that pass through a planet or moon's atmosphere and survive the impact on the surface” (1).
Meteoroids are what meteorites are called while still in space (5).
Meteors are “the streaks of light we see at night as small meteoroids burns up passing through our atmosphere” (1)
Shooting stars are “small pieces of rock or dust that hit Earth's atmosphere from space” (2). They include meteors and fireballs (1).
How Rare Are Meteorites?
Meteorites are incredibly rare. Most meteors (90-95%) don’t survive the trip through the atmosphere, and those that do often fall unnoticed in remote areas or into oceans. Some would even say they are more rare than diamonds. (3, 4)
Question and Answer:
Are meteorites magnetic? Yes. A majority of meteorites contain a significant amount of iron. If it isn’t magnetic, it probably isn’t a meteorite. (6, 7)
Are meteorites heavy? Typically, yes. The same thing that causes meteorites to be magnetic often causes them to be heavy: their high iron content. This iron causes them to be more dense than earth rocks of the same size. (6)
Are meteorites radioactive? Mostly no. Meteorites do contain small amounts of radioactive particles that are quickly lost, but they last such a short amount of time and are in such trace amounts that they are not dangerous. (5, 6)
Are meteorites on fire when they crash into earth? No. While meteoroids are still in space, they are cold. The amount of time they spend shooting through the atmosphere is too brief to warm the rock completely, and so when they land, they are not hot enough to set anything on fire because of the rock’s temperature. (5)
Origins of Meteorites
Not all meteorites are the same age. The oldest we have recorded clock in at 4.56 billion years old, especially those that come from asteroids. Meteorites from the moon tend to range from 2.9-4.5 billion years old, while those from Mars vary from 200 million to 4.5 billion years old. (8)
Types of Meteorites
Stony meteorites are the most common type of meteorites. There are three subtypes of this group: chondrites, achondrites, and a third, more rare group, planetary achondrites. Chondrites are made of chondrules, which are “droplets of melted rock which cooled in microgravity into tiny spheres” (1). These are the most common type of stony meteorites and the most common type of meteorites on earth in general. Achondrites lack chondrules and “form on planetary bodies with a distinct core and crust” (8). They are less common than chondrites. Planetary achondrites are simply achondrites that come from the moon or Mars. (8, 1)
Stony-iron meteorites are the rarest of the three types of meteorites and contain an equal mixture of silicates and a nickel-iron alloy. There are two subgroups: Pallasites and Mesosiderites. Pallasites are “believed to form between the outer shell and core of an asteroid” (8), and the primary silicate mineral found in them is olivine. Mesosiderites have a silicate portion made of mainly igneous rock fragments and are likely formed by collisions between asteroids that are rich in metal and rich in silicate. Very few meteorites of this type have been found. (8, 1)
Iron meteorites are the most recognizable types of meteorites even though they aren’t the most common. They don’t have any subgroups, and they are made of mostly a nickel-iron alloy. (8)
Meteorite Identification
Warning: The Campbell Geology Museum is not a Meteorite Identification Service.
Meteor-wrongs
Definition: A rock that is believed to be a meteorite but turns out to be an earth rock.
Common Giveaways that a Rock is a Meteor-wrong:
If a rock contains quartz. The picture on the left is what people typically think of when they imagine quartz. However, the picture on the right also contains quartz.
If a rock has vesicles (tiny holes created by gas escaping from cooling molten rock).
Commonly Found Meteor-wrongs:
Slag- Also called cinder or runoff. One of the most commonly found meteor-wrongs. Has vesicles, which meteorites don’t have. Often mistaken for a meteorite because of its melted look and is found everywhere.
Magnetite and Hematite- Often mistaken for meteorites because they are magnetic. The first picture is magnetite, while the second group of pictures features different kinds of hematite.
Dark black rocks- ex. Basalt.
Tests (a rough screening sketch that combines them follows this list):
Warning: Passing a test does not guarantee that a specimen is a meteorite.
Magnetism: A majority of meteorites are magnetic. If your specimen isn’t magnetic, it probably isn’t a meteorite.
Streak Test: Scratch your specimen on a ceramic tile. “Unless it is heavily weathered, a stony meteorite typically won’t leave a streak mark on the ceramic.” (7) If the streak is black or gray, your sample is likely magnetite. If it is a red or brown streak, you probably have hematite.
Nickel Test: Run a chemical test for nickel. If the proportion of nickel is inside the range for meteorites, you may have a meteorite.
Weight Test: Meteorites are much more dense than normal earth rocks.
Fusion Crust Test: Fusion crust is a thin, dark rind formed on a meteorite as it streaks through our atmosphere. It does not occur on earth rocks and will disappear over time due to weathering, but it can be seen on some fresh meteorites. The absence of a fusion crust does not mean a specimen is not a meteorite.
Regmaglypts Test: Regmaglypts, also known as thumbprints, are unique to meteorites. They are oval depressions found on many meteorites. The absence of regmaglypts does not mean a specimen is not a meteorite.
Window Test: One of the last tests to perform. Create a small window to see inside. If you can see shiny metal flakes, you may have a meteorite. If the inside does not have shiny metal flakes and instead is plain, you probably don’t have a meteorite.
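A minimal sketch of how the tests above might be combined into a first-pass screen is shown below. The field names and the decision order are assumptions on my part; as the warning notes, passing these checks does not make a specimen a meteorite, and the absence of fusion crust or regmaglypts does not rule one out (which is why those two tests are left out of the sketch).

```python
from dataclasses import dataclass

@dataclass
class Specimen:
    magnetic: bool            # sticks to or noticeably attracts a magnet
    streak_color: str         # "none", "black-gray", or "red-brown" on unglazed ceramic
    unusually_dense: bool     # clearly heavier than earth rocks of the same size
    has_vesicles: bool        # gas bubbles / holes suggest terrestrial lava or slag
    contains_quartz: bool     # quartz means it is not a meteorite
    metal_flecks_inside: bool # shiny metal visible in a small filed "window"

def screen(s: Specimen) -> str:
    """Very rough screen based on the tests listed above; not a substitute for lab analysis."""
    if s.contains_quartz or s.has_vesicles:
        return "meteor-wrong (quartz or vesicles)"
    if s.streak_color == "black-gray":
        return "probably magnetite"
    if s.streak_color == "red-brown":
        return "probably hematite"
    if s.magnetic and s.unusually_dense and s.metal_flecks_inside:
        return "worth sending for a nickel test"
    return "probably not a meteorite"

print(screen(Specimen(True, "none", True, False, False, True)))  # worth sending for a nickel test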
Meteoritics | Are meteorites hot when they hit the Earth? | yes_statement | "meteorites" are "hot" when they "hit" the earth.. when "meteorites" "hit" the earth, they are "hot". | https://meteoritelab.com/about/meteorites/ | About Meteorites | Southwest Meteorite Laboratory | What are meteorites?
Meteorites are bits of other planets, asteroids, or planetoids which are found on Earth. They once were moving along in space on a collision course with Earth and survived entry into our atmosphere and the impact. back to top
What planets do they come from?
So far only two parent bodies have been positively identified; Earth’s moon and Mars. However, much evidence suggests that the origin of the HED group is the asteroid 4Vesta. back to top
How do you know it is from Mars?
For several years scientists were puzzled by the SNC group of meteorites. They are not only compositionally different from all other meteorites but they are also much younger (175 million to 1.3 billion years old compared to the average of other meteorites at ~4.5 billion years old). Originally it was by a process of elimination they reasonably concluded that these meteorites must have originated on Mars. This conclusion is now well accepted in the scientific community. In early 1983 scientists reported that the meteorites in this group contained chemical, isotopic, and petrologic features consistent with available data collected by the Viking landers. By late 1983 others showed that the isotopic concentrations of various noble gasses found in shergottioes (S) were consistent with the atmosphere of Mars as measured by the Viking landers. More recent data sets transmitted by Mars Global Surveyor, Pathfinder and other scout missions also confirm the origin of the SNC meteorites. back to top
Has anyone been hit by meteorite?
A woman was struck on the thigh by a stone meteorite that crashed through the roof of her home in Sylacauga, Alabama, USA. Though it was probably pretty painful the injury was not life-threatening.
Things reported to be struck by a meteorite fall are: several houses, a barn, paved roads, a dog, a cow, a car, a mail box, a large machine, and a pot on a stove. back to top
Fall or Find?
A “Fall” is a recovered meteorite that was witnessed to fall. A “Find” is a recovered meteorite whose fall was not witnessed. back to top
How are meteorites named?
Meteorites are typically named for the town, post office or other geographical landmark nearest the recovery site. back to top
Is a meteor a meteorite?
No. A meteor is a streak in the sky and gets its name from the Greek word “meteoros” meaning something high in the sky. A meteor is a phenomenon that occurs when a solid object makes contact with Earth’s atmosphere causing an exchange of ions resulting in a bright streak of light in the sky. The object causing the meteor can be as small as a fleck of dust or as big as a mountain. Every meteorite started out as a meteor when it entered our atmosphere. However every meteor doesn’t produce a meteorite. Most of the objects burn completely up before reaching the ground. A meteorite is the portion of an object that survives passage through the atmosphere and lands on the ground. back to top
Why study meteorites?
Meteorites are a poor man’s space probe and time machine. They teach us about planetary geology and the formation of our solar system. They also represent natural resources mankind may use to survive as it travels through space. back to top
Where do meteorites come from?
Most meteorites probably come from the asteroid belt, between Mars and Jupiter. Some have been linked to specific planetary bodies: Mars, Earth’s moon, and Asteroid 4 Vesta. Some meteorites contain presolar grains that existed before the ignition of the Sun and predate our solar system. Generally speaking, most meteorites are probably samples of the Near Earth Asteroids (NEAs). back to top
How many have been found?
The Meteoritical Bulletin has recorded 34,554 valid meteorite names (fall or find locations worldwide) since 1957. Each name represents a location where at least one meteorite was found. Often multiple (paired) specimens are recovered from the same site. In rare cases a single location has yielded thousands of pieces. Individual specimens range in size from sub-gram up to 60 tons. back to top
When was the first meteorite found?
The oldest meteorite find recorded was recovered from an archeological site, in Ur, circa 2500BC. back to top
How are meteorites found?
Most meteorites have been found by visual recognition; out of place rocks that don’t fit in with their surroundings. Some meteorites are seen to fall and recovered immediately and some are found with a metal detector. Not all meteorites have enough metal to register on a metal detector. back to top
Are meteorites radioactive?
Generally, no appreciable levels of radiation are found in meteorites. One meteorite which fell in Japan a few years ago had some measurable radioactivity. back to top
What are meteorites made of?
Meteorites are primarily made of stone, iron and a mixture of the two. Stone meteorites are primarily made of Iron/Magnesium silicates (olivine and pyroxene) and Aluminum silicates (feldspars) with minor amounts of other rock-forming minerals. Iron meteorites are generally made of mostly iron with ~5.5% to 18% nickel plus trace amounts of many siderophile elements. Relatively few iron meteorites contain more than 18% nickel. Stony-iron meteorites can be made of all of the above or combinations of them. back to top
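Since the nickel fraction is what separates meteoritic metal from man-made iron (a point the UNM and Clemson pages above also make), the first thing usually done with an assay number is a simple range check. The sketch below uses the ~5.5-18% window quoted in this paragraph; treating it as a hard cutoff is my simplification, since the same paragraph notes that a few irons do exceed 18% nickel.

```python
def classify_metal(nickel_wt_percent: float) -> str:
    """Rough screen for an assayed lump of metal, using the ~5.5-18% Ni range quoted above."""
    if nickel_wt_percent < 0.5:
        return "essentially nickel-free: almost certainly man-made iron or steel"
    if 5.5 <= nickel_wt_percent <= 18.0:
        return "nickel content consistent with iron-meteorite metal"
    return "unusual nickel content: needs proper lab work (a few irons fall outside the range)"

for ni in (0.0, 7.2, 25.0):
    print(f"{ni:>5.1f}% Ni -> {classify_metal(ni)}")
```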
Are meteorites hot when they hit the earth?
A freshly landed meteorite is never glowing red; in fact, a freshly landed meteorite is seldom if ever even as hot as a hot potato. Considering the temperature of deep space is only a few degrees above absolute zero (~ -273°C, a theoretical limit where no heat remains), we know the meteoroid is very cold, frozen solid so to speak. The descent rarely lasts as much as 30 seconds. Imagine a solid ice cube fired through a blast furnace: the surviving portion is still very cold. back to top
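The claim that a descent of 30 seconds or less cannot warm the interior can be checked with a back-of-the-envelope conduction estimate: heat soaks into rock only to a depth of roughly sqrt(thermal diffusivity x time). The diffusivity below is a typical textbook value for rock that I am assuming; it is not given on this page.

```python
import math

alpha = 1e-6   # m^2/s, assumed thermal diffusivity of ordinary rock
t = 30.0       # s, roughly the longest descent time quoted above

depth_mm = math.sqrt(alpha * t) * 1000
print(f"Heat penetrates only ~{depth_mm:.0f} mm during a {t:.0f}-second descent")
```

A few millimeters of briefly heated (and largely ablated) surface wrapped around a deep-frozen interior is exactly the picture of a thin fusion crust over a cold stone described here and in the next answer.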
What is fusion crust?
Fusion crust forms on all meteorites that survive passage through our atmosphere and land on the Earth. It is usually black, rarely cream colored and often weathered to brown or reddish-brown. Fusion crust is like a rind (rarely thicker than 1 millimeter) of melted rock that is caused by friction with our atmosphere. With a minimum entry velocity of 11.2 kilometers per second, the air in the meteorite’s path is compressed to the point of superheating. The outer surface of the meteorite melts and ablates as it races to the Earth’s surface; yet the interior of the meteorite remains cool. back to top
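The 11.2 km/s figure also shows why only a thin rind melts rather than the whole stone: the kinetic energy per kilogram dwarfs the energy needed to heat and melt rock, so the surface melts and is stripped away by ablation faster than heat can conduct inward. A rough comparison follows; the melt-energy number is an assumed ballpark for silicate rock, not a value from this page.

```python
v = 11_200.0                           # m/s, minimum entry velocity quoted above
kinetic_energy_per_kg = 0.5 * v**2     # J/kg of meteoroid
melt_energy_per_kg = 2e6               # J/kg, assumed ballpark to heat and melt silicate rock

print(f"Kinetic energy: ~{kinetic_energy_per_kg / 1e6:.0f} MJ/kg")
print(f"Roughly {kinetic_energy_per_kg / melt_energy_per_kg:.0f}x the energy needed to melt the rock itself")
```

Most of that energy goes into the surrounding air, and what does reach the stone leaves again with the ablated melt, which fits the millimeter-scale crust quoted above.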
What is orientation?
Orientation is a term used to describe an aerodynamic shape resulting from a meteorite passing through our atmosphere remaining in fixed position. As the air superheats the surface directional melting and ablation occurs systematically sculpting the specimen and producing distinctive fusion crust with directional flow lines. back to top
What is secondary fusion crust?
Secondary fusion crust forms over a surface of a meteorite that broke in the upper atmosphere while still traveling at cosmic velocity. back to top
Are meteorites magnetic?
All nickel-iron meteorites are strongly attracted to a magnet. Most stone meteorites have varying magnetic susceptibility depending upon iron content. Some meteorites have no iron at all and are not attracted by a magnet. back to top
Do they magnetize?
Any iron can be magnetized but meteorites are not natural magnets. back to top
Why do meteorites affect a compass?
Because of their iron content. A compass needle is a magnet on one end and is attracted by the north pole of the earth. When a stronger source of magnetic material is close to the needle the magnet is attracted to it. back to top
What is a chondrule?
Chondrules are little round things found in primitive meteorites called chondrites hence their name from the Greek root “chondros” meaning rounded grain. Chondrules are considered the first solid objects to form in the solar nebula. They are generally spherical igneous droplets with various composition and structures, like: porphyritic, granular, barred, radiating, and cryptocrystalline. back to top
How old are meteorites?
Generally speaking meteorites are about the same age as our solar system 4.5 billion years old. A few meteorites are considerably younger; Martian meteorites at 175 million to 1.3 billion years and some Lunar meteorites with ages of <4.3 billion years. back to top
What is an impact structure?
Impact structures are craters formed by meteorite impact, like: Great Meteor Crater; Chesapeake Bay; Wolfe Creek; and Nordlinger Reis to name a few. back to top
How many impact structures are there?
Aside from small, relatively recent, impact events like Henbury and Sikhote-Alin with multiple craters and impact pits, some 174 impact structures have been identified worldwide. back to top
What is an impactite?
An impacite is glass formed as a result of meteorite impact. It is always associated with an impact structure. back to top
What is a tektite?
A tektite is glass formed as a result of meteorite impact. However a tektite has a very low or no water content and was transported by the meteorite impact blast. Some tektites show evidence of cosmic ray exposure and aerodynamic shapes produced by re-entry strongly suggesting they were ejected from Earth’s atmosphere and returned at a later date. back to top | A freshly landed meteorite is never glowing red, in fact a freshly landed meteorite is seldom if ever even as hot as a hot potato. Considering the temperature of deep space is only a few degrees above absolute zero (~ 273°C, a theoretical limit where no heat remains), we know the meteoroid is very cold, frozen solid so to speak. The descent rarely lasts as much as 30 seconds. Imagine a solid ice cube fired through a blast furnace, the surviving portion is still very cold. back to top
What is fusion crust?
Fusion crust forms on all meteorites that survive passage through our atmosphere and land on the Earth. It is usually black, rarely cream colored and often weathered to brown or reddish-brown. Fusion crust is like a rind (rarely thicker than 1 millimeter) of melted rock that is caused by friction with our atmosphere. With a minimum entry velocity of 11.2 kilometers per second, the air in the meteorite’s path is compressed to the point of superheating. The outer surface of the meteorite melts and ablates as it races to the Earth’s surface; yet the interior of the meteorite remains cool. back to top
What is orientation?
Orientation is a term used to describe an aerodynamic shape resulting from a meteorite passing through our atmosphere remaining in fixed position. As the air superheats the surface directional melting and ablation occurs systematically sculpting the specimen and producing distinctive fusion crust with directional flow lines. back to top
What is secondary fusion crust?
Secondary fusion crust forms over a surface of a meteorite that broke in the upper atmosphere while still traveling at cosmic velocity. back to top
Are meteorites magnetic?
All nickel-iron meteorites are strongly attracted to a magnet. Most stone meteorites have varying magnetic susceptibility depending upon iron content. Some meteorites have no iron at all and are not attracted by a magnet. back to top
Do they magnetize?
Any iron can be magnetized but meteorites are not natural magnets. back to top
Why do meteorites affect a compass?
Because of their iron content. | no |
Meteoritics | Are meteorites hot when they hit the Earth? | yes_statement | "meteorites" are "hot" when they "hit" the earth.. when "meteorites" "hit" the earth, they are "hot". | https://www.bibalex.org/SCIplanet/en/Article/Details.aspx?id=13540 | SCIplanet - Anatomy of Myth (2) | Articles
Anatomy of Myth (2)
What makes myths dangerous is that they are usually developed through verbal tradition. Ironically enough, among the most commonly believed myths nowadays are scientific myths, some of which we bust here.
There is a Dark Side to the Moon
Actually, every part of the Moon is illuminated at sometime by the Sun. This misconception has come about because there is a side of the Moon that is never visible on Earth; this phenomenon is called tidal locking. A tidally locked body takes just as long to rotate around its own axis as it does to revolve around its partner; this synchronous rotation allows one hemisphere to constantly face the partner.
Meteors are Heated by Friction with the Atmosphere
When a meteoroid enters Earth's atmosphere, becoming a meteor, it is actually the speed compressing the air in front of the object that causes it to heat up. It is the pressure on the air that generates a heat intense enough to make the rock so hot that is glows brilliantly for our viewing pleasure if we are lucky enough to be looking at the sky at the right time.
We should also dispel the myth about meteors being hot when they hit Earth, becoming meteorites. Meteorites are almost always cold when they hit; in fact, they are often found covered in frost. This is because they are so cold from their journey through space that the entry heat is not sufficient to do more than burn off the outer layers.
The Human Body Pops when Exposed to Space Vacuum
This myth is the result of science fiction movies, which use it to add excitement or drama to the plot. In fact, a human can survive for 15–30 seconds in outer space as long as they breathe out before exposure; this prevents the lungs from bursting and sending air into the bloodstream. After 15 or so seconds, the lack of oxygen leads to unconsciousness, which eventually leads to death by asphyxiation.
Brain Cells Cannot Regenerate
The reason for this myth being so common is that it was believed and taught by the science community for a very long time. It was not until 1998 when scientists discovered that brain cells in mature humans can regenerate. It had previously been long believed that complex brains would be severely disrupted by new cell growth, but the study revealed that the memory and learning center of the brain can create new cells, giving hope for an eventual cure for illnesses such as Alzheimer’s.
Lightning never strikes the same place twice
Next time you see lightning strike and you consider running to the spot to protect yourself from the next bolt, don't! Lightning does strike the same place twice; in fact, it is very common. Lightning obviously favors certain areas such as high trees or buildings; in a large field, the tallest object is likely to be struck multiple times until the lightning moves sufficiently far away to find a new target. The Empire State Building actually gets struck around 25 times a year.
Unfortunately, among the myths that are continuously being generated and propagated, even in this day in time of unprecedented knowledge, more often than not causing mass disturbance among the population, are eschatology myths, which are myths that revolve around the end of the world; the Apocalypse. | Articles
Anatomy of Myth (2)
What makes myths dangerous is that they are usually developed through verbal tradition. Ironically enough, among the most commonly believed myths nowadays are scientific myths, some of which we bust here.
There is a Dark Side to the Moon
Actually, every part of the Moon is illuminated at sometime by the Sun. This misconception has come about because there is a side of the Moon that is never visible on Earth; this phenomenon is called tidal locking. A tidally locked body takes just as long to rotate around its own axis as it does to revolve around its partner; this synchronous rotation allows one hemisphere to constantly face the partner.
Meteors are Heated by Friction with the Atmosphere
When a meteoroid enters Earth's atmosphere, becoming a meteor, it is actually the speed compressing the air in front of the object that causes it to heat up. It is the pressure on the air that generates a heat intense enough to make the rock so hot that is glows brilliantly for our viewing pleasure if we are lucky enough to be looking at the sky at the right time.
We should also dispel the myth about meteors being hot when they hit Earth, becoming meteorites. Meteorites are almost always cold when they hit; in fact, they are often found covered in frost. This is because they are so cold from their journey through space that the entry heat is not sufficient to do more than burn off the outer layers.
The Human Body Pops when Exposed to Space Vacuum
This myth is the result of science fiction movies, which use it to add excitement or drama to the plot. In fact, a human can survive for 15–30 seconds in outer space as long as they breathe out before exposure; this prevents the lungs from bursting and sending air into the bloodstream. After 15 or so seconds, the lack of oxygen leads to unconsciousness, which eventually leads to death by asphyxiation.
Brain Cells Cannot Regenerate
The reason for this myth being so common is that it was believed and taught by the science community for a very long time. It was not until 1998 when scientists discovered that brain cells in mature humans can regenerate. | no |
Meteoritics | Are meteorites hot when they hit the Earth? | yes_statement | "meteorites" are "hot" when they "hit" the earth.. when "meteorites" "hit" the earth, they are "hot". | https://www.concordmonitor.com/dilly-cliffs-fire-caused-by-humans-14202820 | Forest official says humans, not a meteorite, caused the Dilly Cliffs fire | Forest official says humans, not a meteorite, caused the Dilly Cliffs fire
The Dilly Cliffs fire, which burned more than 70 acres and ended tourist season early for Lost River Gorge, was caused by a person – not, as a few speculated, a meteorite, officials said.
“We have determined it was human caused. Beyond that, the exact ignition source has not been identified,” wrote Tiffany Benna, public services staff officer for the White Mountain National Forest.
The blaze in the town of North Woodstock was first reported in early October and lingered for more than a month, closing a portion of the Appalachian Trail and forcing the popular Lost River Gorge to close its boardwalks and caves to make room for firefighting equipment.
When the blaze first erupted, the local fire chief told media that somebody had reported seeing something falling from the sky, leading to speculation that a meteorite may have landed and ignited the dry underbrush.
However, such an event is so unlikely as to be all but impossible since the rocks – called meteors when they’re in the air, but meteorites when they hit the ground – are not very hot when they hit the earth, despite depictions in movies.
Meteors are traveling thousands of miles an hour when they enter the atmosphere, which is why they usually burn up while still hundreds of miles above our heads, but if they survive the trip all the way to the ground, they will have been slowed to what is known as terminal velocity, just a few hundred miles an hour, and will have cooled off.
Although it drew the most attention, the Dilly Cliffs fire was one of many wildfires that broke out in the Northeast after a hot, dry summer and fall.
Gov. Chris Sununu and Vermont Gov. Phil Scott have signed a joint letter to House and Senate leadership asking to increase funding for fighting and preventing forest fires.
“This is far from just a ‘Western’ issue,” the governors wrote.
(David Brooks can be reached at 369-3313 or [email protected] or on Twitter @GraniteGeek.)
David Brooks is a reporter and the writer of the sci/tech column Granite Geek and blog granitegeek.org, as well as moderator of Science Cafe Concord events. After obtaining a bachelor’s degree in mathematics he became a newspaperman, working in Virginia and Tennessee before spending 28 years at the Nashua Telegraph . He joined the Monitor in 2015. | Forest official says humans, not a meteorite, caused the Dilly Cliffs fire
The Dilly Cliffs fire, which burned more than 70 acres and ended tourist season early for Lost River Gorge, was caused by a person – not, as a few speculated, a meteorite, officials said.
“We have determined it was human caused. Beyond that, the exact ignition source has not been identified,” wrote Tiffany Benna, public services staff officer for the White Mountain National Forest.
The blaze in the town of North Woodstock was first reported in early October and lingered for more than a month, closing a portion of the Appalachian Trail and forcing the popular Lost River Gorge to close its boardwalks and caves to make room for firefighting equipment.
When the blaze first erupted, the local fire chief told media that somebody had reported seeing something falling from the sky, leading to speculation that a meteorite may have landed and ignited the dry underbrush.
However, such an event is so unlikely as to be all but impossible since the rocks – called meteors when they’re in the air, but meteorites when they hit the ground – are not very hot when they hit the earth, despite depictions in movies.
Meteors are traveling thousands of miles an hour when they enter the atmosphere, which is why they usually burn up while still hundreds of miles above our heads, but if they survive the trip all the way to the ground, they will have been slowed to what is known as terminal velocity, just a few hundred miles an hour, and will have cooled off.
Although it drew the most attention, the Dilly Cliffs fire was one of many wildfires that broke out in the Northeast after a hot, dry summer and fall.
Gov. Chris Sununu and Vermont Gov. Phil Scott have signed a joint letter to House and Senate leadership asking to increase funding for fighting and preventing forest fires.
“This is far from just a ‘Western’ issue,” the governors wrote.
(David Brooks can be reached at 369-3313 or [email protected] or on Twitter @GraniteGeek.)
| no |
Meteoritics | Are meteorites hot when they hit the Earth? | yes_statement | "meteorites" are "hot" when they "hit" the earth.. when "meteorites" "hit" the earth, they are "hot". | https://science.howstuffworks.com/peru-meteor.htm | How did a meteor make hundreds of people sick? | HowStuffWorks | How did a meteor make hundreds of people sick?
""
A meteor strike in Peru may have made up to 600 people sick, according to recent reports. Find out if the meteor strike in Peru is what made people sick.
Image courtesy AFP/Getty Images
On the afternoon of Sept. 15, 2007, residents of a village near Lake Titicaca in southern Peru heard a loud roaring noise and looked up to see a ball of fire blazing through the sky. The object struck the earth, creating a loud noise, shaking the ground and launching debris as far as 250 meters away from the impact site [source: National Geographic News]. The event resulted in a crater measuring about 16 feet deep and 55 feet across [source: Living in Peru]. The impact of the object registered a 1.5 on the Richter scale [source: AP].
But before scientists could determine what happened, hundreds of local residents became sick. Up to 600 people in the area became ill, some of them after venturing to the site of the crater [source: Living in Peru]. Some of those who were ill said they felt nauseated, were vomiting and had headaches.
Advertisement
Besides the widespread illness, several other mysterious events occurred. Early reports included claims from residents that water in the crater boiled for several minutes after impact and that a smell of sulfur filled the air. All of these events generated worried speculation as to what exactly happened and why people were sick.
Many different ideas were put forth, several of them suggesting a meteorite impact. Some wondered whether a meteorite's impact released harmful gases and possible radiation. Another proposed that there wasn't a meteorite at all and that a natural geyser or small volcanic eruption might have expelled gases from underneath the soil. Other reports mentioned a "fireball" as a possible cause of the explosion. Finally, a sensational article syndicated in the Russian tabloid Pravda made its rounds on the Internet; the writer claimed that as part of an elaborate conspiracy, the U.S. government shot down its own spy satellite, which spilled its radioactive fuel upon crashing in the Andes.
The confusion was compounded by conflicting details in some reports, especially about symptoms of those who were sick, whether groundwater boiled, if a strange smell was present and even how large the crater was. Some scientists speculated that noxious gases were stirred up by the meteor impact, while others claimed that dust caused people to experience dizziness and nausea. But what really did happen near Lake Titicaca? Read on to find out.
Advertisement
Did a meteorite crash in Peru?
""
Most meteorites have high metal content, which helps them survive their entry through Earth's atmosphere, though the meteorite in Peru was rocky.
Six days after the initial event, scientists from Peru's Mining, Metallurgy and Geology Institute confirmed that a meteorite did crash in the area and that the impact stirred up arsenic fumes. Tests confirmed that groundwater in the area was contaminated with arsenic. The explosion sent up some of the arsenic in the form of gas, making people sick. (In some parts of Peru, the soil and groundwater contain natural arsenic deposits.)
Despite initial concerns, those who became sick improved after several days. Peruvian health workers reported that they treated about 200 residents for various symptoms, none especially severe [source: Living in Peru]. The hundreds of people who claimed they felt ill may have been suffering from a psychosomatic reaction. Some of the local residents said they thought that the loud roaring noise they heard and the subsequent explosion were the sounds of Chilean military forces launching an attack. The stress and mysterious nature of the event could have provoked physical symptoms, even without a physical cause.
Advertisement
As for the meteorite itself, Peruvian scientists recovered and analyzed several samples. One scientist who specializes in meteors estimated that prior to impact, the meteoroid measured 10 feet across, though it could have been far larger before breaking apart in the atmosphere [source: AP]. The meteorite was not radioactive or harmful. The samples did show that the meteorite was made entirely out of rock, which is considered unusual. Meteorites found on Earth usually contain metal because those meteorites are better at surviving the stress of entering the atmosphere, when friction can raise temperatures up to 3,000 degrees Fahrenheit. But the tests confirmed that it was indeed a meteorite that scientists found and that the explosion did not come from some other source. The meteorite samples also had magnetic properties, which scientists attributed to the rock's high iron content.
The meteorite and arsenic discoveries put to rest questions of what caused the explosion and why people became sick, but questions remain about the reports of water boiling in the crater -- if they are true at all. Was the meteorite so hot that, upon impact, it caused the groundwater to boil? Some commentators have claimed that meteorites, especially those of moderate size like scientists believe this one was, are cold when they hit the ground -- not hot. However, there's no conclusive proof about whether meteorites are hot or cold upon impact. Available evidence indicates that just after landing, meteorites are cold or only slightly warm [source: Cornell University Astronomy Department]. Meteorite impacts aren't known to cause major fires or to scorch large areas.
So why did the water boil? One possible answer is that some sort of geothermal event took place, with venting gases boiling the water, or perhaps the local villagers misreported what they saw. In any case, it's a question that may never be solved.
For more information about meteors and other related topics, please check out the links on the next page.
Meteor? Meteoriod? Meteorite?
Discussing meteor activity can be tricky because the terminology is confusing. The term meteor actually refers to the streak of light caused by a piece of space debris burning up in the atmosphere. The pieces of debris are called meteoroids, and remnants of the debris that reach the Earth's surface (or another planet's) are called meteorites. To learn more, check out this article that explains how big meteors have to be to make it to the ground.
"Doctors Aid in Rising Number of Illnesses after Meteorite Crash." Living in Peru. Sept. 19, 2007. http://www.livinginperu.com/news-4732-environmentnature-medical-teams-aid-in-rising-number-of-illnesses-after-meteorite-crash | But the tests confirmed that it was indeed a meteorite that scientists found and that the explosion did not come from some other source. The meteorite samples also had magnetic properties, which scientists attributed to the rock's high iron content.
The meteorite and arsenic discoveries put to rest questions of what caused the explosion and why people became sick, but questions remain about the reports of water boiling in the crater -- if they are true at all. Was the meteorite so hot that, upon impact, it caused the groundwater to boil? Some commentators have claimed that meteorites, especially those of moderate size like scientists believe this one was, are cold when they hit the ground -- not hot. However, there's no conclusive proof about whether meteorites are hot or cold upon impact. Available evidence indicates that just after landing, meteorites are cold or only slightly warm [source: Cornell University Astronomy Department]. Meteorite impacts aren't known to cause major fires or to scorch large areas.
So why did the water boil? One possible answer is that some sort of geothermal event took place, with venting gases boiling the water, or perhaps the local villagers misreported what they saw. In any case, it's a question that may never be solved.
For more information about meteors and other related topics, please check out the links on the next page.
Meteor? Meteoriod? Meteorite?
Discussing meteor activity can be tricky because the terminology is confusing. The term meteor actually refers to the streak of light caused by a piece of space debris burning up in the atmosphere. The pieces of debris are called meteoroids, and remnants of the debris that reach the Earth's surface (or another planet's) are called meteorites. To learn more, check out this article that explains how big meteors have to be to make it to the ground.
"Doctors Aid in Rising Number of Illnesses after Meteorite Crash." Living in Peru. Sept. 19, 2007. http://www.livinginperu.com/news-4732-environmentnature-medical-teams-aid-in-rising-number- | no |
Meteoritics | Are meteorites hot when they hit the Earth? | yes_statement | "meteorites" are "hot" when they "hit" the earth.. when "meteorites" "hit" the earth, they are "hot". | https://news.asu.edu/20160218-discoveries-asteroids-meteorites-lecture-laurence-garvie | The dangers we face from meteorites — or not | ASU News | The dangers we face from meteorites — or not
ASU Center for Meteorite Studies curator sets record straight on space-rock odds, their characteristics — and the incident in India
Before we begin reporting on his talk, let’s get something out of the way that Laurence Garvie, research professor and curator for Arizona State University's Center for Meteorite Studies at the School of Earth and Space Exploration, has been hearing about for two weeks.
Whatever killed the Indian bus driver about two weeks ago was not a meteorite.
“We still don’t have a direct hit,” Garvie said at a reception before his lecture on “Asteroids, Meteorites, and Dangers to Life on Earth.”
Meteorites don’t create explosions, he explained. And the likelihood of someone being killed by a rock falling from space is still astronomically low.
In 1954, a woman in Sylacauga, Ala., was hit by a particle from a meteorite that fell through the roof of her house. “Even then, it didn’t hit her directly,” Garvie said. “It hit the fridge and bounced off her arm.”
“It all comes down to probability, doesn’t it?” he said. “From above, we’re about a foot wide. And there are 7 billion people on Earth ... we could do the numbers!”
A meteorite like the one that wiped out the dinosaurs (such as the ones on his tie) is likely to occur only once every 100 million years, said ASU research professor Laurence Garvie. This and photo below by Ben Moffat/ASU Now
Garvie presented several numbers during his lecture, all of them fascinating.
Some 78,000 tons of extraterrestrial material hits the Earth every year, most of it dust. Most meteorites come from the asteroid belt between Mars and Jupiter. The asteroid belt is not like what you see in the movies; it’s not that crowded. Meteorites also come from the moon or Mars. “We’ve sent rovers there, but we haven’t brought anything back,” Garvie said. “Nature has done that for us.”
“As these objects come into the atmosphere, they produce a massive spectacle,” he said.
Meteorites are not hot and glowing when they hit the ground. In space, heated by the sun, they might only reach 200 degrees. Even when they fall through the stratosphere, they only have about four seconds to get hot. Garvie compared them to Baked Alaska; the inside is still cool.
Meteorites fall everywhere, but they’re tiny.
“The vast majority of meteorites are about a centimeter or so,” he said.
“Fortunately for us the very large events are rare,” Garvie said. A fall like the one captured on many dashboard cameras three years ago in Chelyabinsk, Russia, happens about once a generation.
A Tunguska-level eventThe Tunguska event was a large explosion that occurred near the Stony Tunguska River, in Yeniseysk Governorate, nowKrasnoyarsk Krai, Russian Empire, on the morning of 30 June 1908 (N.S.).[1][2] The explosion over the sparsely populated Eastern Siberian Taiga flattened 2,000 km2 (770 sq mi) of forest and caused no known casualties. The cause of the explosion is generally thought to have been a meteor. It is classified as an impact event, even though no impact crater has been found; the meteor is thought to have burst in mid-air at an altitude of 5 to 10 kilometres (3 to 6 miles) rather than hit the surface of the Earth. — Wikipedia as happened in Russia in 1915 occurs about once every 100 years. Chicxulub, which slammed into the Yucatan Peninsula, wiped out the dinosaurs and trashed the entire planet, is likely to occur only once every 100 million years.
A 100-foot-diameter asteroid is orbiting Earth in a wobble and will next pass by in March. Scientists estimate it has a one in 250 million chance of hitting Earth. If it does, it will create a crater only a few hundred meters wide.
“What I hope you go away with is that you’re safe, basically,” Garvie said. “Will there be another large impact? Yes. When will it happen? Hopefully not soon.”
The School of Earth and Space Exploration’s New Discoveries Lecture Series brings exciting scientific work to the general public in a series of informative evening lectures, each given by a member of the faculty once a month throughout the spring. The School of Earth and Space Exploration is an academic unit of the College of Liberal Arts and Sciences.
Scott Seckel
Next Story
Clutching hopes, dreams and minerals, about 50 people waited to approach Captain Fireball. Did they own interesting and valuable meteorites? Or just another rock?With the speed and finality befitting his nickname, Captain Fireball — aka research professor Laurence Garvie, collections manager of the Center for Meteorite Studies at Arizona State University — crushed dreams like a metal lump shat...
Between a rock and a hard place
It's the one day a year public can bring rocks to meteorite center to be ID'd.
Rocks traveled from across the U.S. — but not the galaxy — for ASU's space day.
November 9, 2015
People from across US enjoy ASU's meteorite-ID event, even if not all get the verdict they want
Clutching hopes, dreams and minerals, about 50 people waited to approach Captain Fireball. Did they own interesting and valuable meteorites? Or just another rock?
With the speed and finality befitting his nickname, Captain Fireball — aka research professor Laurence GarvieLaurence Garvie is a research professor in the School of Earth and Space Exploration, which is part of ASU's College of Liberal Arts and Sciences., collections manager of the Center for Meteorite Studies at Arizona State University — crushed dreams like a metal lump shattering a window.
There wasn’t time to waste. Saturday was Earth & Space Exploration Day at ASU. The open house, hosted by ASU's School of Earth and Space Exploration, features science-related activities for families, and it is the one day a year the public is invited to bring in what might be a meteorite for identification. The rest of the year, the center’s doors are closed firmly, as is stated — multiple times — on nearly every single page of the center’s website. The university’s meteorite identification program was closed five years ago because they were overwhelmed with requests.
“Nope,” Garvie said to one hopeful visitor. “It’s a pebble that has a shape to it.”
“No,” he said to another. “It’s not fusion crust. It just has some weathering on it.”
“The physical attributes of the sample are completely wrong,” he said to a gentleman who insisted he has something from space. (“I get one of those every time,” Garvie said.)
“You’re 100 percent sure it’s not a meteorite?” the gentleman insisted. He had a stack of data printouts. Tests have been done, he told Garvie, expensive tests.
“It’s not like any meteorite I’ve ever seen,” Garvie said. Captain Fireball sends the crestfallen but still insistent man downstairs to Dr. Rock, a geologist who was identifying rock specimens.
Ron DePlazes did not have a meteorite either, but he was not disappointed. The Phoenix man owns a company that paints road cuts for state highway departments across the West so they’re not glaring white. You can see his work along the Beeline Highway on the way up to Payson.
“I paint rocks,” he said. He doesn’t collect them. “No, I don’t. I pick up Indian stuff. I like meteorites, though.”
DePlazes was walking across the yard of his business at Central Avenue and the Salt River with his son Wednesday morning when he spotted his sample.
It is heavy, and magnetic — DePlazes affixed two small magnets to the rock — but it’s not a meteorite. It is similar to the Argentinian Chajari meteorite, according to Garvie, who sliced a sample from it and analyzed it in his lab.
“They don’t know it’s not a meteorite, but they don’t know what it is,” DePlazes said. “He’s a lot more sophisticated than I am.”
Before coming in to the event, DePlazes carefully went over the meteorite ID page on the center’s website and answered the five questions on the page.
“I didn’t think I had a meteorite,” he said. But he wanted to come in and check anyway. “He doesn’t know what it is, and that’s amazing considering how much they know here at Arizona State,” he said.
Rex Myerscough and his son, Rex Jr., drove two days from Clearwater, Florida, to ASU's Earth & Space Exploration Day with a 76-pound rock that has quite the origin story. But was it declared a meteorite Saturday in Tempe? Alas, no.
Deanna Dent/ASU Now
(From left) Jesse, Dylan, Robin, Bennett and Shelby DeWitt pose with their sample of bronze found while Jesse was rabbit hunting in northern Arizona. It was pronounced a "meteor-wrong" Saturday.
Deanna Dent/ASU Now
Jeff Christensen drove in from Mesquite, Nevada, with a handgun case full of rocks. Christensen found his rocks while he was working at the local airport.
Deanna Dent/ASU Now
“What are you going to do in Mesquite?” Christensen said. “There’s 1,800 people. I just kick rocks when it’s slow.” He didn’t have a meteorite either.
Deanna Dent/ASU Now
Ron DePlazes was walking across the yard of his business at Central Avenue and the Salt River in Phoenix with his son Wednesday morning when he spotted his sample. It is heavy, and magnetic but — not a meteorite.
Deanna Dent/ASU Now
Brother Richard and Chris Pool show off a cool-looking rock that turned out to be another meteor-wrong Saturday.
Angela Zapata and Maylo Corella and their children, Myles and Alisse Corella, pose with their meteor-wrong sample Saturday. The family had purchased it from a neighbor and hoped it might be a meteor, though no one could positively identify what the sample was composed of.
Photo by Deanna Dent/ASU Now
Lane and Bernadyne Agan of Show Low hold their meteor-wrong. Lane said he saw it fall and wondered whether he picked up the wrong rock.
Photo by Deanna Dent/ASU Now
Hilarie O’Dell and her three children brought in a sample found by her husband camping by St. Johns. It turned out to be industrial slag. “It’s the most common ‘meteor-wrong,’ ” Garvie said.
Deanna Dent/ASU Now
John Patrick of Payson (left) and Steve Thomas of Phoenix show their samples, which turned out to be meteor-wrongs Saturday.
Deanna Dent/ASU Now
Laurence Garvie, collections manager of the Center for Meteorite Studies and a research professor in ASU's School of Earth and Space Exploration, did not identify a single meteorite Saturday. Some years are like that.
Deanna Dent/ASU Now
Jeff Christensen drove in from Mesquite, Nevada, with a handgun case full of rocks. It was either drive south to Arizona State University or north to Brigham Young University to get them identified. Christensen found his rocks while he was working at the local airport.
“What are you going to do in Mesquite?” he said. “There’s 1,800 people. I just kick rocks when it’s slow.”
He didn’t have a meteorite either.
“He’s saying they don’t get those bubbles in there,” Christensen said. “That’s weird.”
Weird doesn’t even begin to describe the Myerscough saga. Rex Myerscough and his son, Rex Jr., drove two days from Clearwater, Florida, with a 76-pound rock that, could it communicate with a meteorite, would have a better story to tell than simply falling from space. It’s a story spanning a quarter-century of drama. Pay attention. It gets complicated.
The Myerscough saga began in 1972, when Rex Sr. bought three lots and built a house on one. In the process of prepping a second lot for construction, he found an unusual rock. He took it to the school where his wife taught and left it in a classroom as a curiosity.
After some years, his wife retired. Some time after that, the school principal called. A geologist had dropped by the school and said the rock was a meteorite. The principal didn’t want something that valuable in the school, so Myerscough picked it up and brought it home.
Myerscough repairs and maintains pressure washers for a living. NASA called him because they had a broken pressure washer. He told NASA he would fix it for free if they would identify his rock. It was a deal.
Then the deal fell through. The scientist who was slated to inspect the rock was transferred to a facility in Tennessee. It was disappointing, not least because of the effort it took to take a sample from the rock.
Lori, Kal (center) and Steve Baker chat with ASU research professor Laurence Garvie while examining a meteorite display at Earth & Space Exploration Day on Saturday in Tempe. Photo by Deanna Dent/ASU Now
“You had to take an 8-pound sledge and beat it and beat it just to get a little matchstick off it,” Myerscough said.
Then the rock was stolen from Rex Jr.
“I was robbed at gunpoint,” he said.
A suspect was taken into custody. The pair of detectives assigned to the case looked familiar to Mrs. Myerscough, and she looked familiar to them. It turned out she’d taught both of them in elementary school.
“The detectives said they had a white room,” Myerscough said. “They said, ‘We’ll take him in the white room and we’ll find out where it is.’”
The suspect confessed in the white room. The rock was buried under the monkey bars on a Catholic school playground.
“They sent a SWAT team out there and recovered it,” Myerscough said. “I go out there and there’s all these SWAT guys standing around looking at it.”
They thought it was a million-dollar meteorite.
Finally, in this multi-decade quest to identify the mystery rock, Myerscough found out about the meteorite identification event at Earth & Space Exploration Day. He and his son packed the rock in a sawn-off orange Home Depot bucket, placed it on a dolly, and took off for Arizona.
“I couldn’t send this thing off to Tennessee or wherever,” he said. “UPS wouldn’t take it. The airlines wouldn’t take it. It weighs 76 pounds.”
It took two days and more than 2,100 miles to drive from Clearwater to Phoenix.
“After all that, it’s not a meteorite,” Rex Jr. said.
The Myerscoughs, having solved the mystery, are taking the rock home, where they will keep it. Was the trip worth it?
“Oh, I’ve never been to this part of the country before,” Myerscough said.
Laurence Garvie (right) explains to Ron DePlazes that by sanding the edge of the sample he can give a definitive answer whether his sample is a meteorite (it was not) Saturday in Tempe. Photo by Deanna Dent/ASU Now
Meanwhile, back at the identification table the carnage continued.
Hilarie O’Dell and her three children: “My husband found that camping by St. Johns. It’s not a meteorite?”
It turned out to be industrial slag. “It’s the most common ‘meteor-wrong,’ ” Garvie said. He gamely looked at it anyway — after all, this was a woman who had waited in line with three children — and pointed out all the reasons it wasn’t a meteorite.
Lane Agan from Show Low: “I saw it fall. Did I pick up the wrong rock?”
“It’s totally unlike a meteorite.”
A young man with a tiny box lined with white cotton holding two specks. |
“It all comes down to probability, doesn’t it?” he said. “From above, we’re about a foot wide. And there are 7 billion people on Earth ... we could do the numbers!”
A meteorite like the one that wiped out the dinosaurs (such as the ones on his tie) is likely to occur only once every 100 million years, said ASU research professor Laurence Garvie. This and photo below by Ben Moffat/ASU Now
Garvie presented several numbers during his lecture, all of them fascinating.
Some 78,000 tons of extraterrestrial material hits the Earth every year, most of it dust. Most meteorites come from the asteroid belt between Mars and Jupiter. The asteroid belt is not like what you see in the movies; it’s not that crowded. Meteorites also come from the moon or Mars. “We’ve sent rovers there, but we haven’t brought anything back,” Garvie said. “Nature has done that for us.”
“As these objects come into the atmosphere, they produce a massive spectacle,” he said.
Meteorites are not hot and glowing when they hit the ground. In space, heated by the sun, they might only reach 200 degrees. Even when they fall through the stratosphere, they only have about four seconds to get hot. Garvie compared them to Baked Alaska; the inside is still cool.
Meteorites fall everywhere, but they’re tiny.
“The vast majority of meteorites are about a centimeter or so,” he said.
“Fortunately for us the very large events are rare,” Garvie said. A fall like the one captured on many dashboard cameras three years ago in Chelyabinsk, Russia, happens about once a generation.
A Tunguska-level eventThe Tunguska event was a large explosion that occurred near the Stony Tunguska River, in Yeniseysk Governorate, nowKrasnoyarsk Krai, Russian Empire, on the morning of 30 June 1908 (N.S.).[1][2] | no |
Meteoritics | Are meteorites hot when they hit the Earth? | yes_statement | "meteorites" are "hot" when they "hit" the earth.. when "meteorites" "hit" the earth, they are "hot". | https://griffithobservatory.org/exhibits/edge-of-space/pieces-of-the-sky-meteorites/ | Pieces of the Sky (Meteorites) - Griffith Observatory - Southern ... | Pieces of the Sky
Earth is bombarded by a constant rain of debris from space. Most of it is fine dust that drifts down to the surface. Other pieces can be as small as a grain of sand or larger than a house. We see flashes of light when pieces of comets, asteroids, and other planets fall through our atmosphere.
After these bits of the sky land on Earth, they are collected as meteorites. We study space rocks to learn more about the formation of our solar system and the evolution of our planets.
Meteors and Comets
Pieces of comets and asteroids fall from space and cause the flashes of light in the sky that we see as meteors.
The space between the planets is not empty. It is filled with dust shed from comets and fragments of broken asteroids. As Earth’s orbit takes us through this debris, pieces of it enter our atmosphere. The smaller ones burn up because of friction with air molecules. The resulting flashes of light in the sky are meteors. Anything that survives the trip and lands on the surface is then called a meteorite.
The smallest and most plentiful particles come from comets, which contain ice and dust. As they orbit the Sun, Comets scatter dust along their paths. When Earth’s orbit takes us through those dust trails, the material enters our atmosphere and vaporizes. The smallest particles generate very little friction with our atmosphere and drift gently down to the surface. Meteors in showers come from comets.
Larger pieces of the sky generally come from asteroids, or minor planets. When asteroids collide, they scatter rock fragments throughout space. Eventually, some land on Earth. A few rare meteorites come from the Moon and Mars.
Meteor Showers Come from Dust in Comets
A comet nucleus is made of ice and dust, and formed in the original nebula where the Sun and planets were born. It is mainly water ice, with some methane and ammonia in the mix. Its dust particles are rocky grains. As a comet nears the Sun, the surface ices evaporate and form a cloud. This material streams out behind the comet, and the dust tail scatters particles along the comet’s path. When Earth moves through the dust trail, we get meteor showers. This model shows a typical comet nucleus with a dark, crusty surface coating that forms after the comet has rounded the Sun a few times.
Compared with a planet like Earth or Jupiter, the nucleus of a comet is very, very small. Most comets are somewhere between the size of a boulder and a city. Compared with the nucleus, a comet’s dust particles are extremely small. They range in size from a grain of sand to a tiny speck no larger than the dust we find on our furniture.
Comet Dust
This is a model of a comet dust grain magnified 20,000 times. It simulates the look of dust from a comet that formed near the orbit of Pluto 4.5 billion years ago. The dust particle after which it is modeled was gathered by special collectors mounted on an aircraft.
The 1833 Leonid Meteor Shower Makes History
“In the year 1833 a most remarkable phenomenon occurred. It was called the falling of the meteors. It happened in the night, and as I was only a small child, I heard my parents describe it the next morning as being the most awful sight that was ever looked upon with mortal eye. They said that the firmament on high was one solid glare of fire and light, and looked as though every star in the sky was falling to the ground, and that they were certain that the Day of Judgment was at hand. There were many wicked men on their knees that night, praying to the Lord, and calling on other to pray for them, that had never been known to bow in prayer before.” – From the diary of Mary Hansard, Age 8
What Happens to a Meteor on Its Way Down to Earth
Every day, countless objects that came from comets and asteroids fall to Earth. What happens when they hit the atmosphere depends on their size and composition. Most of these particles are the size of dust grains. They simply float down to the surface. Slightly larger objects – the size of a grain of sand or a pea – are slowed by the atmosphere. They glow from the heat of friction and eventually burn up. The resulting flash of light is a meteor. Under a dark sky, you can see several meteors an hour on a typical night.
Incoming objects enter the atmosphere at average speeds of 12 miles (20 km) per second. Friction heats them to 8,500 degrees F (4,700 degrees C) and slows them down. Large objects and those made mostly of iron often survive their trip through the atmosphere and fall to the surface as meteorites. The very largest ones continue glowing all the way down, making spectacular fireballs.
When Rocks Fall to Earth
Small pieces of space debris burn up as they pass through the atmosphere. We see their glow as small meteors. Only the largest, most solid pieces of rock and iron survive the fiery descent. We see them as fireballs. These larger pieces are heated and softened by atmospheric friction. The air pressure they encounter may cause them to explode into fragments just before they hit the ground. Those pieces land on the surface or bury themselves in the dirt.
The force of a large impact blasts out a crater like those on the Moon and sends material flying out from the impact site. Some of the Earth’s craters are more than a hundred miles across, though most have been covered or eroded away.
Dust From Space
Look through the microscope to find tiny pieces of actual asteroid or comet dust. They are smaller than grains of sand. Some space dust particles began their journey to Earth when they were blasted from an asteroid during a collision or shed from comets passing near the Sun. The dust drifts through interplanetary space until it encounters Earth’s atmosphere.
The largest space-dust particles fall through our atmosphere and flash as meteors we see in the sky, especially during meteor showers. Most space dust is vaporized before it reaches Earth’s surface. The particles shown here survived the journey, melting into spherical shapes on the way down. They were collected from a water well at the South Pole in Antarctica.
Meteorite Origins
Meteorites have their origins in the larger bodies of the solar system. They come from asteroids and from the surface of the Moon and Mars.
Most meteorites come from asteroids, the remnants of smaller planetary bodies that formed at the same time, and out of the same basic material, as Earth and other rocky planets. Many of these “mini-planets” were shattered by collisions with one another. Asteroids are the debris. Countless numbers of them orbit through interplanetary space. Most are found between Mars and Jupiter in the Asteroid Belt.
Most Asteroid Belt objects are not a threat to us because they tend to stay in their orbits around the Sun. Occasionally, a collision knocks one toward Earth. There are more than a thousand of these Earth-approaching asteroids. If one were to hit our planet, it could cause a catastrophe like the one that killed the dinosaurs. Fortunately, these are rare occurrences. Tiny chips of asteroids, however, fall through our atmosphere all the time.
Asteroids are much smaller than Earth. Vesta is about 320 miles (515 km) in diameter, and it looks much like it did when it first formed. City-size Ida was once part of a larger asteroid smashed in a collision.
Meteorites From Asteroid Vesta
Only a few meteorites can be linked to a specific asteroid. These probably fell to Earth from Vesta, which formed when the solar system was very young. It has the same rocky composition as Earth, and evidence of ancient lava flows on its surface.
Smaller Asteroids
Each of the asteroids modeled here was photographed by passing spacecraft or imaged by radar. The differences between them tell us about their compositions and histories. Most asteroids are fragments of the original mini-planets. Their battered surfaces, strange shapes, and erratic movements are clues to the collisions they have experienced since forming. Mathilde is darker than charcoal and irregularly shaped. Ida is a lumpy, cratered asteroid with its own tiny moon, Dactyl. Oddly shaped Eros was the first asteroid visited by a spacecraft. Asteroid Gaspra was once part of a larger object and is covered with craters. Toutatis has a smooth surface, is a big as a city, and tumbles wildly through space.
Allende Meteorites, 1969
On February 8, 1969, a blue-white fireball streaked across the early-morning sky above rural northern Mexico. A tremendous explosion followed, as a piece of space rock weighing several tons broke into bits. As the fragments fell, the commotion woke up the villagers of Pueblo de Allende. Soon people began gathering samples of what came to be known as the Allende meteorites.
The Village was not far from Houston, so the meteorites were taken to NASA’s Johnson Space Center. The fall occurred just before astronauts brought back the first Moon rocks, and NASA scientist were still testing ways to analyze lunar samples. The Allende samples were a perfect test of their methods, using real space rocks. To everyone’s surprise, the meteorites turned out to be the same kind of rock that formed the inner planets and asteroids. This discovery revealed what the material in the early solar system was like.
Allende and the Early Solar System
Allende meteorites are pieces of very ancient solar system history. They contain some of the rocky material that existed before the planets and asteroids formed. The presence of these materials tells scientists that meteorites like Allende have not melted since the birth of the solar system, more than 4 billion years ago.
Pieces of Solar System History
The polished slice of Allende meteorite in the microscope is a piece of the very early solar system. This specimen contains chondrules. They formed from dust grains that existed in the cloud of gas and dust where the Sun and planets were born. These grains melted and cooled to make the tiny beadlike spheres you see here. Rock fragments and other materials are also mixed with the chondrules. All these ancient building blocks combined in the solar system’s birth cloud to form ever-larger rocky bodies that ultimately became the asteroids and planets.
Pieces From Another World
Meteorite From Mars
This Billion-year-old rock comes from a lava flow on Mars. About 20 million years ago, an object hit the planet and blasted pieces of the hardened lava into space. This one fell to Earth about 600,000 years ago. It was found in the desert of Oman (in the Middle East) in 2000.
Now do we know it came from Mars? Scientists study gases trapped inside meteorites they think are from the Red Planet. They compare their results to data taken by the Viking landers on Mars in 1976. Samples that match are probably from Mars.
Meteorite From the Moon
This chip from the Moon’s surface is a mixture of many kinds of rock and fragments of lava that melted together. A meteorite impact sometime in the distant past knocked it off the Moon’s surface. It fell to Earth and was found in Oman in 2002.
Astronomers analyzed the chemical elements in this meteorite and those in rocks brought back from the Moon by the Apollo astronauts. Since it matches the elements and minerals in the Apollo samples, we know this meteorite also came from the Moon.
Meteorite Histories
Asteroids are made of some of the oldest materials in the solar system. Asteroids formed more than 4.5 billion years ago out of the same rocky materials as the planets. Some of them have not changed since they were born. Others melted, formed layers, or were broken apart by collisions.
Formation
Countless large asteroids were assembled from chunks of material in the early solar system.
Layering
When some of the largest asteroids melted, iron sank to their centers and formed cores. Lighter rock floated up to create mantle and crust layers.
Activity
In a volcanically active asteroid, magma flows up to the surface from molten areas in the mantle. Lava flows have been found on several asteroids.
Undisturbed Asteroid
An asteroid that has not been broken apart in collisions may have a core, mantle, and crust. Its surface may be smooth or cratered.
Fragmenting and Asteroid
Collisions break apart large asteroids, scattering fragments through space. The mineral content of each piece tells us where it formed in the original asteroid.
Types of Meteorites
Stony Meteorites
More than 85 percent of meteorites that fall to Earth are stony. They originate in asteroids with mantles and crusts, and contain minerals similar to those in Earth rocks. Some meteorites formed when rock inside their parent asteroids melted completely. Others came from partially melted rock, while the rest originated in asteroids that never melted. Many meteorites contain chondrules, spheres of minerals that are among the oldest unchanged materials in the solar system.
Stony-Iron Meteorites
Many stony-iron meteorites come from the thin zone of melted rock that lies between an asteroid’s mantle and core. These meteorites contain droplets of the silicate mineral olivine trapped in the iron This translucent, olive-green mineral forms in heated rocks and is common on Earth. Meteorites with olivine in them are rare and beautiful.
Iron Meteorites
These meteorites come from the heavy iron cores of layered asteroids. They are scattered throughout space when collisions smash their parent bodies into pieces.
In asteroids that melted, iron sank to the cores. Over long periods of time, the cores cooled slowly, allowing iron crystals to grow. These crystals show up as cross-hatched patterns when an iron meteorite is sliced, polished, and etched with acid.
Meteorite Impacts
Comets and asteroid fragments have hit all of the solar system’s planets and moons. We find impact craters everywhere.
Impacts shape the surfaces of worlds. Craters are the scars left on planetary landscapes by impacts and collisions. The size of an incoming object affects the size of the “splash” it makes and the amount of ejected material deposited around the site. Craters range from microscopic pits made by very small projectiles to huge holes in the ground made by very large objects.
On Earth, the largest objects to survive a trip through our atmosphere blast out huge impact scars. Early in its history, Earth was bombarded by countless incoming objects. Most of the craters left over from those ancient impacts have eroded away or been covered by lava flows and vegetation. Only a few obvious ones remain. The Moon and other airless worlds are pockmarked with craters of all sizes. Without erosion by wind and rain, those pits remain intact for billions of years and preserve the cratering history of the solar system.
When a large incoming body hits Earth, the result is a crater. The force and pressure of the collision vaporize parts of the meteorite, as well as the ground it hits. The impact blasts out the crater, melts some of the rock, and scatters fragments far and wide.
Meteorite Impacts Can Create Tektites
An impact digs up surface material and blasts it away. During large collisions, a lot of heat gets generated, which melts rock. Some of this “impact melt’ flows away, while some is blasted into the air with the rest of the ejected rocks and meteorite fragments. As the airborne splash of hot liquid rock fall back to Earth’s surface, they cool into interesting shapes. These bits of melted earth are tektites, and each sample contains different materials formed under unique conditions.
Tektites are found in a few scattered locations on Earth. These Bediasites were found in Texas. The Libyan Desert Glass is strewn across the northern African deserts. Moldavites come from the Czech Republic and neighboring countries.
An Impact Killed the Dinosaurs
Sixty-five million years ago, the dinosaurs and other animals died off. This extinction probably happened because a huge asteroid or comet struck the ancient shoreline of the Yucatán Peninsula in Mexico. It hit with the force of a 100-million-megaton bomb and made a crater 60 miles (97 km) across.
The impact raised dust that blocked incoming sunlight. Temperatures fell, and plant photosynthesis was interrupted. The hostile conditions and food shortages helped kill off most of Earth’s animals.
We find evidence for the impact in the K/T Boundary, a layer of clay deposited around the time of the extinction. It is rich with rare elements like iridium that most likely originated in a rock from space.
Canadian Nickel Mine
A huge Canadian nickel mine lies at the site of an ancient asteroid impact. Almost 2 billion years ago, an asteroid crashed into central Canada, near the current town of Sudbury, Ontario. It vaporized a chunk of Earth’s crust and created a crater 12 miles (19 km) deep. The impact tore away rock layers and exposed Earth’s upper mantle. This allowed metal-bearing magma to flow to the surface. Today we find rich deposits of nickel, copper, and platinum on the site. Near the point of impact, rock melted instantly, flowed as liquid, and cooled into black melt glass. The impact also tossed out fragments of bedrock and pieces of the meteorite, which fell back to the area surrounding the crater.
Shattercones: Proof of an Ancient Impact
Shattercones are shock waves preserved in stone. They occur when an impact blasts into layers of bedrock and puts the rock under tremendous pressure. It shatters, creating three-dimensional cone-shaped patterns. The tips of the cones point back to the impact source.
When Meteorites Attack!
In 1954, a fragment of an asteroid crashed through the roof of a house in Sylacauga, Alabama. It bounced off the floor and hit Ann Hodges while she was napping on her couch. She was the first person documented to have been struck by a meteorite. Mrs. Hodges sold autographed pictures of herself holding the meteorite under the damaged ceiling. The rock is now on display in the University of Alabama’s Natural History Museum.
A Meteorite Meets a Carport
Impacts come when you least expect them. In 1973, a carport roof in San Juan Capistrano, California, took a hit from an asteroid fragment. Owner George Stinchcomb reported hearing a sound like a gunshot in the middle of the night. The next morning, he found an egg-size meteorite that crashed through the carport roof onto the concrete floor.
California Meteorites
California has many rich meteorite grounds. The most successful hunters use very thorough search techniques.
Meteorites have been found all over the Golden State, but many collectors go hunting in California’s flat, dry lakebeds. Rocks that land in these desert areas often remain visible and undisturbed indefinitely.
Successful meteorite hunters are methodical in their searches. They lay out grids at each site and then systematically examine every square foot for unusual stones. This is not easy because conditions can be very hot and dry. The hunters photograph meteorites exactly where they find them and record each location with a detailed description. The best teams can find several meteorites a day. The large numbered dots on the map (right) represent the California meteorites in Griffith Observatory’s collection. The small dots document other finds in the state.
The Bruceville Meteorite
In 1998, farmer Ben Howard was plowing on his property near Sacramento when he struck a huge rock. It turned out to be a visitor from space. This 183-pound (83-kg) meteorite is the largest-known stony meteorite found in California. After it was knocked off the surface of an asteroid, it wandered through space, and then fell to Earth thousands of years ago.
Meteorite From Mars
When meteorite hunter Robert Verish discovered two small stones in the Mojave Desert, he knew they were unusual. These rocks were once part of a lava river on Mars some 180 million years ago. A thin slice shows the structure of the stones.
How Meteorite Hunters Know Where to Look for Their Quarry
Although countless meteorites fall to Earth each year, finding them is tricky. They land everywhere, but if they plunge into the ocean or drop into forests and fields, they are probably lost forever. The best meteorite hunting grounds are places where these rocks stand out against the landscape: the deserts of Australia, Africa, and the Middle East; the frozen fields of Antarctica; and the dry lakebeds of California and the American southwest.
Meteorites can be hidden by Antarctic snow for years. Every year, expeditions hunt for them on the ice fields during summer in the southern hemisphere. The meteorites that fall in the world’s deserts and on dry lakebeds can lie undisturbed until hunters like Jason Utas find them. Here, he signals his fourth find of the day at a lakebed site in eastern California.
Peter and Jason Utas found this meteorite near Barstow, California, embedded in dry lakebed mud as you see it here. They dug up the soil sample without removing the meteorite to show how it really looked.
How to Recognize a Meteorite
Not every strange-looking rock is a meteorite. The weight of a rock, the minerals it contains, the way its surface looks, and the place where it was discovered all help meteorite hunters determine whether a lucky find came from space.
Meteorites are not round or porous like these normal rocks.
Stony and iron meteorites can have melted, glassy surfaces. Some have pits and markings.
An iron meteorite will attract a magnet. Some stony meteorites are also slightly magnetic.
| The force and pressure of the collision vaporize parts of the meteorite, as well as the ground it hits. The impact blasts out the crater, melts some of the rock, and scatters fragments far and wide.
Meteorite Impacts Can Create Tektites
An impact digs up surface material and blasts it away. During large collisions, a lot of heat gets generated, which melts rock. Some of this “impact melt” flows away, while some is blasted into the air with the rest of the ejected rocks and meteorite fragments. As the airborne splashes of hot liquid rock fall back to Earth’s surface, they cool into interesting shapes. These bits of melted earth are tektites, and each sample contains different materials formed under unique conditions.
Tektites are found in a few scattered locations on Earth. These Bediasites were found in Texas. The Libyan Desert Glass is strewn across the northern African deserts. Moldavites come from the Czech Republic and neighboring countries.
An Impact Killed the Dinosaurs
Sixty-five million years ago, the dinosaurs and other animals died off. This extinction probably happened because a huge asteroid or comet struck the ancient shoreline of the Yucatán Peninsula in Mexico. It hit with the force of a 100-million-megaton bomb and made a crater 60 miles (97 km) across.
The impact raised dust that blocked incoming sunlight. Temperatures fell, and plant photosynthesis was interrupted. The hostile conditions and food shortages helped kill off most of Earth’s animals.
We find evidence for the impact in the K/T Boundary, a layer of clay deposited around the time of the extinction. It is rich with rare elements like iridium that most likely originated in a rock from space.
Canadian Nickel Mine
A huge Canadian nickel mine lies at the site of an ancient asteroid impact. Almost 2 billion years ago, an asteroid crashed into central Canada, near the current town of Sudbury, Ontario. | yes |
Meteoritics | Are meteorites hot when they hit the Earth? | no_statement | "meteorites" are not "hot" when they "hit" the earth.. when "meteorites" "hit" the earth, they are not "hot". | https://sites.wustl.edu/meteoritesite/items/meteors/ | Meteors | Some Meteorite Information | Washington University in St ... | Meteors
If you found a rock, it might be a meteorite, but it is definitely not a meteor.
If you saw a meteor and then you found a rock, then the rock is not a meteorite.
If you found a rock after seeing a meteor shower, then the rock is not a meteorite.
Meteor over Park City, Utah, August 2016. Source: NASA/Bill Dunford
A meteor is like lightning – You cannot hold it in your hand
A meteor (shooting star, fireball, bolide) is the visible streak of light in the sky from a meteoroid or micrometeoroid passing through the upper atmosphere of the Earth. The meteoroid compresses the air, which causes the exterior of the meteoroid to heat and glow (incandesce). The meteoroid sheds glowing material in its wake, causing the streak in the sky. There is a meteorite there, but you cannot see it. If the rock hits the ground, then it is a meteorite. The rock is not a meteor.
The rock you found is not a meteorite
Meteoroids enter Earth’s atmosphere at speeds typically of 12-40 km/s relative to the Earth. That is equivalent to going from New York to Los Angeles in 1.6 to 5.5 minutes.
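As a quick back-of-the-envelope check (added here for illustration and not part of the original page), the coast-to-coast comparison works out if you assume a New York-to-Los Angeles distance of roughly 3,940 km:

```python
# Illustrative check of the NY-to-LA comparison above; the ~3,940 km distance is an assumption.
NY_LA_KM = 3940

for speed_km_s in (12, 40):  # typical atmospheric-entry speeds quoted in the text
    minutes = NY_LA_KM / speed_km_s / 60
    print(f"{speed_km_s} km/s -> {minutes:.1f} minutes coast to coast")

# Prints about 5.5 minutes at 12 km/s and 1.6 minutes at 40 km/s, matching the figures above.
```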
A portion of the Self-Test Check-List. If you saw a meteor and then you found a rock, then the rock is not a meteorite.
Meteors stop incandescing (the light goes out) tens of kilometers above the Earth’s surface. It takes a few minutes for any surviving fragments to fall to the ground during the “dark flight.” They keep moving in the same general direction, but their fall becomes more vertical and is subject to wind speed and direction. The fragments land at terminal velocity, about 100-200 m/s. Meteorites are not glowing when they hit the ground and they are not hot. Also, meteorites are much smaller than most people think they are, typically a few centimeters in size. This all means that the meteorite fragments land far from where you last saw the meteor and there is no way that observers at a single point on the Earth’s surface are going to find fragments of the meteorite.
Meteor showers
Space.com: “Perseid meteoroids (which is what they’re called while in space) are fast. They enter Earth’s atmosphere (and are then called meteors) at roughly 133,200 mph (60 kilometers per second) relative to the planet. Most are the size of sand grains; a few are as big as peas or marbles. Almost none hit the ground, but if one does, it’s called a meteorite.”
For all meteor showers, the meteors you see are made almost exclusively by sand- to pea-sized objects that ablate away in the atmosphere and never hit the ground. If you see a meteor during an annual meteor shower, then you are not going to find the meteorite. | Meteors
If you found a rock, it might be a meteorite, but it is definitely not a meteor.
If you saw a meteor and then you found a rock, then the rock is not a meteorite.
If you found a rock after seeing a meteor shower, then the rock is not a meteorite.
Meteor over Park City, Utah, August 2016. Source: NASA/Bill Dunford
A meteor is like lightning – You cannot hold it in your hand
A meteor (shooting star, fireball, bolide) is the visible streak of light in the sky from a meteoroid or micrometeoroid passing through the upper atmosphere of the Earth. The meteoroid compresses the air, which causes the exterior of the meteoroid to heat and glow (incandesce). The meteoroid sheds glowing material in its wake, causing the streak in the sky. There is a meteorite there, but you cannot see it. If the rock hits the ground, then it is a meteorite. The rock is not a meteor.
The rock you found is not a meteorite
Meteoroids enter Earth’s atmosphere at speeds typically of 12-40 km/s relative to the Earth. That is equivalent to going from New York to Los Angeles in 1.6 to 5.5 minutes.
A portion of the Self-Test Check-List. If you saw a meteor and then you found a rock, then the rock is not a meteorite.
Meteors stop incandescing (the light goes out) tens of kilometers above the Earth’s surface. It takes a few minutes for any surviving fragments to fall to the ground during the “dark flight.” They keep moving in the same general direction, but their fall becomes more vertical and is subject to wind speed and direction. The fragments land at terminal velocity, about 100-200 m/s. Meteorites are not glowing when they hit the ground and they are not hot. Also, meteorites are much smaller than most people think they are, typically a few centimeters in size. This all means that the meteorite fragments land far from where you last saw the meteor and there is no way that observers at a single point on the Earth’s surface are going to find fragments of the meteorite.
| no |
Demographics | Are millennials the largest generation in the U.S.? | yes_statement | "millennials" are the "largest" "generation" in the u.s.. the "largest" "generation" in the u.s. is made up of "millennials".. the u.s. has the "largest" population of "millennials" compared to other "generations". | https://www.pewresearch.org/short-reads/2020/04/28/millennials-overtake-baby-boomers-as-americas-largest-generation/ | Millennials overtake Baby Boomers as America's largest generation | Millennials have surpassed Baby Boomers as the nation’s largest living adult generation, according to population estimates from the U.S. Census Bureau. As of July 1, 2019 (the latest date for which population estimates are available), Millennials, whom we define as ages 23 to 38 in 2019, numbered 72.1 million, and Boomers (ages 55 to 73) numbered 71.6 million. Generation X (ages 39 to 54) numbered 65.2 million and is projected to pass the Boomers in population by 2028.
The Millennial generation continues to grow as young immigrants expand its ranks. Boomers – whose generation was defined by the boom in U.S. births following World War II – are aging and their numbers shrinking in size as the number of deaths among them exceeds the number of older immigrants arriving in the country.
How we did this
Population figures for 2019 and earlier years are based on Census Bureau population estimates (2019 vintage and available by single year of age). Population sizes for 2020 to 2050 are based on Census Bureau population projections released in 2017 (and also available by single year of age). Live births by year are published by the National Vital Statistics System of the National Center for Health Statistics.
This post was originally published on Jan. 16, 2015, under the title “This year, Millennials will overtake Baby Boomers.” It was updated April 25, 2016, to reflect the changing population, under the headline “Millennials overtake Baby Boomers as America’s largest generation.” This reflected the Center’s definition of Millennials at the time (born between 1981 and 1997).
A third revision published March 1, 2018, reflected the Center’s newly revised definition, under which Millennial births end in 1996. Under that new definition, the Millennial population was smaller than that of Boomers, resulting in the headline “Millennials projected to overtake Baby Boomers as America’s largest generation.”
This latest revision reflects the newly available July 1, 2019, population estimates released in April 2020, as well as new Census Bureau population projections released in 2017. Under these estimates, Millennials have overtaken Boomers under the Center’s revised definition.
Because generations are analytical constructs, it takes time for popular and expert consensus to develop as to the precise boundaries that demarcate one generation from another. In early 2018, Pew Research Center assessed demographic, labor market, attitudinal and behavioral measures to establish an endpoint – albeit inexact – for the Millennial generation. Under this updated definition, the youngest “Millennial” was born in 1996.
Here’s a look at some generational projections.
Millennials
With immigration adding more numbers to this group than any other, the Millennial population is projected to peak in 2033, at 74.9 million. Thereafter, the oldest Millennial will be at least 52 years of age and mortality is projected to outweigh net immigration. By 2050 there will be a projected 72.2 million Millennials.
Generation X
For a few more years, Gen Xers are projected to remain the “middle child” of generations – caught between two larger generations, the Millennials and the Boomers. Gen Xers were born during a period when Americans were having fewer children than in later decades. When Gen Xers were born, births averaged around 3.4 million per year, compared with the 3.9 million annual rate from 1981 to 1996 when the Millennials were born.
Gen Xers are projected to outnumber Boomers in 2028, when there will be 63.9 million Gen Xers and 62.9 million Boomers. The Census Bureau estimates that the Gen X population peaked at 65.6 million in 2015.
Baby Boomers
Baby Boomers have always had an outsize presence compared with other generations. They peaked at 78.8 million in 1999 and remained the largest living adult generation until 2019.
By midcentury, the Boomer population is projected to dwindle to 16.2 million.
Note: This is an update of a post originally published on Jan. 16, 2015. See the “How we did this” box for details.
| Millennials have surpassed Baby Boomers as the nation’s largest living adult generation, according to population estimates from the U.S. Census Bureau. As of July 1, 2019 (the latest date for which population estimates are available), Millennials, whom we define as ages 23 to 38 in 2019, numbered 72.1 million, and Boomers (ages 55 to 73) numbered 71.6 million. Generation X (ages 39 to 54) numbered 65.2 million and is projected to pass the Boomers in population by 2028.
The Millennial generation continues to grow as young immigrants expand its ranks. Boomers – whose generation was defined by the boom in U.S. births following World War II – are aging and their numbers shrinking in size as the number of deaths among them exceeds the number of older immigrants arriving in the country.
How we did this
Population figures for 2019 and earlier years are based on Census Bureau population estimates (2019 vintage and available by single year of age). Population sizes for 2020 to 2050 are based on Census Bureau population projections released in 2017 (and also available by single year of age). Live births by year are published by the National Vital Statistics System of the National Center for Health Statistics.
This post was originally published on Jan. 16, 2015, under the title “This year, Millennials will overtake Baby Boomers.” It was updated April 25, 2016, to reflect the changing population, under the headline “Millennials overtake Baby Boomers as America’s largest generation.” This reflected the Center’s definition of Millennials at the time (born between 1981 and 1997).
A third revision published March 1, 2018, reflected the Center’s newly revised definition, under which Millennial births end in 1996. | yes |
Demographics | Are millennials the largest generation in the U.S.? | yes_statement | "millennials" are the "largest" "generation" in the u.s.. the "largest" "generation" in the u.s. is made up of "millennials".. the u.s. has the "largest" population of "millennials" compared to other "generations". | https://www.pewresearch.org/short-reads/2018/04/11/millennials-largest-generation-us-labor-force/ | Millennials are largest generation in the U.S. labor force | Pew ... | More than one-in-three American labor force participants (35%) are Millennials, making them the largest generation in the U.S. labor force, according to a Pew Research Center analysis of U.S. Census Bureau data.
As of 2017 – the most recent year for which data are available – 56 million Millennials (those ages 21 to 36 in 2017) were working or looking for work. That was more than the 53 million Generation Xers, who accounted for a third of the labor force. And it was well ahead of the 41 million Baby Boomers, who represented a quarter of the total. Millennials surpassed Gen Xers in 2016.
Meanwhile, the oldest members of the post-Millennial generation (those born after 1996) are now of working age. Last year, 9 million post-Millennials (those who have reached working age, 16 to 20) were employed or looking for work, comprising 5% of the labor force.
These labor force estimates are based on the Current Population Survey, which is designed by the U.S. Bureau of Labor Statistics and serves as the basis for its unemployment and labor force statistics.
In 2017 the Generation X labor force was down from its peak of 54 million in 2008. The decline reflects a drop in the overall number of Gen X adults (Census Bureau population estimates indicate that their population peaked in 2015). In addition, last year only 82% of Gen Xers were working or looking for work, which is lower than their share in the labor force in 2008 (84%).
Though still sizable, the Baby Boom generation’s sway in the workforce is waning. In the early and mid-1980s, Boomers made up a majority of the nation’s labor force. The youngest Boomer was 53 years old in 2017, while the oldest Boomers were older than 70. With more Boomers retiring every year and not much immigration to affect their numbers, the size of the Boomer workforce will continue to shrink.
While the Millennial labor force is still growing, partly due to immigration, it is unlikely that the Millennial labor force will reach the peak size of the Boomer labor force (66 million in 1997). The Census Bureau projects that the Millennial population will peak at 75 million. At that number, a high rate of labor force participation would be needed to reach a labor force of 66 million.
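To make the arithmetic behind that last sentence explicit (a sketch added for illustration, not part of the original post), matching the Boomer peak would require a participation rate far above typical levels:

```python
# Rough calculation using the two figures cited above; illustrative only.
projected_millennial_peak = 75_000_000  # Census Bureau projection cited in the text
boomer_peak_labor_force = 66_000_000    # Boomer labor force peak in 1997, per the text

required_participation = boomer_peak_labor_force / projected_millennial_peak
print(f"Participation rate needed: {required_participation:.0%}")  # about 88%
```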
Note: This post was originally published on May 11, 2015, under the headline “Millennials surpass Gen Xers as the largest generation in U.S. labor force,” which reflected the Center’s definition of Millennials at the time (born between 1981 and 1997). This updated version reflects the Center’s newly revised definition, under which Millennial births end in 1996, and the incorporation of more recent information.
| More than one-in-three American labor force participants (35%) are Millennials, making them the largest generation in the U.S. labor force, according to a Pew Research Center analysis of U.S. Census Bureau data.
As of 2017 – the most recent year for which data are available – 56 million Millennials (those ages 21 to 36 in 2017) were working or looking for work. That was more than the 53 million Generation Xers, who accounted for a third of the labor force. And it was well ahead of the 41 million Baby Boomers, who represented a quarter of the total. Millennials surpassed Gen Xers in 2016.
Meanwhile, the oldest members of the post-Millennial generation (those born after 1996) are now of working age. Last year, 9 million post-Millennials (those who have reached working age, 16 to 20) were employed or looking for work, comprising 5% of the labor force.
These labor force estimates are based on the Current Population Survey, which is designed by the U.S. Bureau of Labor Statistics and serves as the basis for its unemployment and labor force statistics.
In 2017 the Generation X labor force was down from its peak of 54 million in 2008. The decline reflects a drop in the overall number of Gen X adults (Census Bureau population estimates indicate that their population peaked in 2015). In addition, last year only 82% of Gen Xers were working or looking for work, which is lower than their share in the labor force in 2008 (84%).
Though still sizable, the Baby Boom generation’s sway in the workforce is waning. In the early and mid-1980s, Boomers made up a majority of the nation’s labor force. The youngest Boomer was 53 years old in 2017, while the oldest Boomers were older than 70. | yes |
Demographics | Are millennials the largest generation in the U.S.? | yes_statement | "millennials" are the "largest" "generation" in the u.s.. the "largest" "generation" in the u.s. is made up of "millennials".. the u.s. has the "largest" population of "millennials" compared to other "generations". | https://www.cnn.com/2023/05/19/politics/millennials-genxers-baby-boomers-congress-representation-dg/index.html | Millennials are America's largest generation. But they're one of the ... | Millennials are America’s largest generation. But they’re one of the smallest groups that make up Congress.
By Christopher Hickey, Alex Matthews and Amy O'Kruk, CNN
Updated
11:30 AM EDT, Fri May 19, 2023
A younger Congress may be a thing of the past.
It’s taking millennials and Generation X five to 10 years longer than the three generations before them to reach the same level of representation in Congress, according to a CNN analysis of recent data from Congress, CQ and ProPublica.
In the decade after turning 25 — the age requirement to hold office in the House — baby boomers (ages 59-77) were able to retain 18 seats, while millennials (27-42) only won four during the same time period. Ten years after meeting the age requirement for the Senate, age 30, baby boomers landed four seats, with millennials unable to get a single one. Today, millennials hold 52 seats in the House and three in the Senate.
Experts attribute the widening gap to an aging population and seats that are growing less competitive, while the cost to win them keeps rising.
“I think people are becoming more and more frustrated with the fact that our Congress does not accurately reflect the population of America at this point,” said Erin Covey, an analyst at Inside Elections, a nonpartisan political newsletter.
Millennials and Generation Z (11-26) are winning their very first House seats faster than the previous four generations, CNN’s analysis shows. It’s still too early to determine what this will mean down the line for Gen Z, but for millennials, they are winning seats at an older age than their predecessors in both chambers over time.
Rep. Maxwell Frost (D-Florida) was the first member of a new generation – Gen Z – to be elected in the first eligible year in at least 100 years. It took three years for millennials, five years for baby boomers and seven years for the Silent Generation to join the House.
In the Senate, however, it’s taken more than 10 years for both Generation X and millennials to join Congress. The first member of the Silent Generation in the Senate, Ted Kennedy (D-Massachusetts), joined only four years after his generation was first eligible. It took baby boomers five years to arrive in the Senate.
CNN’s analysis shows the average age of Congress has crept up over time, which is helping prevent younger generations from making more inroads than in the past, experts told CNN.
“It’s simply more possible to have more people in their 60s, 70s and 80s hanging out in Congress [now] than would have been the case 30, 40, 50 years ago, when lifespans were shorter,” Curry said.
At an average 58.7 years old, the current Congress is the second-oldest in history, according to CNN’s analysis. That’s only a hair younger than the last Congress at 58.9 years old.
And once elected, members of Congress are staying longer than in previous decades. It’s not clear whether the same number of candidates are running or fewer young people are throwing their hats in the ring.
The average representative or senator in the current Congress had been serving for 8.5 years and 11.2 years, respectively, as of the beginning of this term, according to a recent Congressional Research Service report. That’s at least a year longer than during the early 1980s, when the average incoming member of the 97th Congress had been serving for roughly 7.5 years.
“Most seats are relatively uncompetitive,” Curry said. “And unless you get primaried, which relatively few members do, you can hang on to that seat for as long as you want, which means you could be in there to quite an advanced age without really having to worry about reelection.”
The number of members of Congress under the age of 50 over the past couple of decades has shrunk considerably, while the number over the age of 70 has ballooned. In the Senate, there’s never been a larger number of senators over 70 and smaller number under 50.
A more elderly Congress also affects legislative priorities. Older representatives are more likely to advance policies on issues that affect elderly Americans, such as nursing homes and elder abuse, according to a 2018 study by Curry and then-Ph.D. student Matthew R. Haydon, now with Texas A&M University. And consequently, Curry said, one could say less attention is paid toward issues that might be more important to voters under the age of 50, such as affording a house or student loan debt.
“If you’re younger, and you’re not well off, and the wages aren’t high enough and you can’t afford a home and you have student loans… all those things hit you more when you’re 25-30 years old, in a way that a 25 or 30 year-old member of Congress could relate to more directly because they’ve probably felt that pinch as well,” he said.
There have been recent signs of increased engagement with younger generations, though. More organizations have formed over the past few years to support younger voters and candidates, Covey said, and a larger share of younger voters have cast ballots over the past 10 years.
Frost was one of two major party Gen-Z House candidates last year in competitive or open seats who each explicitly embraced their generational identity in their campaigns, Covey said.
An analysis by Tufts University shows that 23% of eligible young voters (18-29 years old, which would include Zoomers and the youngest millennials) cast a ballot in the 2022 midterm election. That’s 10 percentage points higher than youth voter turnout in 2014, although still down from 28% in 2018, a historic high.
Although Congress is grayer than ever, a generational shift has started in House leadership.
Nancy Pelosi, then 82, handed Democratic leadership over to 52-year-old Hakeem Jeffries in January. The new Speaker of the House, Kevin McCarthy, was born in 1965, which makes him part of Generation X, along with Jeffries.
“I think that kind of shows how much of an appetite there is for generational change,” Covey said. “And for folks to pass the torch on to younger generations.” | Millennials are America’s largest generation. But they’re one of the smallest groups that make up Congress.
By Christopher Hickey, Alex Matthews and Amy O'Kruk, CNN
Updated
11:30 AM EDT, Fri May 19, 2023
A younger Congress may be a thing of the past.
It’s taking millennials and Generation X five to 10 years longer than the three generations before them to reach the same level of representation in Congress, according to a CNN analysis of recent data from Congress, CQ and ProPublica.
In the decade after turning 25 — the age requirement to hold office in the House — baby boomers (ages 59-77) were able to retain 18 seats, while millennials (27-42) only won four during the same time period. Ten years after meeting the age requirement for the Senate, age 30, baby boomers landed four seats, with millennials unable to get a single one. Today, millennials hold 52 seats in the House and three in the Senate.
Experts attribute the widening gap to an aging population and seats that are growing less competitive, while the cost to win them keeps rising.
“I think people are becoming more and more frustrated with the fact that our Congress does not accurately reflect the population of America at this point,” said Erin Covey, an analyst at Inside Elections, a nonpartisan political newsletter.
Millennials and Generation Z (11-26) are winning their very first House seats faster than the previous four generations, CNN’s analysis shows. It’s still too early to determine what this will mean down the line for Gen Z, but for millennials, they are winning seats at an older age than their predecessors in both chambers over time.
Rep. Maxwell Frost (D-Florida) was the first member of a new generation – Gen Z – to be elected in the first eligible year in at least 100 years. It took three years for millennials, | yes |
Demographics | Are millennials the largest generation in the U.S.? | yes_statement | "millennials" are the "largest" "generation" in the u.s.. the "largest" "generation" in the u.s. is made up of "millennials".. the u.s. has the "largest" population of "millennials" compared to other "generations". | https://www.pewresearch.org/short-reads/2018/04/03/millennials-approach-baby-boomers-as-largest-generation-in-u-s-electorate/ | Millennials approach Baby Boomers as America's largest generation ... | Millennials, who are projected to surpass Baby Boomers next year as the United States’ largest living adult generation, are also approaching the Boomers in their share of the American electorate.
As of November 2016, an estimated 62 million Millennials (adults ages 20 to 35 in 2016) were voting-age U.S. citizens, surpassing the 57 million Generation X members (ages 36 to 51) in the nation’s electorate and moving closer in number to the 70 million Baby Boomers (ages 52 to 70), according to a new Pew Research Center analysis of U.S. Census Bureau data. Millennials comprised 27% of the voting-eligible population in 2016, while Boomers made up 31%.
In 2016, Generation X and members of the Silent and Greatest generations (ages 71 and older) comprised 25% and 13% of the electorate, respectively. In addition, the oldest members of the post-Millennial generation (those born after 1996) began to make their presence known for the first time – 7 million of these 18- and 19-year-olds were eligible to vote in 2016 (comprising just 3% of the electorate).
The Baby Boomer voting-eligible population peaked in size at 73 million in 2004. Since the Boomer electorate is declining in size and the Millennial electorate will continue to grow, mainly through immigration and naturalization, it is only a matter of time before Millennials are the largest generation in the electorate.
While the growth in the number of Millennials who are eligible to vote underscores the potential electoral clout of today’s young adults, Millennials remain far from the largest generational bloc of actual voters. It is one thing to be eligible to vote and another thing to actually cast a ballot.
Measuring voter turnout is not an exact science. The Census Bureau’s November voting supplements are a standard data source for illuminating the demographics of voting. Census estimates of voter turnout are based on respondent self-reports of whether they voted in the recent election.
Based on these estimates, Millennials have punched below their electoral weight in recent presidential elections. (For a host of reasons, young adults are less likely to vote than their older counterparts.)
Given the historical context of relatively low voter turnout among young adults, Millennials seemed ascendant in the 2008 election when 50% of eligible Millennials voted. By comparison, 61% of the Generation X electorate reported voting that year, as did even higher percentages of Boomer and Silent Generation eligible voters. In 2008 Millennials comprised 18% of the electorate, but as a result of their relatively low turnout (compared with older generations) they made up only 14% of Americans who said they voted.
Millennial turnout was less impressive in 2012, when 46% of eligible Millennials said they had voted. Since the oldest Millennials were age 31 in 2012 (as opposed to 27 in 2008), the expectation might have been that turnout would have edged higher. After all, an older, more mature, more “settled” age group presumably should turn out at higher rates. This underscores that young adult turnout depends on factors besides demographics: the candidates, the success of voter mobilization efforts, satisfaction with the economy and the direction of the country.
Turnout among Millennials was higher in 2016 – 51%. But again, that’s significantly lower than the 61% of the electorate who voted. In order for their voting clout to match their share of the electorate, roughly 61% of Millennials would have to have turned out to vote in 2016.
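To spell out the arithmetic behind that estimate (an illustrative sketch, not part of the original analysis), a group's share of actual voters equals its share of the electorate scaled by the ratio of its turnout to overall turnout:

```python
# Figures taken from the text: Millennials were 27% of the 2016 electorate,
# 51% of them reported voting, and 61% of the overall electorate voted.
electorate_share = 0.27
millennial_turnout = 0.51
overall_turnout = 0.61

share_of_voters = electorate_share * millennial_turnout / overall_turnout
print(f"Millennial share of 2016 voters: {share_of_voters:.1%}")  # about 22.6%

# Their share of voters matches their 27% share of the electorate only when
# their turnout equals the overall rate, roughly 61%.
```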
While it may be a slam-dunk that Millennials will soon be the largest generation in the electorate, it will likely be a much longer time before they are the largest bloc of voters.
Note: This post was originally published on May 16, 2016, under the headline “Millennials match Baby Boomers as largest generation in U.S. electorate, but will they vote?” which reflected the Center’s definition of Millennials at the time (born between 1981 and 1998). This updated version reflects the Center’s newly revised definition, under which Millennial births end in 1996.
| Millennials, who are projected to surpass Baby Boomers next year as the United States’ largest living adult generation, are also approaching the Boomers in their share of the American electorate.
As of November 2016, an estimated 62 million Millennials (adults ages 20 to 35 in 2016) were voting-age U.S. citizens, surpassing the 57 million Generation X members (ages 36 to 51) in the nation’s electorate and moving closer in number to the 70 million Baby Boomers (ages 52 to 70), according to a new Pew Research Center analysis of U.S. Census Bureau data. Millennials comprised 27% of the voting-eligible population in 2016, while Boomers made up 31%.
In 2016, Generation X and members of the Silent and Greatest generations (ages 71 and older) comprised 25% and 13% of the electorate, respectively. In addition, the oldest members of the post-Millennial generation (those born after 1996) began to make their presence known for the first time – 7 million of these 18- and 19-year-olds were eligible to vote in 2016 (comprising just 3% of the electorate).
The Baby Boomer voting-eligible population peaked in size at 73 million in 2004. Since the Boomer electorate is declining in size and the Millennial electorate will continue to grow, mainly through immigration and naturalization, it is only a matter of time before Millennials are the largest generation in the electorate.
While the growth in the number of Millennials who are eligible to vote underscores the potential electoral clout of today’s young adults, Millennials remain far from the largest generational bloc of actual voters. It is one thing to be eligible to vote and another thing to actually cast a ballot.
Measuring voter turnout is not an exact science. | no |
Demographics | Are millennials the largest generation in the U.S.? | yes_statement | "millennials" are the "largest" "generation" in the u.s.. the "largest" "generation" in the u.s. is made up of "millennials".. the u.s. has the "largest" population of "millennials" compared to other "generations". | https://www.pewresearch.org/social-trends/2019/02/14/millennial-life-how-young-adulthood-today-compares-with-prior-generations-2/ | How Millennials compare with prior generations | Pew Research ... | Millennial life: How young adulthood today compares with prior generations
Over the past 50 years – from the Silent Generation’s young adulthood to that of Millennials today – the United States has undergone large cultural and societal shifts. Now that the youngest Millennials are adults, how do they compare with those who were their age in the generations that came before them?
In general, they’re better educated – a factor tied to employment and financial well-being – but there is a sharp divide between the economic fortunes of those who have a college education and those who don’t.
Millennials have brought more racial and ethnic diversity to American society. And Millennial women, like Generation X women, are more likely to participate in the nation’s workforce than prior generations.
Compared with previous generations, Millennials – those ages 22 to 37 in 2018 – are delaying or foregoing marriage and have been somewhat slower in forming their own households. They are also more likely to be living at home with their parents, and for longer stretches.
And Millennials are now the second-largest generation in the U.S. electorate (after Baby Boomers), a fact that continues to shape the country’s politics given their Democratic leanings when compared with older generations.
Those are some of the broad strokes that have emerged from Pew Research Center’s work on Millennials over the past few years. Now that the youngest Millennials are in their 20s, we have done a comprehensive update of our prior demographic work on generations. Here are the details.
Education
Today’s young adults are much better educated than their grandparents, as the share of young adults with a bachelor’s degree or higher has steadily climbed since 1968. Among Millennials, around four-in-ten (39%) of those ages 25 to 37 have a bachelor’s degree or higher, compared with just 15% of the Silent Generation, roughly a quarter of Baby Boomers and about three-in-ten Gen Xers (29%) when they were the same age.
Gains in educational attainment have been especially steep for young women. Among women of the Silent Generation, only 11% had obtained at least a bachelor’s degree when they were young (ages 25 to 37 in 1968). Millennial women are about four times (43%) as likely as their Silent predecessors to have completed as much education at the same age. Millennial men are also better educated than their predecessors. About one-third of Millennial men (36%) have at least a bachelor’s degree, nearly double the share of Silent Generation men (19%) when they were ages 25 to 37.
While educational attainment has steadily increased for men and women over the past five decades, the share of Millennial women with a bachelor’s degree is now higher than that of men – a reversal from the Silent Generation and Boomers. Gen X women were the first to outpace men in terms of education, with a 3-percentage-point advantage over Gen X men in 2001. Before that, late Boomer men in 1989 had a 2-point advantage over Boomer women.
Employment
Boomer women surged into the workforce as young adults, setting the stage for more Gen X and Millennial women to follow suit. In 1966, when Silent Generation women were ages 22 through 37, a majority (58%) were not participating in the labor force while 40% were employed. For Millennial women today, 72% are employed while just a quarter are not in the labor force. Boomer women were the turning point. As early as 1985, more young Boomer women were employed (66%) than were not in the labor force (28%).
And despite a reputation for job hopping, Millennial workers are just as likely to stick with their employers as Gen X workers were when they were the same age. Roughly seven-in-ten each of Millennials ages 22 to 37 in 2018 (70%) and Gen Xers the same age in 2002 (69%) reported working for their current employer at least 13 months. About three-in-ten of both groups said they’d been with their employer for at least five years.
Of course, the economy varied for each generation. While the Great Recession affected Americans broadly, it created a particularly challenging job market for Millennials entering the workforce. The unemployment rate was especially high for America’s youngest adults in the years just after the recession, a reality that would impact Millennials’ future earnings and wealth.
Income and wealth
The financial well-being of Millennials is complicated. The individual earnings for young workers have remained mostly flat over the past 50 years. But this belies a notably large gap in earnings between Millennials who have a college education and those who don’t. Similarly, the household income trends for young adults markedly diverge by education. As far as household wealth, Millennials appear to have accumulated slightly less than older generations had at the same age.
Millennials with a bachelor’s degree or more and a full-time job had median annual earnings valued at $56,000 in 2018, roughly equal to those of college-educated Generation X workers in 2001. But for Millennials with some college or less, annual earnings were lower than their counterparts in prior generations. For example, Millennial workers with some college education reported making $36,000, lower than the $38,900 early Baby Boomer workers made at the same age in 1982. The pattern is similar for those young adults who never attended college.
Millennials in 2018 had a median household income of roughly $71,400, similar to that of Gen X young adults ($70,700) in 2001. (This analysis is in 2017 dollars and is adjusted for household size. Additionally, household income includes the earnings of the young adult, as well as the income of anyone else living in the household.)
The growing gap by education is even more apparent when looking at annual household income. For households headed by Millennials ages 25 to 37 in 2018, the median adjusted household income was about $105,300 for those with a bachelor’s degree or higher, roughly $56,000 greater than that of households headed by high school graduates. The median household income difference by education for prior generations ranged from $41,200 for late Boomers to $19,700 for the Silent Generation when they were young.
While young adults in general do not have much accumulated wealth, Millennials have slightly less wealth than Boomers did at the same age. The median net worth of households headed by Millennials (ages 20 to 35 in 2016) was about $12,500 in 2016, compared with $20,700 for households headed by Boomers the same age in 1983. Median net worth of Gen X households at the same age was about $15,100.
This modest difference in wealth can be partly attributed to differences in debt by generation. Compared with earlier generations, more Millennials have outstanding student debt, and the amount of it they owe tends to be greater. The share of young adult households with any student debt doubled from 1998 (when Gen Xers were ages 20 to 35) to 2016 (when Millennials were that age). In addition, the median amount of debt was nearly 50% greater for Millennials with outstanding student debt ($19,000) than for Gen X debt holders when they were young ($12,800).
Housing
Millennials, hit hard by the Great Recession, have been somewhat slower in forming their own households than previous generations. They’re more likely to live in their parents’ home and also more likely to be at home for longer stretches. In 2018, 15% of Millennials (ages 25 to 37) were living in their parents’ home. This is nearly double the share of early Boomers and Silents (8% each) and 6 percentage points higher than Gen Xers who did so when they were the same age.
The rise in young adults living at home is especially prominent among those with lower education. Millennials who never attended college were twice as likely as those with a bachelor’s degree or more to live with their parents (20% vs. 10%). This gap was narrower or nonexistent in previous generations. Roughly equal shares of Silents (about 7% each) lived in their parents’ home when they were ages 25 to 37, regardless of educational attainment.
Millennials are also moving significantly less than earlier generations of young adults. About one-in-six Millennials ages 25 to 37 (16%) have moved in the past year. For previous generations at the same age, roughly a quarter had.
Family
On the whole, Millennials are starting families later than their counterparts in prior generations. Just under half (46%) of Millennials ages 25 to 37 are married, a steep drop from the 83% of Silents who were married in 1968. The share of 25- to 37-year-olds who were married steadily dropped for each succeeding generation, from 67% of early Boomers to 57% of Gen Xers. This in part reflects broader societal shifts toward marrying later in life. In 1968, the typical American woman first married at age 21 and the typical American man first wed at 23. Today, those figures have climbed to 28 for women and 30 for men.
But it’s not all about delayed marriage. The share of adults who have never married is increasing with each successive generation. If current patterns continue, an estimated one-in-four of today’s young adults will have never married by the time they reach their mid-40s to early 50s – a record high share.
In prior generations, those ages 25 to 37 whose highest level of education was a high school diploma were more likely than those with a bachelor’s degree or higher to be married. Gen Xers reversed this trend, and the divide widened among Millennials. Four-in-ten Millennials with just a high school diploma (40%) are currently married, compared with 53% of Millennials with at least a bachelor’s degree. In comparison, 86% of Silent Generation high school graduates were married in 1968 versus 81% of Silents with a bachelor’s degree or more.
Millennial women are also waiting longer to become parents than prior generations did. In 2016, 48% of Millennial women (ages 20 to 35 at the time) were moms. When Generation X women were the same age in 2000, 57% were already mothers, similar to the share of Boomer women (58%) in 1984. Still, Millennial women now account for the vast majority of annual U.S. births, and more than 17 million Millennial women have become mothers.
Voting
Younger generations (Generation X, Millennials and Generation Z) now make up a clear majority of America’s voting-eligible population. As of November 2018, nearly six-in-ten adults eligible to vote (59%) were from one of these three generations, with Boomers and older generations making up the other 41%.
However, young adults have historically been less likely to vote than their older counterparts, and these younger generations have followed that same pattern, turning out to vote at lower rates than older generations in recent elections.
In the 2016 election, Millennials and Gen Xers cast more votes than Boomers and older generations, giving the younger generations a slight majority of total votes cast. However, higher shares of Silent/Greatest generation eligible voters (70%) and Boomers (69%) reported voting in the 2016 election compared with Gen X (63%) and Millennial (51%) eligible voters. Going forward, Millennial turnout may increase as this generation grows older.
Generational differences in political attitudes and partisan affiliation are as wide as they have been in decades. Among registered voters, 59% of Millennials affiliate with the Democratic Party or lean Democratic, compared with about half of Boomers and Gen Xers (48% each) and 43% of the Silent Generation. With this divide comes generational differences on specific issue areas, from views of racial discrimination and immigration to foreign policy and the scope of government.
Population change and the future
By 2019, Millennials are projected to number 73 million, overtaking Baby Boomers as the largest living adult generation. Although a greater number of births underlie the Baby Boom generation, Millennials will outnumber Boomers in part because immigration has been boosting their numbers.
Looking ahead at the next generation, early benchmarks show Generation Z (those ages 6 to 21 in 2018) is on track to be the nation’s most diverse and best-educated generation yet. Nearly half (48%) are racial or ethnic minorities. And while most are still in K-12 schools, the oldest Gen Zers are enrolling in college at a higher rate than even Millennials were at their age. Early indications are that their opinions on issues are similar to those of Millennials.
Of course, Gen Z is still very young and may be shaped by future unknown events. But Pew Research Center looks forward to spending the next few years studying life for this new generation as it enters adulthood.
| And Millennials are now the second-largest generation in the U.S. electorate (after Baby Boomers), a fact that continues to shape the country’s politics given their Democratic leanings when compared with older generations.
Those are some of the broad strokes that have emerged from Pew Research Center’s work on Millennials over the past few years. Now that the youngest Millennials are in their 20s, we have done a comprehensive update of our prior demographic work on generations. Here are the details.
Education
Today’s young adults are much better educated than their grandparents, as the share of young adults with a bachelor’s degree or higher has steadily climbed since 1968. Among Millennials, around four-in-ten (39%) of those ages 25 to 37 have a bachelor’s degree or higher, compared with just 15% of the Silent Generation, roughly a quarter of Baby Boomers and about three-in-ten Gen Xers (29%) when they were the same age.
Gains in educational attainment have been especially steep for young women. Among women of the Silent Generation, only 11% had obtained at least a bachelor’s degree when they were young (ages 25 to 37 in 1968). Millennial women are about four times (43%) as likely as their Silent predecessors to have completed as much education at the same age. Millennial men are also better educated than their predecessors. About one-third of Millennial men (36%) have at least a bachelor’s degree, nearly double the share of Silent Generation men (19%) when they were ages 25 to 37.
While educational attainment has steadily increased for men and women over the past five decades, the share of Millennial women with a bachelor’s degree is now higher than that of men – a reversal from the Silent Generation and Boomers. | no |
Demographics | Are millennials the largest generation in the U.S.? | yes_statement | "millennials" are the "largest" "generation" in the u.s.. the "largest" "generation" in the u.s. is made up of "millennials".. the u.s. has the "largest" population of "millennials" compared to other "generations". | https://www.insiderintelligence.com/insights/generation-z-facts/ | Generation Z: Latest Gen Z News, Research, Facts 2023 | Industries Overview
Generation Z (aka Gen Z, iGen, or centennials) refers to the generation born between 1997 and 2012, following millennials. This generation has been raised on the internet and social media, with some of the oldest finishing college by 2020 and entering the workforce.
Insider Intelligence has been tracking Gen Z’s characteristics, traits, values, and trends to develop in-depth statistics, facts, and marketing strategies targeting what will soon become the largest cohort of consumers.
Gen Z Terms and Definitions
What is Generation Z (Gen Z)?
Generation Z is the youngest, most ethnically-diverse, and largest generation in American history, comprising 27% of the US population. Pew Research recently defined Gen Z as anyone born 1997 onwards. Gen Z grew up with technology, the internet, and social media, which sometimes causes them to be stereotyped as tech-addicted, anti-social, or “social justice warriors.”
What are Millennials (Gen Y)?
Millennials, also known as Generation Y, include anyone born between 1981 and 1996 (ages 26 to 41 in 2022) and represent about a quarter of the US population. Much of this cohort entered the workforce at the height of the Great Recession and has struggled with the subsequent widening of the generational wealth gap.
Millennials have led older generations in technology adoption and embracing digital solutions. Their financial status and tech-savviness have fundamentally changed how they live and work—earning them stereotypes that they job hop and have killed a number of industries. Prior to Gen Z, millennials were the largest and most racially and ethnically diverse generation.
What is Generation X (Gen X)?
Generation X, also known as Gen X, the latchkey generation or, jokingly, the forgotten or middle child generation, consists of people born between 1965 and 1980 (ages 42-57 in 2022). Currently, Gen X comprises 20.6% of the US population, making them smaller than any other age demographic.
This cohort grew up with higher divorce rates and more two-income households, resulting in a general lack of an adult presence in their childhoods and teenage years. As such, Gen X is generally viewed as peer-oriented and entrepreneurial in spirit.
What is Generation Alpha?
Gen Alpha, which includes children born after 2010, is already set to be the most transformative generation yet. Alphas haven’t just grown up with technology—they’ve been completely immersed in it since birth. Early in their formative years, these children are comfortable speaking to voice assistants and swiping on smartphones. They don’t consider technologies to be tools used to help achieve tasks, but rather as deeply integrated parts of everyday life.
The number of US Gen Z digital buyers will surpass 41 million in 2022 (Insider Intelligence).
FAQs About Gen Z
What are the Generation Z birth years & age range?
Generation Z is broadly defined as the 72 million people born between 1997 and 2012.
Generation Z vs. Millennials (Gen Y)
Gen Z most closely mirrors millennials on key social and political issues, but without much of the optimism: more US Gen Zers than any other generation (68%) feel the US is headed in the wrong direction, and fewer Gen Zers than any other generation (32%) feel the country is headed in the right direction.
Is Generation Z conservative?
Generation Z considers itself more accepting and open-minded than any generation before it. Almost half of Gen Zs are minorities, compared to 22% of Baby Boomers, and the majority of Gen Z supports social movements and causes such as Black Lives Matter, transgender rights, and action on climate change.
What are common names for Generation Z?
Generation Z, or Gen Z, is also sometimes referred to as iGen, or Centennials.
What is after Generation Z?
The generation that follows Gen Z is Generation Alpha, which includes anyone born after 2010. Gen Alpha is still very young, but is on track to be the most transformative age group ever.
What are the common Generation Z characteristics?
The average Gen Zer got their first smartphone just before their 12th birthday. They communicate primarily through social media and texts, and spend as much time on their phones as older generations do watching television.
The majority of Gen Zs prefer streaming services to traditional cable, as well as snackable content they can get on their phones and computers.
More to Learn
Generation Z will soon become the most pivotal generation to the future of retail, and many will have huge spending power by 2026. To capture a piece of this growing cohort, retailers and brands need to start establishing relationships with Gen Zers now.
But Gen Zers are different from older generations, because they are the first consumers to have grown up wholly in the digital era. They’re tech-savvy and mobile-first—and they have high standards for how they spend their time online.
After ignoring the digital revolution and millennial buyers for too long, retailers and brands have spent the last decade trying to catch up to millennials’ interests and habits—so it’s critical for them to get ahead of Gen Z’s tendency to be online at all times, and make sure to meet this generation’s digital expectations.
The millennial generation: A demographic bridge to America’s ...
The millennial generation, over 75 million strong, is America’s largest—eclipsing the current size of the postwar baby boom generation. Millennials make up nearly a quarter of the total U.S. population, 30 percent of the voting age population, and almost two-fifths of the working age population.
Most notably, the millennial generation, now 44 percent minority, is the most diverse adult generation in American history. While its lasting legacy is yet to be determined, this generation is set to serve as a social, economic, and political bridge to chronologically successive (and increasingly) racially diverse generations.
With an emphasis on its unique racial diversity, this report examines the demographic makeup of millennials for the nation, the 100 largest metropolitan areas, and all 50 states.
A bridge spanning the cultural generation gap
Despite today’s divisive generational politics, millennials are poised to become a demographic bridge between the largely white older generations (pre-millennials) and much more racially diverse younger generations (post-millennials). As they progress into middle age, millennials will continue to pave the way for the generations behind them as workers, consumers, and leaders in business and government in their acceptance by and participation in tomorrow’s more racially diverse America.
As the cultural generation gap graphic shows, while both the post-millennial and pre-millennial populations were majority white in 2015 (51.5 percent and 68.4 percent, respectively), both population groups are projected to substantially decrease their shares of white population by 2035, to 46 percent and 64.8 percent, respectively. Yet, even in 2035, the millennial generation will represent a bridge to the more racially diverse young adult population.
Millennials are by far the most diverse generation when compared to older generations. Most white baby boomers and their elders were born in an era when immigration was at a historic low point and when the immigrants who did arrive in America were mostly white Europeans. Then, the nation’s much smaller minority population was composed mostly of black Americans, residing in highly segregated cities. The large waves of immigration to the U.S. in the 1980s and 1990s, especially from Latin America and Asia, coupled with the aging of the white population, made millennials a more racially and ethnically diverse generation than any that preceded it.
Millennials and seniors by race/ethnicity, 2015
Education attainment
Compared to older generations at the same relative time in young adult life, millennials have attained higher levels of education, which, for their generation more than others, is tied to higher future earnings and well-being. More than a third of all millennials ages 25-34 achieved college educations by 2015, up from less than 30 percent for comparably aged young adults in 2000 and not quite a quarter for those in 1980.
Notably, postsecondary education attainment has risen for all racial and ethnic young adult groups. There have also been positive changes in related measures such as declines in high school dropout rates and increased college enrollment for all major ethnic groups. Still, there remain sharp disparities in education attainment across groups, with Hispanic and black millennials falling behind their Asian and white counterparts.
The housing bust and the Great Recession have affected millennials’ short-term, and potentially long-term, ability to buy homes. Nationally, homeownership rates have not shown long-term declines. They stayed relatively stable since the 1960s except for a housing boom from the late 1990s through 2006. The subsequent housing bust occurred just before most millennials entered the market. This tamped down their homeownership rate compared with young adults at earlier ages, as high interest rates, a reluctance to buy, and debt or low savings prompted many millennials to live with relatives or move to rental housing.
All racial groups registered recent housing-bust-related declines in homeownership, but this was especially the case for blacks who, along with many Hispanics, bore the brunt of fewer lower-cost, subprime loans amid a deficit of resources. This delay in homeownership may be robbing millennials of a head start toward a traditional means of wealth accumulation.
While the economy and employment have climbed back from the worst of the recession and post-recession years, as late as 2015, millennials were more likely to be in poverty than most baby boomers and Gen Xers at similar ages.
A 2016 GenForward Survey of millennials of different racial-ethnic groups found that blacks and Hispanics, in particular, consistently report more economic vulnerability than whites or Asians. Moreover, it has been estimated that the loss of wealth resulting from the foreclosure crisis between 2007 and 2009 disproportionately affected black and Hispanic families, making them less able to provide support for their own and their children’s education and home purchases.
Poverty rates of millennials, ages 25-34, by race/ethnicity, 2015
Marital status
Millennials are slower than earlier generations to get married, have children, and leave their parents’ homes. The median age of marriage was lowest during the 1950s—at age 20 for women and 22 for men. By 2015, these rose to ages 27 and 29, respectively. Allowing longer periods for higher education and rising women’s labor force participation have pushed up the ages of marriage and childbearing over the decades. However, the Great Recession and resulting housing crash led millennials to even further delay these domestic milestones.
The broad pattern toward delay in marriage has been followed by millennials in each racial and ethnic group. Blacks continue to exhibit the lowest share of persons who are currently married—halving their share, at ages 25-34, from 47 percent in 1980 to 23 percent. Just as with the national patterns, long-term shifts toward later marriage have been amplified for all groups by recent economic conditions.
Marital status of millennials, ages 25-34, by race/ethnicity, 2015
Where do millennials live?
100 largest metro areas, 2015
An inclusive, diverse America
Millennials are already making an indelible imprint on the nation as evident from the tremendous publicity they receive and the consumer base they represent. Yet, the most consequential characteristic embodied by the members of this unique generation, as the country evolves demographically, is their racial and ethnic diversity.
Despite coming of age in the midst of the Great Recession and the subsequent housing market crash, the racially and ethnically diverse millennial generation tends to be optimistic about the future. Amidst signs that the employment situation is improving, and indications that housing affordability is reviving, a majority of millennials say that they want to get married, have children, and purchase a home. Specifically, Hispanic, Asian, and black millennials are more likely than whites to say that they will do better financially than their parents and that the life of their generation will be better than that of their parents.
By example and as advocates, millennials of all racial and ethnic backgrounds can make the case that investing in a more inclusive America is essential to the nation’s economic success and will, as well, benefit older populations. As they move into middle age, millennials will represent the new face of America in business, in politics, in popular culture, and as the nation’s image to the rest of the world.
This report draws from a variety of U.S. Census Bureau data, including the Current Population Survey, the American Community Survey, census estimates and projections, as well as historical decennial censuses. It also presents metropolitan area projections conducted by the author. Millennials are defined in this report as persons born between 1981 and 1997.
Generation Z is bigger than millennials — and they’re out to change the world
The eldest member of Generation Z — the demographic born between 1996 and 2010 — is just 24, and yet the group’s dominance is already being felt.
Last year, they became the largest generation, constituting 32 percent of the global population — or 2.47 billion of the 7.7 billion people on Earth — surpassing both the millennials and Baby Boomers.
But their strength isn’t just in their numbers.
“They see the world so differently than those who came before them,” says Meghan Grace, author of “Generation Z: A Century in the Making.” According to Pew Research on American Gen-Zers, nearly half are ethnic minorities (48 percent) and a third know someone who uses gender-neutral pronouns (35 percent).
Most are pursuing college (59 percent compared to 32 percent of Gen-Xers at their age). They’re progressive but less partisan — a third decline to call themselves Democrats or Republicans. And they’re better at saving money (32 percent do it regularly compared to 23 percent of Gen-Xers when they were the same age), thanks to the ripple effects of the 2008 recession (when the oldest of them was just 13). “They’ve seen how an economic downturn can impact people’s lives,” says Grace.
While a recent survey by condom company SKYN found they’re having sex younger (starting at age 16 compared to 18 for millennials) and they’re more sexually adventurous (42 percent would have a threesome compared to 30 percent of millennials), 65 percent of them take safe sex seriously, an upswing from 54 percent of millennials, and they’re also bringing back traditional values like marriage (80 percent of them plan on getting married someday, finds youth marketing firm YPulse).
The suicide rate among people aged 10 to 24 has jumped 56 percent since 2007, according to the CDC, but this same cohort is more likely (27 percent) to report their mental-health struggles than millennials (15 percent) and Gen-Xers (13 percent), according to a report last year by the American Psychological Association.
And though Gen-Zers are less religious than any previous generation (about one third of them have no religious affiliation, and according to research by the Barna Group, 21 percent of Gen-Zers identify as atheist or agnostic, compared to 15 percent of millennials), they’re also more accommodating to religious minorities in the workplace than previous generations, according to a Becket Fund for Religious Liberty report.
Here, six Gen-Zers talk about their generation — and how they see themselves fitting into it.
‘I love old-timey books’
Sasha Raven Gross, 11
Boerum Hill, Brooklyn
Gross describes her generation in just three words: “Phone and screen absorbed,” she says. She doesn’t use the social-media platforms so popular with her peers — “I don’t have TikTok or Instagram,” she laughs. “TikTok is just weird” — but she does identify with Gen-Z’s frustration.
“One thing we all have in common is we don’t feel heard,” she says. “It’s like when you’re a little kid and you’re at a party and you’re trying to get people’s attention but nobody is listening to you and you’re like, ‘Pay attention! I’m trying to speak here!’ ”
Gross feels like she has a lot to say. Kids in her class have been talking about the situation in Iran “and saying things like, ‘It’s going to be World War Three.’ ” She describes herself as “actually kind of scared.” She gets particularly nervous when she hears about the immigration crisis. “It makes me scared for my mother,” she says, an immigrant from Taiwan. “I don’t know if she’d be allowed in if she was coming to this country today.”
She finds solace in books. Gross is a voracious reader, a fan of novels like “The Hate U Give” and “To All the Boys I’ve Loved Before,” and she has a special place in her heart for the Harry Potter series. She prefers physical books, or as she calls them, “old-timey books.” With e-readers like Kindle, it’s too easy to lose focus, she says, “and it’s more satisfying to turn paper pages.”
In general, she’s not all that interested in the digital world. The Internet is just something she has to use for homework. “The only time I go online is to look up something for school,” she says.
‘The Internet is a safe haven for me’
Keith Paris, 20
Crown Heights, Brooklyn
Paris embodies his own vision of Generation Z. He started his own YouTube channel not to become famous — he only has 371 subscribers — but to have something in his life that he controls completely.
“I’m the one who calls the shots,” he says. “I edit it, decide what I want to talk about. It’s completely mine.”
Paris uses the channel to discuss things like being gay — he came out while a sophomore in high school — and his life as an amputee. He was born without a tibia in his left leg, and since having it amputated, he wears a prosthetic just below his knee.
“Growing up, I never met a person who looked like me,” he says. “I wish I had. It would’ve made such a difference. It’s a big reason why I do this channel now. Representation is important.”
Paris’ childhood was less than happy. He says he was bullied, has experimented with self-harming, and was raised by Caribbean parents who didn’t understand his sexuality. But the Internet helped him realize he wasn’t alone.
“It was a safe haven for me,” Paris says. “I could connect with people who felt like me, who were going through similar problems. It made me realize, ‘Oh, I’m not the crazy one.’” As for online trolls, Paris isn’t concerned. “Just block ‘em,” he laughs. “I have so much love, I don’t have time for hate.”
‘I practice esports 7 to 8 hours a day’
Max Baber, 16
Kenosha, Wisconsin
During a campus visit to Bryant & Stratton College in Milwaukee, Baber was told by one of the esport directors, “In two years, I’m gonna come and recruit you.”
Recruit him for what? Being exceptionally good at playing a first-person shooter game called “Overwatch.”
There are more than 60 colleges and universities in the US that recruit esports players — those who excel at online multiplayer games like “Fortnite” and “League of Legends” — in the same way they do varsity athletes. At the University of California, Irvine, gifted gamers qualify for scholarships worth up to $6,000 to play on the school’s varsity team, and they play in front of sold-out crowds on a 3,500-square-foot arena on campus that the school built in 2016.
Baber has been playing “Overwatch” competitively since he was 14. To be in the same league as his egame idols, players like Sinatraa and Ryu Je-hong, requires intense training. “It’s no joke,” Baber says. “You have to practice 7 to 8 hours a day.”
He’s currently got his eye on the University of Wisconsin at Madison, which has had an esports team since 2009 and now offers a “genuine full-ride scholarship for esports,” and Bryant & Stratton, where he received what he calls an “indirect offer.”
Baber admits he’s been slacking on the video games this year because it’s been affecting his schoolwork. “It’s definitely still something I want to pursue,” he says of his gaming career. “But it’s such a young and fragile market. I feel like I need a plan B. I’m not going to put all my eggs in that basket.”
As for his generation, he says it’s “up to us to fix” the big issues like climate change. “If we don’t look for solutions, nothing’s going to change.”
‘My Christian faith defines everything I do’
Abigail Murphy, 21
Crown Heights, Brooklyn
One of Murphy’s first real memories, when she was just 4 years old, was 9/11. She was visiting Brooklyn with her family — her father, a pastor in Michigan at the time, was in town for missionary work — and she vividly recalls the confusion, the adults crying, and her parents taking Murphy and her four siblings for a walk, just to remind them that they were safe.
“So much of that day colored the way I view the world,” she says. “There was just this feeling that we needed to do something.” Her family moved to the city three months later, so that her father could open a church in the Hell’s Kitchen neighborhood, the Messiah’s Reformed Fellowship.
Today, Murphy is a senior at The King’s College in New York, where she’s majoring in humanities, and a member of the Intercollegiate Studies Institute, a conservative think tank founded by William F. Buckley. “My Christian faith defines everything I do,” she says. “But we should be willing to engage with people who think differently from us. That’s how we grow and evolve and recognize the beauty and diversity in the world.”
Murphy has mixed feelings about her generation. “From what I can see among my peers, we don’t understand how to achieve our dreams within reason and with patience,” she says. “Depression seems more common than ever in teenagers and young adults, and we tend to overanalyze everything.” Much of that has to do with social media, which she avoids. “I hate it,” she says. “It’s harmful to our psyche.”
But in the real world, she’s eager to have difficult conversations. She describes herself as conservative but not somebody “who thinks secularism or liberalism is trying to take over the world,” she says. “I want to ask questions, and listen. I’m passionate about my beliefs, but I don’t think we should shy away or shut down people who disagree with us.”
‘I’m exhausted by all the bad things in the world’
Kendy Rudy, 15
Neshanic Station, New Jersey
How to sum up her generation in a sentence? “We’re just over it,” says Rudy. “My friends and I are exhausted by all the bad things in the world over our lifetimes, and we’re just teenagers.”
Rudy is still fuming about how her attempts to organize a school walkout in 2018, as part of a nationwide response to the Parkland shootings, were sabotaged. “The high schoolers got to do it, but the principal pulled the middle schoolers into the auditorium and made us stay there the whole time,” she says. “We had to listen to him ramble on about how protests don’t accomplish anything.”
Rudy says she often wonders how she’d react during an actual school shooting. “All the time,” she says. “We’re reminded of it with every drill.” The shooting drills happen so often that none of her peers take them seriously. “The other kids just assume it’s a drill and not the real thing, so everybody talks really loudly, waiting for it to be over.”
She’s not mad at previous generations for the legacy they’ve left — “I think they did what they could, they’ve just run out of ideas” — but she doesn’t sound especially hopeful for the future. She wants to work in prison reform and is inspired by Ruth Bader Ginsburg (“I want her to be my grandma”), but at the moment her only focus is finishing high school “as fast as possible.”
“It’s too much,” she says. “Teachers will pile on homework every single night. The workload never ends. You just try to get a few hours of sleep before you have to get up and do it all again. Nobody cares what all of this stress is doing to us.”
‘I don’t get bent out of shape if people use the wrong words’
Avie Acosta, 21
Chinatown, Manhattan
In many ways, Acosta has a familiar story. A kid with big dreams and a cloistering family moves from a small town in Middle America — in her case, Edmond, Oklahoma — to the big city and reinvents herself. But Acosta’s journey is a little different. Born male, she first began transitioning in her teens. Feeling unsupported by her family, she moved to New York at 19, and after going on and off hormone treatments, she now identifies as gender nonconforming.
She came to New York with modeling ambitions — she booked her first fashion shows from Instagram, using photos taken “by my little brother in our family’s living room” — and has already worked for designers like Marc Jacobs, Moschino, and Random Identities, becoming the first transgender model to sign with talent management company IMG.
During her late teens, Acosta liked to describe herself as “unoffendable,” which was more wishful thinking than reality. “I was secretly so sensitive,” she says. But she’s finally reached a place where gender politics feels kind of meaningless.
“When people ask what my pronouns are, my response is either ‘I don’t care’ or for the sake of simplicity I might tell them ‘they, them.’ But I’m not going to be one of those people who gets bent out of shape if someone uses the wrong words around me. That’s just silly. It genuinely does not matter.”
She rarely socializes with her peers, preferring the company of Gen-Xers or older. “I don’t feel particularly connected to my generation,” Acosta says. “I’m not sure how I would define us. Maybe not a satisfying response, but that’s my truth at this moment.”
Millennials and homebuying: Real estate adapts to the largest generation
Millennials are the largest generation in the U.S., and everyone has been expecting members of this massive demographic cohort to reshape the housing market. So far, however, it hasn’t quite happened.
Not that millennials don’t value homeownership – two-thirds of them say it’s a central part of the American dream, according to Bankrate’s 2023 Financial Security survey. Still, it’s been a struggle for many aspiring millennial homebuyers to become homeowners.
These twenty- and thirty-somethings face a tough market. Home prices remain near record levels, and mortgage rates are much higher than they were two years ago. Low inventory, high inflation, expensive financing: The combination has created an affordability squeeze that’s forcing many millennials to keep renting.
Here’s a profile of this generation, their challenges when it comes to homebuying, and their behavior when they do become homeowners.
Millennials and homebuying statistics
There were 72.1 million millennials in the U.S. as of 2019.
Millennials represent 43% of homebuyers, the highest share of any generation.
Millennials in a changing housing market
Millennials are typically defined as those born between the early 1980s and the mid-1990s. Their entry into the real estate market has looked different from that of older generations. Generally, millennials are buying their first homes later than their baby-boomer parents. There are a number of reasons behind that delay, but high student debt loads and the lingering effects of career stagnation caused by the Great Recession are some of the most commonly cited reasons.
In the last few years, the triple whammy of pandemic-elevated home prices, tight inventory and rising mortgage rates hasn’t helped. Among millennial non-homeowners surveyed by Bankrate earlier this year, many cited paltry savings and too-high home prices as their reason for continuing to rent.
Saving enough money, in particular, continues to prove challenging. In Bankrate’s survey, over half (53 percent) of older millennials who aspired to homeownership said they could not afford the down payment and closing costs, citing that hurdle more than any other reason and more often than any other age group. Younger millennials blamed an array of affordability hurdles: not having enough income (49 percent), home prices being too high (47 percent) and not being able to afford the down payment and closing costs (42 percent).
Dovetailing with that is data from the National Association of Realtors (NAR). In its “2022 Home Buyers and Sellers Generational Trends Report,” 27 percent of younger millennials say gathering enough funds for a down payment is the “most difficult step” of the homebuying process, with 25 percent relying on a gift from family or friends to help with their purchase.
Reflecting the tight real estate market, Bankrate’s Financial Security survey found nearly two-thirds of Americans (64 percent) are willing to sacrifice to find affordable housing. Among millennials in that group, 33 percent would buy a fixer-upper, 32 percent would move out of state and 31 percent would downsize their living space.
In addition to being held back by financial considerations, many millennials are in a general pattern of reaching life milestones later. The average age for getting married has been rising, for example. In 2020, the median age for a man’s first marriage was above 30 for the first time in history, according to Census estimates, while the median age of a first-time bride was above 28, also for the first time. Subsequently, millennials are starting their families later. And they’re waiting to buy homes. In the NAR survey, the typical first-time homebuyer was 36 years old — up from 33 the previous year. That was an all-time high.
Millennials and home renovations
Since the pandemic, remodeling has been all the rage among American homeowners. Given their tight budgets and low rates of homeownership, millennials haven’t fully jumped into that game yet. They made up just 9 percent of homeowners who renovated in 2022, according to the “2023 US Home & Houzz Study,” a survey by home remodeling platform Houzz.
Still, millennials’ median spend on renovations has increased by 33 percent compared with 2021 and doubled since 2020. It’s now at $20,000.
Houzz found that millennials did more home system upgrades than other generations, with automation and security enhancements being their top priorities. And reflecting work-from-home trends, home office upgrades also were more popular among millennials than among members of any older generation in 2022.
To pay for renovations, most millennials (88 percent) use cash from savings. However, 35 percent also use credit cards, and they’re more likely to use them than older generations do. Only 15 percent of millennials used a secured home loan.
Tips for millennial homebuyers
If you’re looking to become a homeowner, there are a few key bits of advice to keep in mind:
Shop around with multiple mortgage lenders to make sure you’re getting the best deal. It’s not just about interest rates, but the all-in costs and other terms and conditions on your loan.
Make a budget and stick to it. You don’t want to wind up with more house than you can afford. Keep that budget going once you move, too. In Bankrate’s survey, the top regret for millennial homebuyers was maintenance and hidden costs being more expensive than expected (expressed by 42 percent of those with buyer’s remorse). You’ll want to be ready to cover the ongoing expenses, plus whatever issues inevitably crop up.
Be strategic in financing home renos. Using credit cards to pay for home improvements is a risky move, considering their double-digit interest rates: The average interest rate on credit cards as of mid-May was just above 20 percent, according to Bankrate’s national survey of lenders. In contrast, the average rate on a home equity line of credit (HELOC) was around 8 percent. The interest could be tax-deductible as well, if you itemize on your return. You could also consider a home equity loan, which offers a slightly higher but fixed interest rate. A rough interest-cost comparison is sketched just after this list.
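To put those rates in perspective, here is a minimal, illustrative Python sketch (the article itself contains no code) comparing roughly what a year of interest would cost on a $20,000 renovation balance, the median millennial renovation spend cited above, at the quoted roughly 20 percent credit card rate versus the roughly 8 percent HELOC rate. The "carry the full balance for a year with simple, non-compounding interest and no payments" assumption is ours, not Bankrate's, and real products accrue interest differently.

# Illustrative comparison only. The ~20% card rate, ~8% HELOC rate, and
# $20,000 renovation figure come from the article above; the one-year,
# simple-interest, no-payments assumption is an added simplification.

def one_year_interest(balance: float, annual_rate: float) -> float:
    """Interest accrued over one year on a balance carried unchanged."""
    return balance * annual_rate

renovation_balance = 20_000   # median millennial renovation spend cited above
credit_card_rate = 0.20       # average credit card rate quoted in the article
heloc_rate = 0.08             # average HELOC rate quoted in the article

card_interest = one_year_interest(renovation_balance, credit_card_rate)
heloc_interest = one_year_interest(renovation_balance, heloc_rate)

print(f"Credit card: about ${card_interest:,.0f} in interest over one year")
print(f"HELOC:       about ${heloc_interest:,.0f} in interest over one year")
print(f"Difference:  about ${card_interest - heloc_interest:,.0f}")

Run as written, this prints roughly $4,000 of interest for the card versus $1,600 for the HELOC, a gap of about $2,400 in a single year, which is the arithmetic behind the "risky move" warning above.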
Generation Z: What’s Next?
The current generation of learners in medical education, Millennials, has been extensively characterized. This work has helped to inform teaching innovation and curricula over the past decade. However, a new generation will soon enter medical school—Generation Z (Gen Z).
Gen Z, known by a variety of monikers including iGen, Plurals, Founders, Pivotals, and the Homeland Generation, is made up of individuals born from around 1990–2010. This group makes up the largest percentage of the US population and is the most diverse to date. This, combined with the large amount of discretionary spending money at their disposal, has led to their intensive study by the business sector [1]. Even this early in their lives, it is apparent that Gen Z students are not simply young Millennials.
It is necessary to review the origins of commonly held characteristics of the Millennial generation that currently makes up the majority of medical students and residents in comparison with Gen Z learners. Millennials came of age during the relatively prosperous and peaceful 1990s, leading to an optimistic (and sometimes altruistic) outlook on the world around them. Their childhoods were more structured and scheduled than any previous generation and they had a great deal of influence on their family environment, the result of which is less comfort with ambiguity and a preference for informal communication with teachers and leaders. Their Baby Boomer parents tended to practice “helicopter” parenting, always being present for crisis management or to aid in logistical matters, translating into a need for more ready access to support systems than their predecessors [2]. During their youth, two messages were prevalent in their homes, schools, clubs, and popular media: “You can be anything you want to be” and “Participation in the team is the most important thing.” [3, 4]. These messages have led Millennials to have higher self-esteem compared with other generations and an affinity for group work [3].
Gen Z children were products of the post-9/11 world, a time of economic lability, political polarization, and multiple foreign wars. The media they consumed was more focused on negativity, with almost every Gen Z child seeing a popular figure they idolized suffer failures or scandals in full public view. However, they were also witness to advances in equality such as the realities of an African American president and strides related to gay marriage [1]. Their Generation X parents often have had a “CIA” parenting style, using technology to be involved in their children’s lives and academic progress with less visible presence in day-to-day matters. Gen Z’s parents also tended to espouse their own preferences related to independence, identifying and coping with shortcomings, and skepticism with established processes and trends. These factors have led Gen-Zers to have a more pragmatic view of the world than Millennials, manifested as a higher prevalence of risk aversion, financial frugality, and an expectation that they will need to work harder than the generation that preceded them [3, 5].
Technology has played a pivotal role in shaping Gen Z’s preferences. At least 75% own smart devices and access them multiple times per hour [6]. Most spend at least 9 h interacting with digital content daily [5]. Online videos are the preferred information source, with 95% watching YouTube every day. A typical Gen Z individual views approximately 70 videos daily, and at least two-thirds go to online videos for everyday instructional information [7]. However, Gen Z does not just consume material. More than half regularly create their own online content with a substantial number posting videos on social media platforms at least weekly [5, 7]. Electronic gaming is also important to Gen Z; it is estimated that many spend more than an hour each day playing video games [7]. Like their Millennial predecessors, Gen Z prefers electronic methods of communication with texting, mobile messaging apps, and social media being used most often [5, 7].
Experience with technology and in the online world has affected Gen Z’s preferences. Gen Z students expect on-demand, low barrier access to all information [7, 8], often selecting sources that package information in “bite-sized” pieces [9, 10]. Having grown up in the era of user reviews, they expect the ability to provide and receive real-time feedback, as well as having access to that provided by their peers [3, 7]. They also place priority on personalization and relationships, including institutions and authority figures [3].
Secondary and undergraduate teachers have noted other differences between Millennial and Gen Z learners. Gen Z students seem to have a greater tendency for “DIY” (task-oriented) and multichannel information gathering [11]. In the era of pushed information and “hyperlinks,” some have noted a reduced ability to form conceptual connections and greater difficulty distinguishing fact from opinion online [12]. Gen Z has also been shown to have a higher tendency to task-switch, shifting rapidly from one activity, task, or information source to another [13].
These lessons can inform pedagogical strategies to connect with Gen Z medical students. When developing curricula, skills to stress will likely include linkage of concepts, framing of questions, vetting of online content, and etiquette related to both providing and receiving feedback. Asynchronous content may be best received if it is video-based and personalized [7, 8, 14, 15]. Instructor-developed resources may not be the primary ones used by students [7]. As such, it will be critical for educators to be knowledgeable in the external sources Gen Z students are using, aid them in selecting those of high quality, curate recommended ones, and (perhaps) incorporate them into their lessons or learning plans [1]. Frequent use of reflection activities may be popular with Gen Z learners, playing into their social media experience [16]. Finally, real-time feedback on progress, perhaps using tools such as dashboards, will be of importance to Gen Z medical students [3, 5, 15].
Despite the generation’s technophilia, classroom interactions, and relationships with their instructors, both within and outside school, are rated as the most critical aspects of their learning [7, 14, 16]. These students value the personal experiences and discussions of practical applicability that the instructor-student dynamic enables. Of note, these interactions do not necessarily need to occur physically; virtual/online interactions and those occurring via social media or text messaging are of equivalent value to face-to-face meetings [3, 8].
In summary, while many of the trends noted among current Millennial medical students will continue among Gen Z learners, there are differences in the perspectives, preferences, and expectations of Gen Z students that may impact how they approach their professional training. Knowledge of these differences will help instructors better connect with the next generation of students and continue to develop effective, student-centered educational programs.
Demographics | Are millennials the largest generation in the U.S.? | no_statement | "millennials" are not the "largest" "generation" in the u.s.. the "largest" "generation" in the u.s. does not consist of "millennials".. "millennials" do not make up the majority of the u.s. population. | https://www.governing.com/now/how-much-could-younger-voters-affect-future-election-outcomes | How Much Could Younger Voters Affect Future Election Outcomes? | How Much Could Younger Voters Affect Future Election Outcomes?
Millennial and Gen Z Americans will be the majority of the electorate in 2028. But predicting which party will benefit will be challenging. These young voters care more about policy than party, according to experts.
A crowd of supporters at the Pittsburgh stop on a tour by Sen. Bernie Sanders aimed at registering young voters before the 2022 midterm elections.
(Jeff Swensen/TNS)
In Brief:
Gen Z and millennial voters are on their way to becoming the majority of the voting population.
In 2026, Gen Z will become the first majority nonwhite generation.
Some think these trends could benefit Democrats, but researchers find that the life challenges young voters face make them care more about policy than party.
Generation Z, comprised of Americans born since 1997, is the largest generation in American history. It’s also the most diverse, so much so that some have proposed that it be called the “plurals” to reflect the racial and ethnic pluralism that will make Gen Z the first majority nonwhite generation.
By 2028 Gen Z and its predecessor, the millennials (born 1981-1996), will make up over half of the voting population. They’ll come close (48.5 percent) by 2024.
Some see voting preferences in the midterm elections as a clear sign that political winds will shift significantly in coming years.
What’s in a Name?
Giving a group name to persons born within a certain time period may have functional value, but RAND Corporation sociologist Marek Posard cautions against assuming that members of these groups share “generational” characteristics. In fact, he notes, the baby boomers are the only cohort recognized as a "generation" by the Census Bureau.
The “baby boom” was a distinct and significant demographic event. American soldiers deployed overseas during World War II returned at about the same time, and the country experienced its largest ever year-over-year increase in birth rate. “When that rate came back down, that was the end of the baby boom generation,” Posard says.
Birth year is a one-dimensional data point. It does not take into account the differences in urban and rural attitudes and experiences. Posard’s research suggests that it is not a reliable predictor of future priorities. (Baby boomers — once feared as rule breakers whose sexual freedom, drug use and mass protests would overturn the social order — were more likely to support Republican candidates for Congress in 2022.)
“Millennials who are in their thirties might be getting married, trying to have children, buying a home, paying off their student loans and that's going to be a whole new set of stressors," Posard says.
What it means to be a “Democrat” or a “Republican” is also changing continuously, he observes. Once a group of people united by at least a roughly common worldview, “Parties are [now] just patchworks of interest groups glued together, and they will shift and change to win votes — we have realignments every few decades.”
Even if grouping young voters into generational categories has limitations for forecasting behavior, there are some things that are different about Gen Z and millennial youth.
Diversity is the Baseline
The most clear-cut and immutable characteristic of Gen Z is its diversity. By 2026, it will be the first majority nonwhite generation in American history. Millennials are more than twice as diverse as boomers.
A survey published in February by the Public Religion Research Institute and the Brookings Institution found that more than 5 in 10 Republicans believe the U.S. should be “a strictly Christian" nation. Nearly as many Gen Zers and millennials say that they have no religious affiliation at all.
The Human Rights Campaign reported recently that it is tracking 340 anti-LGBTQ+ bills currently in state legislatures. (Of the 315 “discriminatory” bills it says were introduced in 2022, 29 became law.) A broader range of sexual preference is another attribute of younger Americans, though it’s not possible to know if they are also more willing than previous generations to be honest about such things.
Millennials are the most-educated generation. Almost 40 percent have bachelor’s degrees or higher, something only 15 percent of baby boomers had attained by the same age.
Policy, Not Politics
Ruby Belle Booth is the elections coordinator for the Center for Information and Research on Civic Learning and Engagement (CIRCLE), part of the Tisch College of Civic Life at Tufts University. She oversees research efforts aimed at increasing understanding of how young people engage with elections and democracy itself.
Rather than taking a “generational” approach, CIRCLE focuses its work on 18- to 29-year-olds, consistent with the Census Bureau’s definition of “young adults.” At present, this group is about evenly made up of Gen Zers and millennials.
“Starting with millennials we see a movement away from a lot of traditional institutions, both politically and otherwise, that has continued into Gen Z,” Booth says. “You see it with religion and marriage, and you also see it with political parties.”
One way this manifests is that these voters are loyal first to their policy priorities, not to any party. Young voters don't have high degrees of trust in either party, and are frustrated by the polarization and stagnation of recent years.
One of the big challenges CIRCLE has identified — and one of the big opportunities for candidates — is that many young people lack access and exposure to information about elections. They are aware of the power that the right to vote gives them, but unwilling to sign off on things they don’t understand.
The lack of infrastructure within schools, workplaces and the culture to foster this understanding is as much a barrier to participation as restrictive election laws, Booth believes.
Candidates and campaigns do a poor job of reaching beyond college campuses to engage young people and could learn from community-based organizations. Third parties are stepping up to fill the void. For example, BallotReady, an election-focused startup launched in 2015 by a pair of graduate students, attempts to fill the voter information gap by helping millions of constituents across the 50 states with data on hundreds of thousands of candidates and elected officials through a customized digital platform.
Climate is an imminent threat to younger voters in a way it has not been to previous generations. They see gun control through the eyes of survivors of an era of school shooting tragedies. “There are certain things that the youngest generation is not going to back down on,” Booth says. “Abortion is non-negotiable.”
Inflation was the top concern in a post-midterm survey conducted by CIRCLE. Unlike drag queens and anti-racism library books, the economic situation in the country is a real threat to the survival of millennial and Gen Z voters. The American dream of homeownership is farther away from them than it ever was for baby boomers.
Unsurprisingly, defending equality is a priority reflected in all of CIRCLE’s polling of young voters. “It's a belief that's so core to their approach to things that when they're thinking about economic issues or reproductive rights, they're thinking about how those things apply to people of all sorts of backgrounds,” Booth says.
A Culture Transforming
Does either party have a certain advantage in winning support from younger voters? “Both parties course correct eventually, and history proves that they tend to find that center,” says Posard. “That kind of throws a wrench into forecasting for generational changes.”
If economic stresses become great enough, realistic and honest economic solutions alone could win the day. (Some Gen Zers are secretly hoping for a recession that could give them a shot at homeownership.)
The safest prediction may be that the nature of the emerging electorate will inevitably transform politics. The first nonwhite majority will want to see candidates for office at every level who look like them, and the pool from which such candidates can be drawn will be bigger than ever.
“So many young people believe deeply in the power of their own voice,” says Booth, a member of Gen Z herself. “Hopefully, combining that with the beautiful diversity of this generation will lead to serious and much-needed change in our political and democratic systems.”
Chronobiology | Are morning people more productive than night owls? | yes_statement | "morning" "people" are more "productive" than "night" "owls".. productivity is higher in "morning" "people" compared to "night" "owls". | https://www.lifehack.org/313462/this-why-morning-people-are-more-likely-successful-backed-science | This Is Why Morning People Are More Likely To Be Successful ... | Night people (those who are most alert at night, and typically stay up long after dark) might be a bit smarter than morning people, according to a report published by Roberts and Kyllonen in a 1999 issue of Personality and Individual Differences. But, morning people (those who are up and about early in the morning, roughly the same time even on weekends) are more likely to be successful.
That might come as a shocker to you, but it is scientifically proven. Here’s why morning people are likely to be more successful than night or evening people, backed by science:
1. They are more proactive
Christoph Randler, a biology professor at the University of Education in Heidelberg, Germany, reported in a paper published in the Journal of Applied Social Psychology that morning people are more proactive than evening types. He described proactivity as the willingness and ability to take action to change a situation to one’s advantage.
Because morning people tend to be more proactive than evening people, they do well in business, Randler said. In an interview on Harvard Business Review Randler noted:
“When it comes to business success, morning people hold the important cards. They tend to get better grades in school, which gets them into better colleges, which then leads to better job opportunities.”
This finding makes sense because, in theory, early in the morning is when your mind is most rested, your motivation is highest, and there are relatively few distractions. The mind is most creative at night, but most productive in the morning. This might explain why morning people tend to rule the world – winning the promotions and high-level contracts.
2. They are less prone to bad habits and drug abuse.
Not that evening types are always ill-mannered and drug-dependent. If anything, night owls tend to be smarter and more creative. But morning “larks” hit the sack at a respectable evening hour (typically in bed before 11 p.m.). That seems to make them a little less vulnerable than night people to bad habits—namely, drinking, smoking, and even infidelity.
A number of studies support this assertion. One study of 537 individuals, comprising professionals and students with different but regular work schedules, found that night types consume more alcohol than morning larks. Another study of 676 adults from a Finnish Twin Cohort found that night people were much more likely to be current or lifelong smokers, much less likely to stop smoking, and at much higher risk for nicotine dependence, per diagnostic criteria, than morning folks.
These findings are not entirely surprising considering that the nightlife is more conducive to drinking and infidelity.
3. They are conscientious, less showy, and thus more agreeable
The tendency to drink and smoke more among night people is associated with a trait that psychologists call “novelty-seeking” or simply NS.
According to Psychology Today, NS is “a personality trait associated with exploratory activity where someone seeks new and exciting stimulation and responds strongly from the surge of dopamine and adrenaline released when anyone has a novel experience.”
Numerous studies have linked night people with this “novelty-seeking” characteristic. Randler and a colleague also studied the relationship between morningness–eveningness and temperament in adolescents ages 12 to 18. They found that evening types tend to display an extravagant approach to reward cues (in other words, they are showoffs). Morning people are more conscientious and less showy, and thus more agreeable. Agreeableness is a positive trait that can help in the pursuit of success, though not always.
4. They procrastinate less
A 1997 study led by delay researcher Joseph Ferrari of DePaul looked at college students and found that trait procrastinators referred to themselves as “night” people. Ferrari discovered there is a link between procrastinating behaviors and a general preference to do activities in the evenings. This finding that evening people tend to be worse procrastinators was based on six days of daily task records.
In 2008, a team of researchers that included Ferrari did a follow up study on procrastination. This time they looked at adults with a mean age of 50. The findings of the earlier study held true. Once again night people were associated more with avoiding tasks that needed to be completed. The 2008 study was reported in the Journal of General Psychology.
Given that putting off impending tasks to a later time, sometimes to the “last minute” before a deadline, can create problems, the researchers also hinted that this general tendency to delay tasks until nighttime may cost night people career success. That’s particularly true at jobs where a strong daytime work ethic is expected or required.
5. They have better moods and tend to be happier
That’s the argument put forth in a 2012 paper by Dr. Lynn Hasher and Renee Biss, psychologists at the University of Toronto. The researchers assessed a sample of 297 older adults (59 to 79) and 435 young adults (17 to 38) on their current moods, as well as their preference for mornings or nights. They found that morning people were generally happier and more alert than their peers who sleep in.
One reason night people might find it harder to stay alert and feel less happy than morning people is because of the disconnect between their nighttime preferences and conventional daytime expectations. Generally, night people are out of sync with the typical day-to-day schedule. They often have to force themselves to wake up early and perform at their peak during the day, which leaves them emotionally drained, and can even cause them sleep loss. Social scientists call this effect “social jetlag.”
For morning people, everything is as it should be. Morning people are happy with the typical day’s schedule.
“Waking up early may indeed make one happy as a lark,” wrote the researchers.
And who's to say when you're happy and alert and proactive you can't perform better?
Chronobiology | Are morning people more productive than night owls? | yes_statement | "morning" "people" are more "productive" than "night" "owls".. productivity is higher in "morning" "people" compared to "night" "owls". | https://www.entrepreneur.com/science-technology/yes-you-are-more-productive-in-the-morning-heres-why/444737 | Yes, You Are More Productive in the Morning. Here's Why ... | Sorry Snoozers, Science Says You Are More Productive in the Early Morning. Here's Why.
Here's why early morning routines are better for productivity and your mental health.
What is more important to you: sleeping in and starting your workday later or waking up earlier and completing all your tasks? My guess is that you have already figured out whether you are a morning person or a night person by now. However, the secret to success for many people in business, sports, and art is to get up early.
For example, Dwayne "The Rock" Johnson starts every day at 3:30 a.m., Apple CEO Tim Cook rises at 3:45 a.m., and Ellevest CEO and co-founder Sallie Krawcheck gets up at 4 a.m. Oprah Winfrey, Michelle Obama, and Indra Nooyi have also been known to rise before sunrise.
The founding editor of mymorningroutine.com, Benjamin Spall, has interviewed hundreds of successful figures about their morning routines. "It's not a coincidence that all of these people have routines," he tells CNBC Make It.
But why exactly are they so productive in the morning? Well, let's find out.
Sure, daylight boosts vitamin D production, making larks more productive. However, for early risers, it is also the feeling of having the entire day ahead to accomplish whatever we had planned in advance.
2. They tend to be more proactive.
According to Christoph Randler, a biology professor at the University of Education in Heidelberg, Germany, early birds do better in business.
"When it comes to business success, morning people hold the important cards," Randler told the Harvard Business Review of his research, first published in the Journal of Applied Social Psychology. "[T]hey tend to get better grades in school, which gets them into better colleges, which then leads to better job opportunities. Morning people also anticipate problems and try to minimize them. They're proactive."
The reason this makes sense is that, in theory, early in the morning is when your brain is rested, your motivation is high, and you're less distracted. While a person's creativity is strongest at night, his or her productivity is strongest in the morning. It is possible that this is the reason why morning people tend to be promoted and win high-level jobs.
3. There is a greater level of physical activity among them.
Early birds are more likely to pick up hobbies that require moving around more during the day, as they have plenty of time to do so. It doesn't matter whether you are playing sports, taking long walks, or commuting to work. Exercise relieves stress, gives our brains a break, improves focus, and just makes us feel better. As we get more satisfied, we're more willing to take on challenges, which leads to an increase in productivity.
Researchers, however, suggest that morning larks aren't necessarily predisposed to be better at physical activity. The problem is also related to the fact that night owls don't have enough opportunities to exercise between the hours of 8 p.m. and 2 a.m. when their energy peaks. In the evenings, for instance, outdoor activities become increasingly limited. This is another example of how nature is designed to benefit early risers.
4. The early birds eat healthier.
Obviously, early birds are not immune to junk food either. Yeah, I'm the first to admit that I can't resist the occasional pizza. When it comes to heavy foods at night, however, I usually refrain from eating them. The reason? I don't want my stomach digesting all night when I'm going to bed soon; that's a recipe for a sleepless night.
It has been observed that diet choices are less favorable for night owls. When working at night, their energy levels can fluctuate wildly. To stay up and running, the body requires more fuel, which leads to unhealthy snacking or drinking. In the case of larks, this isn't a problem, since they sleep all night long.
5. Drug abuse and bad habits are less likely to occur in them.
Early birds usually go to bed before 11 p.m., which makes them less vulnerable than night people to bad habits such as smoking, drinking, and cheating.
There is evidence to support this assertion in a number of studies. Researchers found that those with evening work schedules consumed more alcohol than those with morning work schedules, based on a study of 537 individuals. According to data from a Finnish Twin Cohort of 676 adults, nighttime people are much more likely to smoke, less likely to quit, and more likely to develop nicotine dependence than morning people.
Due to the nightlife's conduciveness to drinking and infidelity, these findings are not entirely surprising.
6. They are conscientious, less showy, and more agreeable
Continuing from the previous point, drinking and smoking more are associated with the trait psychologists call "novelty seeking," or NS.
According to Psychology Today, NS is "a personality trait associated with an exploratory activity where someone seeks new and exciting stimulation and responds strongly from the surge of dopamine and adrenaline released when anyone has a novel experience."
There have been numerous studies linking night people with this "novelty-seeking" behavior. In addition, Randler and a colleague also examined the relationship between morningness and eveningness and temperament in adolescents between the ages of 12 and 18. As far as rewards are concerned, evening types tend to be extravagant in their approach.
In general, morning people are more conscientious and less showy, which makes them more agreeable. Though not always helpful, agreeableness can help in the pursuit of success.
7. They have more time, clarity, and control.
Did you know that waking up just one hour earlier each morning would give you 15 additional days a year if you maintain your normal sleep pattern? Consequently, it is no secret that many successful people wake up early in order to have uninterrupted time to do their own thing.
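That figure is easy to sanity-check. The snippet below is purely illustrative and assumes one extra waking hour on each of 365 days, counting a "day" as a full 24 hours:

```python
# Rough arithmetic behind the "15 extra days a year" claim (illustrative only).
extra_hours_per_day = 1
days_per_year = 365

extra_hours_per_year = extra_hours_per_day * days_per_year  # 365 hours
extra_days_per_year = extra_hours_per_year / 24             # about 15.2 full days

print(f"{extra_hours_per_year} extra hours is roughly {extra_days_per_year:.1f} extra days per year")
```

Counted in 16-hour waking days instead, those same 365 hours come to nearly 23 days.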
Getting up early helps you get organized, think strategically, and plan. Additionally, many early risers report being more creative and inspired in the mornings.
The hours between 5 and 6 a.m. aren't when most people are awake. As a result, it's a terrific opportunity to work for yourself rather than for others. Moreover, you won't be distracted by texts, emails, or phone calls, and even social media is quiet. In the early morning, everything is calm and the world is at a standstill, giving you time to yourself. In the morning, productive people exercise, read, have breakfast, and map out their day. And it's also likely that you will encounter fewer distractions from your colleagues if you arrive first.
8. They procrastinate less.
According to a 1997 study authored by delay researcher Joseph Ferrari at DePaul, trait procrastinators call themselves "night people." According to Ferrari, procrastinating behaviors are associated with an evening preference. Based on six days of daily task records, it was found that evening people tend to be worse procrastinators.
A follow-up investigation of procrastination was conducted in 2008 by a research team that included Ferrari and reported in the Journal of General Psychology. This time they examined adults with a mean age of 50. The earlier findings held up: once again, night people were more likely to avoid tasks that needed to be completed.
Research suggests that putting off tasks until nighttime may cost night people career success if they tend to delay tasks until the "last minute" before a deadline. This is particularly true at jobs expecting or requiring a strong work ethic during the daytime.
Additionally, teammates and coworkers become more productive when they cooperate, and receiving feedback in a timely manner helps them establish work and personal boundaries. Just be sure to set personal and professional boundaries of your own if you finish your work before sunset.
Side note: early in the morning was when I found it easiest to make money online. I was working a full-time job and started waking up at 5:30. Instead of going to the gym, I focused on side gigs, especially passive-income side gigs, which helped me grow my passive income to over $45,000/month over 12 years. It didn't happen overnight, but that extra 2.5 hours a day really added up.
The Downside of Being a Morning Person
While there are many research-based benefits listed above, there are some drawbacks to being a morning person.
Getting up early isn't for everyone.
It may not be possible for you to wake up early regardless of how strong your resolve is and how loud your alarms are. "Whether you're an early bird or a night owl, that's a genetic predisposition," sleep specialist Michael Breus told Fast Company. "There's only so much you're going to be able to do to try to change that." Early birds may be more productive during the workday, but your chronotype, meaning your natural sleep pattern, determines when during the day you are most productive.
As a result, Breus suggests leaning into your chronotype rather than blindly following the habits of early birds, who constitute only 15% of the population. About 1 in 2 people have relatively "normal" sleeping habits; they function best when they don't stay up too late or wake up too early and wake up at the same time every day. Breus describes naturally late risers–about 20% of the population–as wolves, people who sometimes struggle to get up early but are more productive at night.
In a study led by researcher Talayeh Aledavood, night owls turned out to have significantly more connections. They communicated more frequently and organized gatherings more often. Even better, night owls have a habit of finding other owls quickly. Aledavood was surprised to find larks lacking in this respect: the majority of their social time was spent alone, since their schedules centered more around the morning and early afternoon.
The study was the first to confirm that night owls have stronger and bigger social networks, as Aledavood herself stated. While morning larks rule the working world, owls rule the social world.
Getting up earlier may be detrimental to your life.
Are you a late sleeper? It's possible that you're smarter than your early-bird peers.
The Daily Mail reports that people who sleep in tend to be smarter than those who wake up early.
In addition, late risers are more energetic.
In one experiment, participants slept and awoke according to their normal schedules and were given a variety of tests throughout the day. Shortly after waking, everyone performed well on their tasks. After 10 hours awake, however, the night owls performed significantly better than the early birds, despite both groups having been up for the same amount of time.
Dr. Philippe Peigneux, of the University of Liege in Belgium, said: 'During the evening session, evening types were less sleepy and tended to perform faster than morning types.'
Further, a study conducted by the University of Westminster found that those who rise earlier have higher levels of the stress hormone cortisol. And Lisa Artis of the Sleep Council says there's no evidence that waking up early gives you an advantage: "While a minority may be part of the 'sleepless elite,' the majority are probably well versed at masking the signs of exhaustion."
"In today's busy world, we're all very eager to believe that sleeping one hour less will give us one more hour of productivity, but in reality, it's likely to have the opposite effect," she explains. "Natural sleep has restorative functions -- it detoxes the neurotoxic waste that accumulates when you're awake. Too little sleep, and this waste remains. Lack of sleep can be dangerous in other ways: it is one of the main contributors to burnout in top business leaders."
Your Wake-Up Time Doesn't Always Matter
To be honest, you'll only be more productive in the morning if that aligns with your circadian rhythm. If you're more of a night owl, you won't be at your best bright and early. So, instead of being concerned about when you wake up, focus on getting a good night's sleep.
Make sure you get eight and a half hours of sleep every night.
In order to maximize your productivity throughout the day, sleep scientist Daniel Gartenberg recommends getting eight and a half hours of sleep each night.
You should be asleep by 7:30 p.m. if you plan to set your alarm a few hours earlier. Perhaps that's possible for you. But it's unattainable for parents with nine-to-five jobs. Moreover, it's going to be hard to keep up with your social life even if you don't have children.
In other words, if you focus more on getting adequate sleep, you'll wake up with a clear mind and be more competitive than the person who stayed at the office until 9 p.m., went to bed at 11 p.m., and woke up at 4 a.m.
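To make the arithmetic concrete, here is a minimal illustrative sketch of the bedtime a given alarm forces on you; the 8.5-hour target is Gartenberg's recommendation above, and the 4 a.m. alarm matches the example in the previous paragraph:

```python
from datetime import datetime, timedelta

# Illustrative only: the bedtime required for a given wake-up time and sleep target.
sleep_target = timedelta(hours=8, minutes=30)  # the 8.5-hour recommendation above
wake_up = datetime(2024, 1, 2, 4, 0)           # assumed example: a 4:00 a.m. alarm

bedtime = wake_up - sleep_target
print(bedtime.strftime("%I:%M %p"))            # 07:30 PM the previous evening
```

Shift the alarm to 6:30 a.m. and the required bedtime becomes a far more realistic 10 p.m.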
At night, limit exposure to blue light.
The digital world has made it nearly impossible to get to bed early in the past few years, according to research. LED bulbs, the blue glow of smartphone screens, and every light in the house still on at 11 p.m. confuse the body into thinking it's daylight.
Our bodies haven't evolved much since prehistoric times, when sunlight governed our lives and our wake-up and sleep times were dictated by the sun. Because technology advances faster than evolution, we now keep functioning until our bodies are near exhaustion.
After 8 p.m., limit your screen time and only turn on the lights that are necessary. It won't take long for you to re-adjust when you keep your body's usual rhythm.
Be sure to listen to your body.
Throughout the day, you probably experience periods of increased alertness and periods of low energy. You have a "chronotype," or personal circadian rhythm, that determines this pattern. Although they tend to run in families, they vary from person to person.
In most cases, people fall into one of two categories:
Early birds. Some research suggests that an early bird's body clock may run slightly faster than 24 hours if they have the most energy first thing in the day.
Night owls. Studies suggest that evening people's body clocks run slower, stretching past 24 hours. You'll have difficulty waking up in the morning and feeling alert, and you'll have the most energy towards the end of the day, around 11 p.m.
Chronotypes aren't set in stone, however. As we age, our circadian rhythms change. Due to the body clock shift during adolescence, for example, teens want to stay up later into the night and sleep longer in the morning.
Moreover, depending on your work schedule or school schedule, you may have to change your sleep habits.
It's possible to change your circadian rhythm yourself. But make sure to do it slowly. For instance, during the week, wake up 15 minutes earlier every day.
Engage in more physical activity.
Anybody who's ever tried the age-old advice will confirm that it works. You will be more likely to get to sleep earlier and sleep better if you do more physical activity during the day.
Spend some time in the park with your family, pets, or friends, or engage in some sports activities, gardening, or hiking. Also, walking can often suffice.
Be consistent with your hours.
Commit to your sleep schedule once you find one that works for you.
According to a 2017 Harvard study, it makes little difference whether you go to bed and get up early or late. Keeping a consistent schedule is the most important thing.
Over a month, researchers studied the sleeping habits of 61 students and correlated their academic performance with their habits. In contrast to students who slept and woke up at the same time every day, those who had irregular hours had worse grades.
Discuss your sleep schedule with your boss if it doesn't align with your work hours since many businesses are adjusting office hours to accommodate their employees' internal clocks. If you're in a leadership position, then you can set your own hours.
Avoid hitting the snooze button.
"By dozing off for those extra minutes, we're preparing our bodies for another sleep cycle, which is then quickly interrupted -- causing us to feel fatigued for the rest of the day that lies ahead," sleep expert Neil Robinson said in an interview with The Independent.
Chronobiology | Are morning people more productive than night owls? | yes_statement | "morning" "people" are more "productive" than "night" "owls".. productivity is higher in "morning" "people" compared to "night" "owls". | https://www.cnbc.com/2019/02/15/study-reveals-if-night-or-morning-people-have-brain-function-advantage.html | Study reveals if night or morning people have brain function ... | Early birds vs. night owls: How one has an advantage at work, according to science
You probably know whether you're a morning or night person. Now science has some good news for the early risers: Apparently the early bird really does get the worm.
There are fundamental differences in brain function between night owls and early birds, and night owls may have impaired function during regular work-day hours, according to a new study published Thursday in the academic journal "Sleep."
Researchers at the University of Birmingham looked at the brain function (among other things) of 38 people who were categorized as either night owls, who had an average bedtime of 2:30 a.m. and a wake-up time of 10:00 a.m., or morning larks, who had an average bedtime of 11 p.m. and a wake time of 6:30 a.m.
Participants underwent MRI scans, were asked to complete a series of tasks and participated in testing sessions at different times during the day between 8 a.m. and 8 p.m., while also being asked to report their level of sleepiness.
Overall, researchers found that night owls had lower resting brain connectivity in ways that are associated with poorer attention, slower reactions and increased sleepiness throughout the hours of a typical work day.
Meanwhile, brain connectivity in the regions of the brain that can predict better performance and lower sleepiness were significantly higher in larks at all times, "suggesting that the resting state brain connectivity of night owls is impaired throughout the whole day." (The "resting state" of the brain, Live Science notes, means not doing a particular task and letting the mind wander.)
"A huge number of people struggle to deliver their best performance during work or school hours they are not naturally suited to," says the study's lead researcher, Dr. Elise Facer-Childs, of the University of Birmingham's Centre for Human Brain Health. "There is a critical need to increase our understanding of these issues in order to minimize health risks in society, as well as maximize productivity."
And whether you're a morning or night person might be dictated by your genes. A separate study published in January looked at the genomes of almost 700,000 people, using data from 23andMe and the U.K. Biobank. It found that there are hundreds of genes that are associated with whether you are a night owl or an early bird. Regions of the genome that the study found to be relevant to whether you're a morning or night person included genes involved in metabolism, the biological clock and genes that function in the retina.
"All times of day are not created equal," Pink previously told CNBC Make It. "Our performance varies considerably over the course of the day, and what task to do at a certain time really depends on the nature of the task. If we look at the evidence, we can be doing the right work, at the right time."
According to Pink, for larks, the morning is the best time to do analytical work that requires focus, and more administrative or routine work should be done later in the day. The reverse is true for night owls.
Chronobiology | Are morning people more productive than night owls? | yes_statement | "morning" "people" are more "productive" than "night" "owls".. productivity is higher in "morning" "people" compared to "night" "owls". | https://www.reveriepage.com/blog/are-night-owls-more-creative-or-less-productive-than-morning-people | Are Night Owls More Creative or Less Productive Than Morning ... | Are Night Owls More Creative or Less Productive Than Morning People?
Do you prefer to sleep late and stay up through the night, or do you wake up with the sun and turn in as soon as it sets? It turns out that whether you're a morning person or a night owl can play an important role when it comes to your creativity and productivity. Whether morning people are better at making the most of melatonin's benefits or night owls have a particular knack for creative problem-solving, it's clear that each lifestyle has advantages and disadvantages.
The Benefits of Being a Morning Person
Research shows that there are numerous benefits to being a morning person. First and foremost, waking up early has been linked to increased productivity and success. Early risers tend to have more time to plan and prioritize their day, allowing them to get a head start on their tasks and accomplish more.
Additionally, early risers tend to have better sleep quality as they establish a consistent sleep schedule and are not as likely to suffer from insomnia or other sleep disorders. Another benefit of being a morning person is engaging in healthy habits, such as exercise and meditation before the day gets busy.
Finally, waking up early allows for more personal time, whether for hobbies, reading, or simply enjoying coffee in peace. Overall, embracing the morning can lead to a happier, more productive, and more fulfilling life.
The Drawbacks of Being a Morning Person
While many people strive to be morning people, there are some drawbacks to this sleep schedule. For one, being a morning person means staying up late and enjoying late-night activities without sacrificing your sleep routine can be challenging. Additionally, since most social activities tend to happen in the evening, morning people may miss opportunities to spend time with friends and family.
On a more practical level, waking up early daily can strain your body and increase tiredness and fatigue. Despite these drawbacks, however, being a morning person has benefits in terms of productivity and efficiency, so it's all about finding the right balance for your lifestyle.
The Benefits of Being a Night Owl
While being a morning person is often touted as the optimal lifestyle, being a night owl has plenty of benefits. For one, night owls tend to have increased creativity and productivity during the late-night hours when most people sleep. Also, night owls often find it easier to focus and work for extended periods without distraction.
Furthermore, being a night owl allows for flexibility in scheduling and can lead to a better work-life balance for individuals who prefer to work during non-traditional hours. Of course, it's crucial to maintain a healthy sleep schedule regardless of one's preferred wake-up time, but there's no denying the numerous advantages of being a night owl.
The Drawbacks of Being a Night Owl
Being a night owl can have its perks, but it also has drawbacks that can negatively impact your health and daily life. Staying up late can interrupt your circadian rhythm, leading to fatigue, difficulty falling asleep, and daytime sleepiness. This can result in decreased productivity at work or school and reduced quality of life overall.
Additionally, studies have shown that night owls are more prone to depression, anxiety, and other mental health issues. Maintaining a consistent sleep schedule and prioritizing restful sleep is crucial, even if staying up late feels like the more attractive choice.
Be Successful Regardless of Your Sleep Schedule
In a world where hustle culture is glorified, and sleep is often sacrificed, it's easy to start believing that success is only achievable if you're willing to burn the midnight oil. But the truth is that success doesn't depend on a perfect sleep schedule. What's more important is being intentional and disciplined with your time.
Whether you're a night owl or an early bird, the key to success is finding and sticking to a routine that works for you. Set specific goals, prioritize tasks, and create a work environment without distractions. With determination and self-discipline, success can be yours, regardless of your sleep schedule.
In either sleep routine, getting a restful night's sleep is vital, and sleep aids can help. With melatonin, you will be on the right track when you make the most of its benefits while minimizing its downsides: use it strategically to improve your sleep routine and enhance your productivity, creativity, and overall quality of life.
With good habits and a healthy sleep schedule, you'll be well-prepared for challenges.
Final Thoughts
When it comes to your sleep schedule, there's no one-size-fits-all solution. What works for some people may not work for others, so the best approach is to experiment with different wake-up times and routines until you find what feels most comfortable and productive. Whether you embrace the morning or stay up late into the night, success is achievable if you find a balance that works for your lifestyle.
Chronobiology | Are morning people more productive than night owls? | yes_statement | "morning" "people" are more "productive" than "night" "owls".. productivity is higher in "morning" "people" compared to "night" "owls". | https://medium.com/@sleepguru/why-are-some-people-more-productive-at-night-26994b6d2564 | Why Are Some People More Productive at Night? | by SleepGuru ... | Why Are Some People More Productive at Night?
There is a big debate between being a night owl and being an early riser. Stereotypes lean toward early risers, claiming they are more productive and more intelligent. However, both mindset and culture are shifting away from the perception that early risers are the better bet. People have started looking at those who live by night in a different light than before. This article lists some reasons why you might want to consider ditching the early bedtime and burning the midnight oil instead.
There is research that links night owls with better creativity, intelligence, and productivity. Some reasons why you might find yourself more productive at night are listed below:
Peace and Quiet: Night-time is more peaceful and quieter than the day time. You will find that the quiet of the night helps you concentrate more without any disturbances making you more productive.
Alertness: People who stay awake at night have much better concentration and mental alertness after waking than people who go to bed early. Hence, along with being more productive during the night hours, they are also productive during the day.
Social: Night owls are generally more social since they have more time to go out for drinks and catch up with friends than those who prefer hitting the sheets early. This makes them happier, boosting their productivity.
Energy Burst: It is commonly believed that people tend to get tired and exhausted by the end of the day. However, studies reveal that some people are known to have energy peaks at night. This makes them ready and geared up to tackle challenging work and activities.
Concentration: Night owls are known to have higher concentration and intelligence levels. In a study of 15 night owls and 16 early risers, the concentration of the early risers fell after 10 hours of wakefulness, while the concentration of the night owls improved after that point. This allows them to be more productive at night.
Flexible Sleeping Pattern: Some people have the innate ability to sleep when they want. These people can take advantage of the peace afforded at night to get a job done rather than in the daytime when there are lots of things seeking your attention.
REM Sleep: This phase occurs in the second half of the night and is associated with the creative aspect of the brain. There are a lot of examples of people getting their breakthrough ideas and writers getting their inspiration when in this phase. People have reported being at their best when they wake up from this cycle and start working on those ideas.
Time Restrictions: Daytime is associated with a multitude of meetings, dates, and appointments. This tends to disrupt your work. However, there are no meetings scheduled at night. You have the unique opportunity to work uninterrupted, boosting your productivity.
Relaxed: Typically, people feel much more relaxed as nighttime approaches. This is because of a dip in the stress hormone cortisol: it remains high during the day, but as evening approaches and night sets in, your body reduces its secretion to prepare for sleep, making you relaxed and hence more productive.
Why Should You Not Work at Night?
You might feel like you are more productive at night and tend to get more done during those hours. However, being a night owl is not all it is cracked up to be. "Night owl" is a term for people who tend to stay awake late at night and catch up on most of their sleep in the early morning hours or through naps during the day. Though night owls have been found to be more productive, creative, and intelligent, this lifestyle can come at a cost to their physical and mental health. Some of the problems that come with burning the midnight oil are:
Night owls are more prone to sleep disorders like insomnia because of their disruptive sleep schedules.
People with a tendency to stay awake later at night are also more prone to problems like anxiety and depression.
Increased risk of heart attacks, behavioral issues, type 2 diabetes, strokes, and a host of other problems.
Compromised immunity because of disrupted sleep.
A feeling of disconnect from the world because of different schedules.
Social isolation, since working at night leaves little time to socialize.
Distance from family because of mismatched sleep-wake patterns.
Q & A Round
Is your brain more active at night?
The time of the day affects your brain activity and efficiency. Research has found that the average brain becomes more active in the evening than at any other time during the day. This influences the brain’s ability and capacity to learn.
Are night people more intelligent than morning people?
Some research has found that night owls are more intelligent than their counterparts. A study by the University of Madrid of a group of 1,000 people concluded that night owls do better at creativity, intelligence, and reasoning tests. This makes them more likely to land high-paying jobs than morning people.
What is the optimal time to learn something new?
Your brain is sharpest between 10 AM and 2 PM, and then again from 4 PM to 10 PM. Brain capacity is known to be at its lowest from 4 AM to 7 AM. Hence, the best times for acquiring new knowledge are from 10 AM to 2 PM and between 4 PM and 10 PM.
Are you more productive at night or in the morning?
It would be wrong to give a blanket statement regarding this. There are theories to support both of these notions. In the end, it depends on your biology. Some people are natural morning people, and some are night owls. While morning people perform better in the mornings, night owls are more productive at night.
Bottom Line
There is no discounting the research that suggests that some people are more productive at night. The trick lies in understanding whether you fall into that category or not. If you try to push yourself into working at night in the hope of reaping its benefits, but your biology fights against it, then you might end up doing yourself more harm than good. Any night-time activity that you might want to indulge in should only be undertaken when your mind and body permit it. | Why Are Some People More Productive at Night?
There is a big debate between night owls and early risers. Stereotypes lean towards early risers, claiming they are more productive and intelligent. However, both mindsets and culture are shifting away from the perception that early risers are the better bet. People have started looking at those who live by night in a different light than before. This article will list some reasons why you might want to consider ditching the early bedtime and burning the midnight oil instead.
There is research that links night owls with better creativity, intelligence, and productivity. Some reasons why you might find yourself more productive at night are listed below:
Peace and Quiet: Night-time is more peaceful and quiet than the daytime. You will find that the quiet of the night helps you concentrate without disturbances, making you more productive.
Alertness: People who stay awake at night have much better concentration and mental alertness after waking than people who go to bed early. Hence, along with being more productive during the night hours, they are also productive during the day.
Social: Night owls are generally more social, since they have more time to go out for drinks and catch up with friends than those who prefer hitting the sheets early. This makes them happier, boosting their productivity.
Energy Burst: It is commonly believed that people tend to get tired and exhausted by the end of the day. However, studies reveal that some people are known to have energy peaks at night. This makes them ready and geared up to tackle challenging work and activities.
Concentration: Night owls are known to have higher concentration and intelligence levels. In a study comparing 15 night owls and 16 early risers, the concentration of the early risers fell after 10 hours of work time, while the concentration of the night owls improved after that point. This allows them to be more productive at night.
Flexible Sleeping Pattern: Some people have the innate ability to sleep whenever they want. These people can take advantage of the peace afforded at night to get work done, rather than working in the daytime when there are lots of things competing for their attention.
| no |
Chronobiology | Are morning people more productive than night owls? | no_statement | "morning" "people" are not more "productive" than "night" "owls".. there is no significant difference in "productivity" between "morning" "people" and "night" "owls". | https://www.cornerstone.edu/blog-post/four-reasons-youre-not-being-productive-and-how-to-improve/ | Four Reasons You're Not Being Productive (and How to Improve ... | Student Experience
Four Reasons You're Not Being Productive (And How to Improve!)
Being productive is key to success, but it isn’t always easy. In fact, at times it can be extremely difficult to stay on task.
There are many distractions that can keep people from staying productive. It can also be difficult to stay focused on something that is uninteresting or tedious.
But why is productivity so important, and why are some people so much better at it than others?
If you’re struggling to stay on task, this article will help you identify common reasons why people get off track and provide four ways to improve your productivity.
Why Is Productivity Important?
This may seem like an easy question to answer, but why are so many people concerned with being productive?
In simple terms, productivity is important because you can get more done. If you’re a productive person, you can do more with less time. That means you can take on harder, more important tasks. It also means that you have more time to do the things you enjoy like hobbies or spending time with friends.
Another benefit of productivity is the feeling of accomplishment. Taking something off your to-do list releases dopamine in your system, which is a natural mood enhancer. You get a boost in your mood every time you check something off.
People love the feeling of finishing something, especially when it is difficult or important to them. If you want to be more productive but you’re not sure where to begin, below we discuss four reasons why people struggle to be productive and four ways to improve it.
4 Reasons People Struggle to Be Productive
Understanding why you’re having trouble staying productive is a good way to figure out a solution. Here are four common things that keep people from staying productive.
1. Technology Distractions
Today's modern societies are blessed and cursed with technology. We're living in an age where technology can help build our dreams more quickly and take them further than ever before.
At the same time, people are easily distracted and consumed with things like social media, responding to texts, constant notification alerts and much more.
For many people, it’s very easy to pass the time scrolling through social media or checking messages. According to a 2018 Nielsen report, American adults spend on average 11 hours a day interacting with media. This includes listening to the radio, watching TV and spending time on a tablet, computer or smartphone.
2. Lack of Direction
For some people, productivity stalls because of a lack of direction. A person may know what their end goal is but they have no idea how to get there.
This often happens when you think a task is difficult or when you’ve never done it before. It can also happen when you’re overwhelmed with a lot of other activities. When your brain is full of too many other thoughts, it can be a struggle to focus on the task at hand and accomplish what you need to.
3. Overly Difficult Work or Boredom With Tasks
Sometimes people struggle to stay productive simply because they’re bored with the work. They may find it uninteresting or tedious which makes it harder to finish.
The same thing can be true with work that is overly difficult or complicated. When a task seems too hard to finish, it is common for people to procrastinate. They tend to find excuses to not start it or to focus on smaller easier tasks instead. This leads to low productivity and a failure to accomplish work that actually needs to get done.
4. Starting Too Late in the Day
Have you ever heard or said the expression ‘there just aren’t enough hours in a day’?
Sometimes this is true. There are times when your to-do list is far longer than one day can hold.
Another reason this happens is that people don’t start working on something until the day is almost over. This may be because they had other activities during the main portion of the day or because they didn’t start their day until late into the morning.
Four Ways to Be More Productive
If you’ve been having a difficult time staying focused and completing your tasks, here are four ways to stay productive.
1. Limit Distractions
Distractions are everywhere. Your distractions may come from your social life or from your favorite game or app on your phone.
It’s important to cultivate a healthy social life, and it’s okay to have downtime to relax. But when it’s time to work, do your best to set those distractions aside.
If your cell phone or social media keep you from focusing on your work, distance yourself from it. Put your cell phone in another room. Keep social media tabs off your computer. To give you a little extra help, try using an app on your phone that sets a limit on your screen time or social media usage for a period of time. Apps like Offtime and Moment track how you’re spending your time on your phone and help you to set limits on the areas you need to.
Whatever your distraction is, give yourself some distance. Find a quiet place where you can focus on your work and not feel the need to check thirty-five messages on your phone.
2. Tackle Harder or More Important Tasks First
Many people pick their order of tasks at random or try to do the things that are easiest first. This is not the most productive way to perform duties. If you’re struggling to stay productive because you can’t pick a direction, try focusing on the harder or more important tasks first.
It’s easy to feel directionless when you have an overwhelming amount of responsibilities. You can’t do them all in one day, so do the ones that are most important.
3. Become an Early Riser
Being an early riser is associated with important health benefits and improved performance in work and school.
Harvard Business Review published a study in 2010 showing that early risers tend to get better grades and be more successful than night owls. Biologist Christoph Randler explained that early risers "tend to get better grades in school, which get them into better colleges, which then lead to better job opportunities. Morning people also anticipate problems and try to minimize them … A number of studies have linked this trait, proactivity, with better job performance, greater career success, and higher wages."
Don't waste your most productive time lying in bed. Go to bed early and rise early, and you'll be able to start the day fresh and ready for your tasks.
4. Take a Break and Relax
Many people try to push through work, eat lunch at their desks and power through the week.
As it turns out, all work and no play is actually less productive.
In 2013, The New York Times published an article called, “Relax! You’ll be More Productive!” In it, author Tony Schwartz explains that people who take more vacations and get more sleep at night are more productive than people who don’t.
Research suggests that people only have the capacity to focus for 90 to 120 minutes before needing a break. Those who do take a break are usually able to return to their work refreshed for another 90 to 120 minutes.
Taking breaks can also help break up the monotony and keep people from becoming bored with a project. If what they’re working on is overly complicated or dense, they may want to take breaks sooner.
Grow in Productivity
With degree programs that are tailored to meet the needs of busy working adults, you can improve your productivity as you work and return to the classroom to achieve your personal and professional goals. Connect with an enrollment specialist today to discover how our convenient programs allow you to learn on your own time.
| You can't do them all in one day, so do the ones that are most important.
3. Become an Early Riser
Being an early riser is associated with important health benefits and improved performance in work and school.
Harvard Business Review published a study in 2010 showing that early risers tend to get better grades and be more successful than night owls. Biologist Christoph Randler explained that early risers "tend to get better grades in school, which get them into better colleges, which then lead to better job opportunities. Morning people also anticipate problems and try to minimize them … A number of studies have linked this trait, proactivity, with better job performance, greater career success, and higher wages."
Don't waste your most productive time lying in bed. Go to bed early and rise early, and you'll be able to start the day fresh and ready for your tasks.
4. Take a Break and Relax
Many people try to push through work, eat lunch at their desks and power through the week.
As it turns out, all work and no play is actually less productive.
In 2013, The New York Times published an article called, “Relax! You’ll be More Productive!” In it, author Tony Schwartz explains that people who take more vacations and get more sleep at night are more productive than people who don’t.
Research suggests that people only have the capacity to focus for 90 to 120 minutes before needing a break. Those who do take a break are usually able to return to their work refreshed for another 90 to 120 minutes.
Taking breaks can also help break up the monotony and keep people from becoming bored with a project. If what they’re working on is overly complicated or dense, they may want to take breaks sooner.
Grow in Productivity
With degree programs that are tailored to meet the needs of busy working adults, you can improve your productivity as you work and return to the classroom to achieve your personal and professional goals. Connect with an enrollment specialist today to discover how our convenient programs allow you to learn on your own time.
| yes |
Malacology | Are most octopuses venomous? | yes_statement | most "octopuses" are "venomous".. the majority of "octopuses" possess "venom". | https://en.wikipedia.org/wiki/Octopus | Octopus - Wikipedia | An octopus (PL: octopuses or octopodes[a]) is a soft-bodied, eight-limbed mollusc of the orderOctopoda (/ɒkˈtɒpədə/, ok-TOP-ə-də[3]). The order consists of some 300 species and is grouped within the class Cephalopoda with squids, cuttlefish, and nautiloids. Like other cephalopods, an octopus is bilaterally symmetric with two eyes and a beaked mouth at the center point of the eight limbs.[b] The soft body can radically alter its shape, enabling octopuses to squeeze through small gaps. They trail their eight appendages behind them as they swim. The siphon is used both for respiration and for locomotion, by expelling a jet of water. Octopuses have a complex nervous system and excellent sight, and are among the most intelligent and behaviourally diverse of all invertebrates.
Octopuses inhabit various regions of the ocean, including coral reefs, pelagic waters, and the seabed; some live in the intertidal zone and others at abyssal depths. Most species grow quickly, mature early, and are short-lived. In most species, the male uses a specially adapted arm to deliver a bundle of sperm directly into the female's mantle cavity, after which he becomes senescent and dies, while the female deposits fertilised eggs in a den and cares for them until they hatch, after which she also dies. Strategies to defend themselves against predators include the expulsion of ink, the use of camouflage and threat displays, the ability to jet quickly through the water and hide, and even deceit. All octopuses are venomous, but only the blue-ringed octopuses are known to be deadly to humans.
Historically, the first plural to commonly appear in English language sources, in the early 19th century, is the latinate form "octopi",[12] followed by the English form "octopuses" in the latter half of the same century. The Hellenic plural is roughly contemporary in usage, although it is also the rarest.[13]
Fowler's Modern English Usage states that the only acceptable plural in English is "octopuses", that "octopi" is misconceived, and "octopodes" pedantic;[14][15][16] the last is nonetheless used frequently enough to be acknowledged by the descriptivistMerriam-Webster 11th Collegiate Dictionary and Webster's New World College Dictionary. The Oxford English Dictionary lists "octopuses", "octopi", and "octopodes", in that order, reflecting frequency of use, calling "octopodes" rare and noting that "octopi" is based on a misunderstanding.[17] The New Oxford American Dictionary (3rd Edition, 2010) lists "octopuses" as the only acceptable pluralisation, and indicates that "octopodes" is still occasionally used, but that "octopi" is incorrect.[18]
Anatomy and physiology
Size
The giant Pacific octopus (Enteroctopus dofleini) is often cited as the largest known octopus species. Adults usually weigh around 15 kg (33 lb), with an arm span of up to 4.3 m (14 ft).[19] The largest specimen of this species to be scientifically documented was an animal with a live mass of 71 kg (157 lb).[20] Much larger sizes have been claimed for the giant Pacific octopus:[21] one specimen was recorded as 272 kg (600 lb) with an arm span of 9 m (30 ft).[22] A carcass of the seven-arm octopus, Haliphron atlanticus, weighed 61 kg (134 lb) and was estimated to have had a live mass of 75 kg (165 lb).[23][24] The smallest species is Octopus wolfi, which is around 2.5 cm (1 in) and weighs less than 1 g (0.035 oz).[25]
External characteristics
The octopus is bilaterally symmetrical along its dorso-ventral (back to belly) axis; the head and foot are at one end of an elongated body and function as the anterior (front) of the animal. The head includes the mouth and brain. The foot has evolved into a set of flexible, prehensile appendages, known as "arms", that surround the mouth and are attached to each other near their base by a webbed structure.[26] The arms can be described based on side and sequence position (such as L1, R1, L2, R2) and divided into four pairs.[27][26] The two rear appendages are generally used to walk on the sea floor, while the other six are used to forage for food.[28] The bulbous and hollow mantle is fused to the back of the head and is known as the visceral hump; it contains most of the vital organs.[29][30] The mantle cavity has muscular walls and contains the gills; it is connected to the exterior by a funnel or siphon.[26][31] The mouth of an octopus, located underneath the arms, has a sharp hard beak.[30]
The skin consists of a thin outer epidermis with mucous cells and sensory cells, and a connective tissue dermis consisting largely of collagen fibres and various cells allowing colour change.[26] Most of the body is made of soft tissue allowing it to lengthen, contract, and contort itself. The octopus can squeeze through tiny gaps; even the larger species can pass through an opening close to 2.5 cm (1 in) in diameter.[30] Lacking skeletal support, the arms work as muscular hydrostats and contain longitudinal, transverse and circular muscles around a central axial nerve. They can extend and contract, twist to left or right, bend at any place in any direction or be held rigid.[32][33]
The interior surfaces of the arms are covered with circular, adhesive suckers. The suckers allow the octopus to anchor itself or to manipulate objects. Each sucker is usually circular and bowl-like and has two distinct parts: an outer shallow cavity called an infundibulum and a central hollow cavity called an acetabulum, both of which are thick muscles covered in a protective chitinous cuticle. When a sucker attaches to a surface, the orifice between the two structures is sealed. The infundibulum provides adhesion while the acetabulum remains free, and muscle contractions allow for attachment and detachment.[34][35] Each of the eight arms senses and responds to light, allowing the octopus to control the limbs even if its head is obscured.[36]
The eyes of the octopus are large and at the top of the head. They are similar in structure to those of a fish, and are enclosed in a cartilaginous capsule fused to the cranium. The cornea is formed from a translucent epidermal layer; the slit-shaped pupil forms a hole in the iris just behind the cornea. The lens is suspended behind the pupil; photoreceptive retinal cells cover the back of the eye. The pupil can be adjusted in size; a retinal pigment screens incident light in bright conditions.[26]
Some species differ in form from the typical octopus body shape. Basal species, the Cirrina, have stout gelatinous bodies with webbing that reaches near the tip of their arms, and two large fins above the eyes, supported by an internal shell. Fleshy papillae or cirri are found along the bottom of the arms, and the eyes are more developed.[37][38]
Circulatory system
Octopuses have a closed circulatory system, in which the blood remains inside blood vessels. Octopuses have three hearts; a systemic or main heart that circulates blood around the body and two branchial or gill hearts that pump it through each of the two gills. The systemic heart is inactive when the animal is swimming and thus it tires quickly and prefers to crawl.[39][40] Octopus blood contains the copper-rich protein haemocyanin to transport oxygen. This makes the blood very viscous and it requires considerable pressure to pump it around the body; octopuses' blood pressures can exceed 75 mmHg (10 kPa).[39][40][41] In cold conditions with low oxygen levels, haemocyanin transports oxygen more efficiently than haemoglobin. The haemocyanin is dissolved in the plasma instead of being carried within blood cells, and gives the blood a bluish colour.[39][40]
The systemic heart has muscular contractile walls and consists of a single ventricle and two atria, one for each side of the body. The blood vessels consist of arteries, capillaries and veins and are lined with a cellular endothelium which is quite unlike that of most other invertebrates. The blood circulates through the aorta and capillary system, to the vena cavae, after which the blood is pumped through the gills by the branchial hearts and back to the main heart. Much of the venous system is contractile, which helps circulate the blood.[26]
Respiration
Octopus with open siphon. The siphon is used for respiration, waste disposal and discharging ink.
Respiration involves drawing water into the mantle cavity through an aperture, passing it through the gills, and expelling it through the siphon. The ingress of water is achieved by contraction of radial muscles in the mantle wall, and flapper valves shut when strong circular muscles force the water out through the siphon.[42] Extensive connective tissue lattices support the respiratory muscles and allow them to expand the respiratory chamber.[43] The lamella structure of the gills allows for a high oxygen uptake, up to 65% in water at 20 °C (68 °F).[44] Water flow over the gills correlates with locomotion, and an octopus can propel its body when it expels water out of its siphon.[43][41]
The thin skin of the octopus absorbs additional oxygen. When resting, around 41% of an octopus's oxygen absorption is through the skin. This decreases to 33% when it swims, as more water flows over the gills; skin oxygen uptake also increases. When it is resting after a meal, absorption through the skin can drop to 3% of its total oxygen uptake.[45]
Digestion and excretion
The digestive system of the octopus begins with the buccal mass which consists of the mouth with its chitinous beak, the pharynx, radula and salivary glands.[46] The radula is a spiked, muscular tongue-like organ with multiple rows of tiny teeth.[30] Food is broken down and is forced into the oesophagus by two lateral extensions of the esophageal side walls in addition to the radula. From there it is transferred to the gastrointestinal tract, which is mostly suspended from the roof of the mantle cavity by numerous membranes. The tract consists of a crop, where the food is stored; a stomach, where food is ground down; a caecum where the now sludgy food is sorted into fluids and particles and which plays an important role in absorption; the digestive gland, where liver cells break down and absorb the fluid and become "brown bodies"; and the intestine, where the accumulated waste is turned into faecal ropes by secretions and blown out of the funnel via the rectum.[46]
During osmoregulation, fluid is added to the pericardia of the branchial hearts. The octopus has two nephridia (equivalent to vertebrate kidneys) which are associated with the branchial hearts; these and their associated ducts connect the pericardial cavities with the mantle cavity. Before reaching the branchial heart, each branch of the vena cava expands to form renal appendages which are in direct contact with the thin-walled nephridium. The urine is first formed in the pericardial cavity, and is modified by excretion, chiefly of ammonia, and selective absorption from the renal appendages, as it is passed along the associated duct and through the nephridiopore into the mantle cavity.[26][47]
A common octopus (Octopus vulgaris) moving around. Its nervous system allows the arms to move with some autonomy.
Nervous system and senses
Octopuses (along with cuttlefish) have the highest brain-to-body mass ratios of all invertebrates;[48] this is greater than that of many vertebrates.[49] Octopuses have the same jumping genes that are active in the human brain, implying an evolutionary convergence at molecular level.[50] The nervous system is complex, only part of which is localised in its brain, which is contained in a cartilaginous capsule.[51] Two-thirds of an octopus's neurons are in the nerve cords of its arms; these are capable of complex reflex actions without input from the brain.[52] Unlike vertebrates, the complex motor skills of octopuses are not organised in their brains via internal somatotopic maps of their bodies.[53] The nervous system of cephalopods is the most complex of all invertebrates.[54][55] The giant nerve fibers of the cephalopod mantle have been widely used for many years as experimental material in neurophysiology; their large diameter (due to lack of myelination) makes them relatively easy to study compared with other animals.[56]
Like other cephalopods, octopuses have camera-like eyes,[48] and can distinguish the polarisation of light. Colour vision appears to vary from species to species, for example being present in O. aegina but absent in O. vulgaris.[57]Opsins in the skin respond to different wavelengths of light and help the animals choose a coloration that camouflages them; the chromatophores in the skin can respond to light independently of the eyes.[58][59]
An alternative hypothesis is that cephalopod eyes in species which only have a single photoreceptor protein may use chromatic aberration to turn monochromatic vision into colour vision, though this sacrifices image quality. This would explain pupils shaped like the letter U, the letter W, or a dumbbell, as well as explaining the need for colourful mating displays.[60]
Attached to the brain are two organs called statocysts (sac-like structures containing a mineralised mass and sensitive hairs), that allow the octopus to sense the orientation of its body. They provide information on the position of the body relative to gravity and can detect angular acceleration. An autonomic response keeps the octopus's eyes oriented so that the pupil is always horizontal.[26] Octopuses may also use the statocyst to hear sound. The common octopus can hear sounds between 400 Hz and 1000 Hz, and hears best at 600 Hz.[61]
Octopuses have an excellent somatosensory system. Their suction cups are equipped with chemoreceptors so they can taste what they touch. Octopus arms move easily because the sensors recognise octopus skin and prevent self-attachment.[62] Octopuses appear to have poor proprioceptive sense and must observe the arms visually to keep track of their position.[63][64]
Ink sac
The ink sac of an octopus is located under the digestive gland. A gland attached to the sac produces the ink, and the sac stores it. The sac is close enough to the funnel for the octopus to shoot out the ink with a water jet. Before it leaves the funnel, the ink passes through glands which mix it with mucus, creating a thick, dark blob which allows the animal to escape from a predator.[65] The main pigment in the ink is melanin, which gives it its black colour.[66] Cirrate octopuses usually lack the ink sac.[37]
Life cycle
Reproduction
Octopuses are gonochoric and have a single, posteriorly-located gonad which is associated with the coelom. The testis in males and the ovary in females bulges into the gonocoel and the gametes are released here. The gonocoel is connected by the gonoduct to the mantle cavity, which it enters at the gonopore.[26] An optic gland creates hormones that cause the octopus to mature and age and stimulate gamete production. The gland may be triggered by environmental conditions such as temperature, light and nutrition, which thus control the timing of reproduction and lifespan.[67][68]
When octopuses reproduce, the male uses a specialised arm called a hectocotylus to transfer spermatophores (packets of sperm) from the terminal organ of the reproductive tract (the cephalopod "penis") into the female's mantle cavity.[69] The hectocotylus in benthic octopuses is usually the third right arm, which has a spoon-shaped depression and modified suckers near the tip. In most species, fertilisation occurs in the mantle cavity.[26]
The reproduction of octopuses has been studied in only a few species. One such species is the giant Pacific octopus, in which courtship is accompanied, especially in the male, by changes in skin texture and colour. The male may cling to the top or side of the female or position himself beside her. There is some speculation that he may first use his hectocotylus to remove any spermatophore or sperm already present in the female. He picks up a spermatophore from his spermatophoric sac with the hectocotylus, inserts it into the female's mantle cavity, and deposits it in the correct location for the species, which in the giant Pacific octopus is the opening of the oviduct. Two spermatophores are transferred in this way; these are about one metre (yard) long, and the empty ends may protrude from the female's mantle.[70] A complex hydraulic mechanism releases the sperm from the spermatophore, and it is stored internally by the female.[26]
Female giant Pacific octopus guarding strings of eggs
About forty days after mating, the female giant Pacific octopus attaches strings of small fertilised eggs (10,000 to 70,000 in total) to rocks in a crevice or under an overhang. Here she guards and cares for them for about five months (160 days) until they hatch.[70] In colder waters, such as those off Alaska, it may take up to ten months for the eggs to completely develop.[71]: 74 The female aerates them and keeps them clean; if left untended, many will die.[72] She does not feed during this time and dies soon after. Males become senescent and die a few weeks after mating.[67]
The eggs have large yolks; cleavage (division) is superficial and a germinal disc develops at the pole. During gastrulation, the margins of this grow down and surround the yolk, forming a yolk sac, which eventually forms part of the gut. The dorsal side of the disc grows upward and forms the embryo, with a shell gland on its dorsal surface, gills, mantle and eyes. The arms and funnel develop as part of the foot on the ventral side of the disc. The arms later migrate upward, coming to form a ring around the funnel and mouth. The yolk is gradually absorbed as the embryo develops.[26]
In the argonaut (paper nautilus), the female secretes a fine, fluted, papery shell in which the eggs are deposited and in which she also resides while floating in mid-ocean. In this she broods the young, and it also serves as a buoyancy aid allowing her to adjust her depth. The male argonaut is minute by comparison and has no shell.[74]
Lifespan
Octopuses have a relatively short lifespan; some species live for as little as six months. The Giant Pacific octopus, one of the two largest species of octopus, may live for as much as five years. Octopus lifespan is limited by reproduction.[75] For most octopuses the last stage of their life is called senescence. It is the breakdown of cellular function without repair or replacement. For males, this typically begins after mating. Senescence may last from weeks to a few months, at most. For females, it begins when they lay a clutch of eggs. Females will spend all their time aerating and protecting their eggs until they are ready to hatch. During senescence, an octopus does not feed and quickly weakens. Lesions begin to form and the octopus literally degenerates. Unable to defend themselves, octopuses often fall prey to predators.[76] The larger Pacific striped octopus (LPSO) is an exception, as it can reproduce repeatedly over a life of around two years.[75]
Octopus reproductive organs mature due to the hormonal influence of the optic gland but result in the inactivation of their digestive glands. Unable to feed, the octopus typically dies of starvation.[76] Experimental removal of both optic glands after spawning was found to result in the cessation of broodiness, the resumption of feeding, increased growth, and greatly extended lifespans. It has been proposed that the naturally short lifespan may be functional to prevent rapid overpopulation.[77]
Behaviour and ecology
Most species are solitary when not mating,[80] though a few are known to occur in high densities and with frequent interactions, signaling, mate defending and eviction of individuals from dens. This is likely the result of abundant food supplies combined with limited den sites.[81] The LPSO has been described as particularly social, living in groups of up to 40 individuals.[82][83] Octopuses hide in dens, which are typically crevices in rocky outcrops or other hard structures, though some species burrow into sand or mud. Octopuses are not territorial but generally remain in a home range; they may leave in search of food. They can navigate back to a den without having to retrace their outward route.[84] They are not migratory.[85]
Octopuses bring captured prey to the den, where they can eat it safely. Sometimes the octopus catches more prey than it can eat, and the den is often surrounded by a midden of dead and uneaten food items. Other creatures, such as fish, crabs, molluscs and echinoderms, often share the den with the octopus, either because they have arrived as scavengers, or because they have survived capture.[86] On rare occasions, octopuses hunt cooperatively with other species, with fish as their partners. They regulate the species composition of the hunting group—and the behavior of their partners—by punching them.[87]
A benthic (bottom-dwelling) octopus typically moves among the rocks and feels through the crevices. The creature may make a jet-propelled pounce on prey and pull it toward the mouth with its arms, the suckers restraining it. Small prey may be completely trapped by the webbed structure. Octopuses usually inject crustaceans like crabs with a paralysing saliva then dismember them with their beaks.[88][90] Octopuses feed on shelled molluscs either by forcing the valves apart, or by drilling a hole in the shell to inject a nerve toxin.[91][90] It used to be thought that the hole was drilled by the radula, but it has now been shown that minute teeth at the tip of the salivary papilla are involved, and an enzyme in the toxic saliva is used to dissolve the calcium carbonate of the shell. It takes about three hours for O. vulgaris to create a 0.6 mm (0.024 in) hole. Once the shell is penetrated, the prey dies almost instantaneously, its muscles relax, and the soft tissues are easy for the octopus to remove. Crabs may also be treated in this way; tough-shelled species are more likely to be drilled, and soft-shelled crabs are torn apart.[92]
Some species have other modes of feeding. Grimpoteuthis has a reduced or non-existent radula and swallows prey whole.[37] In the deep-sea genus Stauroteuthis, some of the muscle cells that control the suckers in most species have been replaced with photophores which are believed to fool prey by directing them to the mouth, making them one of the few bioluminescent octopuses.[93]
Locomotion
Octopuses swim with their arms trailing behind.
Octopuses mainly move about by relatively slow crawling with some swimming in a head-first position. Jet propulsion or backward swimming, is their fastest means of locomotion, followed by swimming and crawling.[94] When in no hurry, they usually crawl on either solid or soft surfaces. Several arms are extended forward, some of the suckers adhere to the substrate and the animal hauls itself forward with its powerful arm muscles, while other arms may push rather than pull. As progress is made, other arms move ahead to repeat these actions and the original suckers detach. During crawling, the heart rate nearly doubles, and the animal requires ten or fifteen minutes to recover from relatively minor exercise.[32]
Most octopuses swim by expelling a jet of water from the mantle through the siphon into the sea. The physical principle behind this is that the force required to accelerate the water through the orifice produces a reaction that propels the octopus in the opposite direction.[95] The direction of travel depends on the orientation of the siphon. When swimming, the head is at the front and the siphon is pointed backward but, when jetting, the visceral hump leads, the siphon points at the head and the arms trail behind, with the animal presenting a fusiform appearance. In an alternative method of swimming, some species flatten themselves dorso-ventrally, and swim with the arms held out sideways, and this may provide lift and be faster than normal swimming. Jetting is used to escape from danger, but is physiologically inefficient, requiring a mantle pressure so high as to stop the heart from beating, resulting in a progressive oxygen deficit.[94]
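A minimal sketch of the momentum balance behind this reaction force (not from the source; the symbols are introduced here purely for illustration) treats the thrust F as the expelled mass flow rate times the jet speed:

$$ F \;=\; \dot{m}\, v_{\mathrm{jet}} \;=\; \rho A v_{\mathrm{jet}}^{2} $$

where ρ is the density of seawater, A is the cross-sectional area of the siphon opening, and v_jet is the speed of the expelled water relative to the animal. Under this simple model, doubling the jet speed roughly quadruples the thrust, which fits with the very high mantle pressures noted above.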
Cirrate octopuses cannot produce jet propulsion and rely on their fins for swimming. They have neutral buoyancy and drift through the water with the fins extended. They can also contract their arms and surrounding web to make sudden moves known as "take-offs". Another form of locomotion is "pumping", which involves symmetrical contractions of muscles in their webs producing peristaltic waves. This moves the body slowly.[37]
In 2005, Abdopus aculeatus and the veined octopus (Amphioctopus marginatus) were found to walk on two arms, while at the same time mimicking plant matter.[96] This form of locomotion allows these octopuses to move quickly away from a potential predator without being recognised.[94] Some species of octopus can crawl out of the water briefly, which they may do between tide pools.[97][98] "Stilt walking" is used by the veined octopus when carrying stacked coconut shells. The octopus carries the shells underneath it with two arms, and progresses with an ungainly gait supported by its remaining arms held rigid.[99]
In laboratory experiments, octopuses can readily be trained to distinguish between different shapes and patterns. They have been reported to practise observational learning,[102] although the validity of these findings is contested.[100] Octopuses have also been observed in what has been described as play: repeatedly releasing bottles or toys into a circular current in their aquariums and then catching them.[103] Octopuses often break out of their aquariums and sometimes into others in search of food.[97][104][105] The veined octopus collects discarded coconut shells, then uses them to build a shelter, an example of tool use.[99]
Camouflage and colour change
Video of Octopus cyanea moving and changing its colour, shape and texture
Octopuses use camouflage when hunting and to avoid predators. To do this they use specialised skin cells which change the appearance of the skin by adjusting its colour, opacity, or reflectivity. Chromatophores contain yellow, orange, red, brown, or black pigments; most species have three of these colours, while some have two or four. Other colour-changing cells are reflective iridophores and white leucophores.[106] This colour-changing ability is also used to communicate with or warn other octopuses.[107]
Octopuses can create distracting patterns with waves of dark coloration across the body, a display known as the "passing cloud". Muscles in the skin change the texture of the mantle to achieve greater camouflage. In some species, the mantle can take on the spiky appearance of algae; in others, skin anatomy is limited to relatively uniform shades of one colour with limited skin texture. Octopuses that are diurnal and live in shallow water have evolved more complex skin than their nocturnal and deep-sea counterparts.[107]
A "moving rock" trick involves the octopus mimicking a rock and then inching across the open space with a speed matching that of the surrounding water.[108]
Defence
Aside from humans, octopuses may be preyed on by fishes, seabirds, sea otters, pinnipeds, cetaceans, and other cephalopods.[109] Octopuses typically hide or disguise themselves by camouflage and mimicry; some have conspicuous warning coloration (aposematism) or deimatic behaviour.[107] An octopus may spend 40% of its time hidden away in its den. When the octopus is approached, it may extend an arm to investigate. 66% of Enteroctopus dofleini in one study had scars, with 50% having amputated arms.[109] The blue rings of the highly venomous blue-ringed octopus are hidden in muscular skin folds which contract when the animal is threatened, exposing the iridescent warning.[110] The Atlantic white-spotted octopus (Callistoctopus macropus) turns bright brownish red with oval white spots all over in a high contrast display.[111] Displays are often reinforced by stretching out the animal's arms, fins or web to make it look as big and threatening as possible.[112]
Once they have been seen by a predator, they commonly try to escape but can also use distraction with an ink cloud ejected from the ink sac. The ink is thought to reduce the efficiency of olfactory organs, which would aid evasion from predators that employ smell for hunting, such as sharks. Ink clouds of some species might act as pseudomorphs, or decoys that the predator attacks instead.[113]
When under attack, some octopuses can perform arm autotomy, in a manner similar to the way skinks and other lizards detach their tails. The crawling arm may distract would-be predators. Such severed arms remain sensitive to stimuli and move away from unpleasant sensations.[114] Octopuses can replace lost limbs.[115]
Some octopuses, such as the mimic octopus, can combine their highly flexible bodies with their colour-changing ability to mimic other, more dangerous animals, such as lionfish, sea snakes, and eels.[116][117]
Pathogens and parasites
The diseases and parasites that affect octopuses have been little studied, but cephalopods are known to be the intermediate or final hosts of various parasitic cestodes, nematodes and copepods; 150 species of protistan and metazoan parasites have been recognised.[118] The Dicyemidae are a family of tiny worms that are found in the renal appendages of many species;[119] it is unclear whether they are parasitic or endosymbionts. Coccidians in the genus Aggregata living in the gut cause severe disease to the host. Octopuses have an innate immune system; their haemocytes respond to infection by phagocytosis, encapsulation, infiltration, or cytotoxic activities to destroy or isolate the pathogens. The haemocytes play an important role in the recognition and elimination of foreign bodies and wound repair. Captive animals are more susceptible to pathogens than wild ones.[120] A gram-negative bacterium, Vibrio lentus, can cause skin lesions, exposure of muscle and sometimes death.[121]
Evolution
The scientific name Octopoda was first coined and given as the order of octopuses in 1818 by English biologist William Elford Leach,[122] who classified them as Octopoida the previous year.[2] The Octopoda consists of around 300 known species[123] and were historically divided into two suborders, the Incirrina and the Cirrina.[38] More recent evidence suggests Cirrina is merely the most basal species, not a unique clade.[124] The incirrate octopuses (the majority of species) lack the cirri and paired swimming fins of the cirrates.[38] In addition, the internal shell of incirrates is either present as a pair of stylets or absent altogether.[125]
Fossil history and phylogeny
The Cephalopoda evolved from a mollusc resembling the Monoplacophora in the Cambrian some 530 million years ago. The Coleoidea diverged from the nautiloids in the Devonian some 416 million years ago. In turn, the coleoids (including the squids and octopods) brought their shells inside the body and some 276 million years ago, during the Permian, split into the Vampyropoda and the Decabrachia.[127] The octopuses arose from the Muensterelloidea within the Vampyropoda in the Jurassic. The earliest octopus likely lived near the sea floor (benthic to demersal) in shallow marine environments.[127][128][126] Octopuses consist mostly of soft tissue, and so fossils are relatively rare. As soft-bodied cephalopods, they lack the external shell of most molluscs, including other cephalopods like the nautiloids and the extinct Ammonoidea.[129] They have eight limbs like other Coleoidea, but lack the extra specialised feeding appendages known as tentacles which are longer and thinner with suckers only at their club-like ends.[130] The vampire squid (Vampyroteuthis) also lacks tentacles but has sensory filaments.[131]
The molecular analysis of the octopods shows that the suborder Cirrina (Cirromorphida) and the superfamily Argonautoidea are paraphyletic and are broken up; these names are shown in quotation marks and italics on the cladogram.
RNA editing and the genome
Octopuses, like other coleoid cephalopods but unlike more basal cephalopods or other molluscs, are capable of greater RNA editing, changing the nucleic acid sequence of the primary transcript of RNA molecules, than any other organisms. Editing is concentrated in the nervous system, and affects proteins involved in neural excitability and neuronal morphology. More than 60% of RNA transcripts for coleoid brains are recoded by editing, compared to less than 1% for a human or fruit fly. Coleoids rely mostly on ADAR enzymes for RNA editing, which requires large double-stranded RNA structures to flank the editing sites. Both the structures and editing sites are conserved in the coleoid genome and the mutation rates for the sites are severely hampered. Hence, greater transcriptome plasticity has come at the cost of slower genome evolution.[133][134]
The octopus genome is unremarkably bilaterian except for large developments of two gene families: protocadherins, which regulate the development of neurons; and the C2H2 zinc-finger transcription factors. Many genes specific to cephalopods are expressed in the animals' skin, suckers, and nervous system.[48]
Relationship to humans
In art, literature, and mythology
Ancient seafaring people were aware of the octopus, as evidenced by artworks and designs. For example, a stone carving found in the archaeological recovery from Bronze Age Minoan Crete at Knossos (1900–1100 BC) depicts a fisherman carrying an octopus.[135] The terrifyingly powerful Gorgon of Greek mythology may have been inspired by the octopus or squid, the octopus itself representing the severed head of Medusa, the beak as the protruding tongue and fangs, and its tentacles as the snakes.[136] The Kraken are legendary sea monsters of giant proportions said to dwell off the coasts of Norway and Greenland, usually portrayed in art as giant octopuses attacking ships. Linnaeus included it in the first edition of his 1735 Systema Naturae.[137][138] One translation of the Hawaiian creation myth the Kumulipo suggests that the octopus is the lone survivor of a previous age.[139][140][141] The Akkorokamui is a gigantic octopus-like monster from Ainu folklore, worshipped in Shinto.[142]
Danger to humans
Octopuses generally avoid humans, but incidents have been verified. For example, a 2.4-metre (8 ft) Pacific octopus, said to be nearly perfectly camouflaged, "lunged" at a diver and "wrangled" over his camera before it let go. Another diver recorded the encounter on video.[151] All species are venomous, but only blue-ringed octopuses have venom that is lethal to humans.[152] Bites are reported each year across the animals' range from Australia to the eastern Indo-Pacific Ocean. They bite only when provoked or accidentally stepped upon; bites are small and usually painless. The venom appears to be able to penetrate the skin without a puncture, given prolonged contact. It contains tetrodotoxin, which causes paralysis by blocking the transmission of nerve impulses to the muscles. This causes death by respiratory failure leading to cerebral anoxia. No antidote is known, but if breathing can be kept going artificially, patients recover within 24 hours.[153][154] Bites have been recorded from captive octopuses of other species; they leave swellings which disappear in a day or two.[155]
As a food source
Octopus fisheries exist around the world with total catches varying between 245,320 and 322,999 metric tons from 1986 to 1995.[156] The world catch peaked in 2007 at 380,000 tons, and had fallen by a tenth by 2012.[157] Methods to capture octopuses include pots, traps, trawls, snares, drift fishing, spearing, hooking and hand collection.[156] Octopuses have a food conversion efficiency greater than that of chickens, making octopus aquaculture a possibility.[158] Octopuses compete with human fisheries targeting other species, and even rob traps and nets for their catch; they may, themselves, be caught as bycatch if they cannot get away.[159]
Octopus is eaten in many cultures, such as those on the Mediterranean and Asian coasts.[160] The arms and other body parts are prepared in ways that vary by species and geography. Live octopuses or their wriggling pieces are consumed as ikizukuri in Japanese cuisine and san-nakji in Korean cuisine.[161][162] If not prepared properly, however, the severed arms can still choke the diner with their suction cups, causing at least one death in 2010.[163] Animal welfare groups have objected to the live consumption of octopuses on the basis that they can experience pain.[164]
In science and technology
In classical Greece, Aristotle (384–322 BC) commented on the colour-changing abilities of the octopus, both for camouflage and for signalling, in his Historia animalium: "The octopus ... seeks its prey by so changing its colour as to render it like the colour of the stones adjacent to it; it does so also when alarmed."[165] Aristotle noted that the octopus had a hectocotyl arm and suggested it might be used in sexual reproduction. This claim was widely disbelieved until the 19th century. It was described in 1829 by the French zoologist Georges Cuvier, who supposed it to be a parasitic worm, naming it as a new species, Hectocotylus octopodis.[166][167] Other zoologists thought it a spermatophore; the German zoologist Heinrich Müller believed it was "designed" to detach during copulation. In 1856 the Danish zoologist Japetus Steenstrup demonstrated that it is used to transfer sperm, and only rarely detaches.[168]
Octopuses offer many possibilities in biological research, including their ability to regenerate limbs, change the colour of their skin, behave intelligently with a distributed nervous system, and make use of 168 kinds of protocadherins (humans have 58), the proteins that guide the connections neurons make with each other. The California two-spot octopus has had its genome sequenced, allowing exploration of its molecular adaptations.[48] Having independently evolved mammal-like intelligence, octopuses have been compared by the philosopher Peter Godfrey-Smith, who has studied the nature of intelligence,[170] to hypothetical intelligent extraterrestrials.[171] Their problem-solving skills, along with their mobility and lack of rigid structure enable them to escape from supposedly secure tanks in laboratories and public aquariums.[172]
Due to their intelligence, octopuses are listed in some countries as experimental animals on which surgery may not be performed without anesthesia, a protection usually extended only to vertebrates. In the UK from 1993 to 2012, the common octopus (Octopus vulgaris) was the only invertebrate protected under the Animals (Scientific Procedures) Act 1986.[173] In 2012, this legislation was extended to include all cephalopods[174] in accordance with a general EU directive.[175]
Some robotics research is exploring biomimicry of octopus features. Octopus arms can move and sense largely autonomously without intervention from the animal's central nervous system. In 2015 a team in Italy built soft-bodied robots able to crawl and swim, requiring only minimal computation.[176][177] In 2017 a German company made an arm with a soft pneumatically controlled silicone gripper fitted with two rows of suckers. It is able to grasp objects such as a metal tube, a magazine, or a ball, and to fill a glass by pouring water from a bottle.[178]
Notes
^"Tentacle" is a common umbrella term for cephalopod limbs. In teuthological context, octopuses have "arms" with suckers along their entire length while "tentacle" is reserved for appendages with suckers only near the end of the limb, which octopuses lack.[4]
^Fowler, Henry Watson (1994). A Dictionary of Modern English Usage. p. 316. ISBN978-1-85326-318-7. In Latin plurals there are some traps for non-Latinists; the termination of the singular is no sure guide to that of the plural. Most Latin words in -us have plural in -i, but not all, & so zeal not according to knowledge issues in such oddities as...octopi...; as caution the following list may be useful:...octopus, -podes
^Butterfield, Jeremy (2015). Fowler's Dictionary of Modern English Usage. Oxford University Press. ISBN978-0-19-174453-2. The only correct plural in English is octopuses. The Greek original is ὀκτώπους, -ποδ- (which would lead to a pedantic English pl. form octopodes). The pl. form octopi, which is occasionally heard (mostly in jocular use), though based on modL octopus, is misconceived
^Scheel, D.; et al. (2017). "A second site occupied by Octopus tetricus at high densities, with notes on their ecology and behavior". Marine and Freshwater Behaviour and Physiology. 50 (4): 285–291. doi:10.1080/10236244.2017.1369851. S2CID89738642.
^Rodaniche, Arcadio F. (1991). "Notes on the behavior of the Larger Pacific Striped Octopus, an undescribed species of the genus Octopus". Bulletin of Marine Science. 49: 667.
^Lee, Henry (1875). "V: The octopus out of water". Aquarium Notes – The Octopus; or, the "devil-fish" of fiction and of fact. London: Chapman and Hall. pp. 38–39. OCLC1544491. Retrieved 11 September 2015. The marauding rascal had occasionally issued from the water in his tank, and clambered up the rocks, and over the wall into the next one; there he had helped himself to a young lump-fish, and, having devoured it, returned demurely to his own quarters by the same route, with well-filled stomach and contented mind. | Most species grow quickly, mature early, and are short-lived. In most species, the male uses a specially adapted arm to deliver a bundle of sperm directly into the female's mantle cavity, after which he becomes senescent and dies, while the female deposits fertilised eggs in a den and cares for them until they hatch, after which she also dies. Strategies to defend themselves against predators include the expulsion of ink, the use of camouflage and threat displays, the ability to jet quickly through the water and hide, and even deceit. All octopuses are venomous, but only the blue-ringed octopuses are known to be deadly to humans.
Historically, the first plural to commonly appear in English language sources, in the early 19th century, is the latinate form "octopi",[12] followed by the English form "octopuses" in the latter half of the same century. The Hellenic plural is roughly contemporary in usage, although it is also the rarest.[13]
Fowler's Modern English Usage states that the only acceptable plural in English is "octopuses", that "octopi" is misconceived, and "octopodes" pedantic;[14][15][16] the last is nonetheless used frequently enough to be acknowledged by the descriptivistMerriam-Webster 11th Collegiate Dictionary and Webster's New World College Dictionary. The Oxford English Dictionary lists "octopuses", "octopi", and "octopodes", in that order, reflecting frequency of use, calling "octopodes" rare and noting that "octopi" is based on a misunderstanding.[17] The New Oxford American Dictionary (3rd Edition, 2010) lists "octopuses" as the only acceptable pluralisation, and indicates that "octopodes" is still occasionally used, but that "octopi" is incorrect.[18]
Anatomy and physiology
Size
The giant Pacific octopus (Enteroctopus dofleini) is often cited as the largest known octopus species. | yes
Malacology | Are most octopuses venomous? | yes_statement | most "octopuses" are "venomous".. the majority of "octopuses" possess "venom". | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4396017/ | Infiltrated plaques resulting from an injury caused by the common ... | Share
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Abstract
Several species of octopus are considered venomous due to toxins present in the glands connected to their "beak", which may be associated with the hunting and killing of prey. Herein, we report an accident involving a common octopus (Octopus vulgaris) that injured an instructor during a practical biology lesson and provoked an inflamed infiltrated plaque on the hand of the victim. The lesion was present for about three weeks and was treated with cold compresses and anti-inflammatory drugs. It healed after ten days of treatment, leaving a hyperchromic macule at the bite site. The probable cause of the severe inflammation was the digestive enzymes of the glands and not the neurotoxins of the venom.
Background
Cephalopod mollusks are marine animals that include squids, octopuses, cuttlefish, and nautiluses. Octopuses have eight feet and a horny beak that is used to capture prey and for defense strategies including jets of water that propel their bodies quickly in the opposite direction of perceived threats and ejection of clouds of dark ink to confuse predators [1]. The suckers on their arms are capable of provoking purpuric lesions by suction, whereas their beaks can inflict lacerations to victims (especially fishermen and divers), where the venom contained in their salivary glands penetrates the body [1–3]. The venom contains digestive enzymes and proteinaceous neurotoxins that immobilize prey. Octopus vulgaris is the most common octopus found off the Brazilian and South American coast and recent reports have indicated the presence of cephalotoxin, a glycoprotein, in the saliva of this species [4]. The consumption of raw octopus may provoke neuromuscular toxicity [4]. Herein, we report an accident involving a common octopus (Octopus vulgaris), which injured an instructor in a practical biology lesson, causing an inflamed infiltrated plaque on the hand of the victim. Although there was no initial manifestation besides local pain, the wound had a chronic evolution.
Case presentation
The patient, a 53-year-old female marine biologist, was "bitten" on the dorsum of the left hand by a small octopus three weeks before seeking medical attention. The injury occurred during a practical biology lesson when the patient was handling the live octopus (Figure 1). An erythematous edematous plaque, about 5.0 cm in diameter, appeared surrounding a small ulcer at the bite site. The wound was initially painful, but the reason for seeking medical help was difficulty in healing and the development of infiltration, hard on palpation, giving the site a hardened aspect (Figure 2). The injury was treated with cold compresses and non-steroidal anti-inflammatory drugs for about ten days before resolution, leaving a local hyperchromic macule. No histopathological exam was performed due to the regression of the lesion when the patient returned after treatment.
Chronic infiltrated plaque on the dorsum of the left hand of the patient. The image shows the central ulceration caused by the beak of the octopus.
Common octopuses (Octopus vulgaris) represent the majority of octopuses captured in Brazil [5]. Similarly to Indo-Pacific genera (including Hapalochlaena, the blue-ringed octopus), they possess digestive enzymes and neurotoxins in glands connected to their horny beak and may use this venom for defense or to subdue prey. Blue-ringed octopus venom contains mainly tetrodotoxin (like puffer fish venom), while common octopus venom is composed of cephalotoxin, a toxin less powerful than tetrodotoxin, but also capable of causing paralysis and other manifestations in humans [1–4]. The exact effects of the digestive enzymes are not known, but they can clearly provoke inflammatory reactions in victims’ tissue.
Conclusions
Our patient possibly manifested the effects of the enzymatic action of octopus saliva, and not the neuromuscular signs of toxins from the venom, since tingling in the hand or arm, muscle weakness or systemic manifestations were not reported. Possible complications of envenomations include secondary infections, with fever and purulent secretion. In this case, the response to non-steroidal anti-inflammatory drugs was good and probably abbreviated the evolution of the problem.
Consent
Consent was obtained from the patient for publication of this case report and accompanying images.
Acknowledgments
The authors would like to thank the staff of “Casa do Mar” for assisting the injured biologist when she was bitten by the octopus during a field lesson and Prof Teodoro Vaske Junior for the identification of the octopus.
Footnotes
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
VHJ and CAM observed and described the case and prepared the manuscript for publication. Both authors read and approved the final manuscript. | The wound was initially painful, but the reason for seeking medical help was difficulty in healing and the development of infiltration, hard on palpation, giving the site a hardened aspect (Figure 2). The injury was treated with cold compresses and non-steroidal anti-inflammatory drugs for about ten days before resolution, leaving a local hyperchromic macule. No histopathological exam was performed due to the regression of the lesion when the patient returned after treatment.
Chronic infiltrated plaque on the dorsum of the left hand of the patient. The image shows the central ulceration caused by the beak of the octopus.
Common octopuses (Octopus vulgaris) represent the majority of octopuses captured in Brazil [5]. Similarly to Indo-Pacific genera (including Hapalochlaena, the blue-ringed octopus), they possess digestive enzymes and neurotoxins in glands connected to their horny beak and may use this venom for defense or to subdue prey. Blue-ringed octopus venom contains mainly tetrodotoxin (like puffer fish venom), while common octopus venom is composed of cephalotoxin, a toxin less powerful than tetrodotoxin, but also capable of causing paralysis and other manifestations in humans [1–4]. The exact effects of the digestive enzymes are not known, but they can clearly provoke inflammatory reactions in victims’ tissue.
Conclusions
Our patient possibly manifested the effects of the enzymatic action of octopus saliva, and not the neuromuscular signs of toxins from the venom, since tingling in the hand or arm, muscle weakness or systemic manifestations were not reported. Possible complications of envenomations include secondary infections, with fever and purulent secretion. In this case, the response to non-steroidal anti-inflammatory drugs was good and probably abbreviated the evolution of the problem.
| yes |
Malacology | Are most octopuses venomous? | yes_statement | most "octopuses" are "venomous".. the majority of "octopuses" possess "venom". | https://a-z-animals.com/blog/shark-vs-octopus-who-would-win-in-a-fight/ | Shark vs Octopus: Who Would Win in a Fight? - AZ Animals | Shark vs Octopus: Who Would Win in a Fight?
The oceans are filled with amazing aquatic creatures. Among them are sharks and octopuses. Sharks are known for being some of the deadliest and most frightening animals that lurk in the ocean. They evolved to be perfect hunters. Meanwhile, octopuses are often credited as being the smartest invertebrate animals alive. In a battle between a shark and an octopus, could brain power outclass that much brawn? We'll get to the bottom of this situation by showing you how these creatures measure up to one another and what would happen in a fight.
Speed (Octopus):
– 25 mph over short distances – Takes in water and then pushes it out through their siphon to swim – Typically crawls on the bottom of the ocean floor
Senses (Shark):
– Good vision with sharp focus and night vision. – Great whites hear low frequencies, but it's not their best sense. – Incredible smell for substances at 1 part per 10 billion parts of water – Possess ampullae of Lorenzini to detect electrical fields
Senses (Octopus):
– Very good sense of sight, and they can detect how much light is coming at them – Can detect chemicals in the water (smell) on their arms – Low-frequency hearing, but some argue they are effectively deaf
What Are Key Differences Between a Shark and an Octopus?
The most significant differences between a shark and an octopus are their morphology, size, and senses. Sharks are cartilaginous fish with a torpedo-shaped body that can weigh up to 2,400lbs, while octopuses are cephalopods with eight appendages protruding from their heads that rarely weigh over 150lbs.
Sharks have highly attuned senses of sight and smell, and they function similarly to other fish and even humans. Octopuses gain sensory information from various parts of their bodies in atypical ways. For example, they can sense light and “smell” using their arms.
Sharks and octopuses are very different animals, and that can make comparing them in the realm of combat a little tricky. However, we’re going to take a look at several factors that will help us understand how a battle between these animals would play out.
What Are the Key Factors in a Fight Between a Shark and an Octopus?
When we talk about a fight between two animals, we can’t just focus on the greatest differences between them. A shark vs octopus battle is no different in that respect. We must look at definitive pieces of information like size, speed, and combat skills to determine which of these animals has what it takes to kill the other.
Shark vs Octopus: Size
Sharks are larger than octopuses both on average and at their extremes. The largest sharks can reach over 20ft in length and weigh 2,400lbs. However, the biggest octopus ever recorded only weighed about 600lbs and grew to 30ft in length.
In short, most sharks will have a significant size advantage over an octopus.
Shark vs Octopus: Speed and Movement
Sharks are faster than octopuses. The average shark can move at anywhere between 15mph and 25mph, but the mako shark can move between 30mph and 45mph.
Octopuses usually creep along the bottom of the water, moving less than 5mph. Yet, they can swim with burst speeds of 25mph by taking in and spraying out water.
Sharks have the speed advantage over octopuses.
Shark vs Octopus: Senses
Sharks have better senses than octopuses. As we said, sharks are just about the perfect hunters in the wild. They can see well, have a sense of smell that is simply astounding, and they can detect the electrical fields in their prey with their ampullae of Lorenzini.
Octopuses have unique senses. They can sense light throughout their bodies without having great vision. They can "smell" through their arms. Their hearing is limited to low frequencies, and some researchers believe their hearing is simply negligible. More research is needed to understand how they perceive the world.
Sharks are protected by their massive size and their ability to swim quickly over long distances and short distances. Their skin offers them some protection too.
Octopuses have better defenses than sharks.
Shark vs Octopus: Combat Skills
Octopuses vary in their approach to fighting depending on their species. Some of them use venom to overwhelm their opponents. Others will latch onto their prey with their tentacles and viciously bite them with their sharp beaks while also potentially introducing venom into the wounds. While it’s believed that all octopuses are venomous to some extent, not all their venom is impactful against other creatures.
Sharks have one major form of attack: biting. They use their incredible biting power to tear away large chunks of flesh, exsanguinating their foes by driving teeth up to 6 inches long into them and viciously shearing it away.
Who Would Win in a Fight Between a Shark and an Octopus?
A shark would win a fight against an octopus. Although we can find cases where an octopus kills a smaller shark, the size disparity is simply too much for an octopus to overcome. Even if the octopus uses camouflage, it can’t hide from a shark completely. Remember, they don’t need to see their prey to hunt them since they can smell very well and detect electrical fields coming from aquatic animals.
If the two were put in a clear tank of water, the shark would probably just sink its teeth into the octopus and call it a day, even if it was the largest one ever seen. A few bites from the shark would cause such incredible losses in terms of flesh and blood that the creature would die quickly.
An octopus could try to latch onto a shark and bite, potentially delivering venom into the shark’s system. However, unless that venom worked very quickly, the shark is going to come out ahead.
| Sharks are protected by their massive size and their ability to swim quickly over long distances and short distances. Their skin offers them some protection too.
Octopuses have better defenses than sharks.
Shark vs Octopus: Combat Skills
Octopuses vary in their approach to fighting depending on their species. Some of them use venom to overwhelm their opponents. Others will latch onto their prey with their tentacles and viciously bite them with their sharp beaks while also potentially introducing venom into the wounds. While it’s believed that all octopuses are venomous to some extent, not all their venom is impactful against other creatures.
Sharks have one major form of attack: biting. They use their incredible biting power to tear away large chunks of flesh, exsanguinating their foes by driving teeth up to 6 inches long into them and viciously shearing it away.
Who Would Win in a Fight Between a Shark and an Octopus?
A shark would win a fight against an octopus. Although we can find cases where an octopus kills a smaller shark, the size disparity is simply too much for an octopus to overcome. Even if the octopus uses camouflage, it can’t hide from a shark completely. Remember, they don’t need to see their prey to hunt them since they can smell very well and detect electrical fields coming from aquatic animals.
If the two were put in a clear tank of water, the shark would probably just sink its teeth into the octopus and call it a day, even if it was the largest one ever seen. A few bites from the shark would cause such incredible losses in terms of flesh and blood that the creature would die quickly.
An octopus could try to latch onto a shark and bite, potentially delivering venom into the shark’s system. However, unless that venom worked very quickly, the shark is going to come out ahead.
| yes |
Malacology | Are most octopuses venomous? | yes_statement | most "octopuses" are "venomous".. the majority of "octopuses" possess "venom". | https://factopolis.com/facts-about-octopuses/ | 16 Amazing Facts About Octopuses - Factopolis | 16 Amazing Facts About Octopuses
Who doesn’t love a couple of good facts about octopuses? In this article, you’ll learn more about one of the most amazing creatures on our planet, a creature that can change its color, disguise itself and even use tools.
Some of the traits of octopuses even make them feel somewhat otherworldly.
1. All octopuses are venomous
Did you know that all octopuses are venomous? That’s right – every single one of them! Though most octopus species have venom that isn’t harmful to humans, there are a few exceptions. They have a special venom salivary gland that allows them to inject venom into their prey as they bite it.
2. The blue-ringed octopus’ venom is deadly
They may be one of the cutest octopuses as they are tiny and colorful but they have enough venom to kill multiple people (over 20) in a matter of minutes. They are one of the most poisonous marine animals. What’s even scarier is that their bites can be painless and people might not even realize they were bitten until symptoms kick in. It can lead to death within minutes if not treated.
They can be found in tide pools and coral reefs in the Pacific ocean and Indian ocean, from Japan to Australia. Luckily, this species is pretty docile and its first instinct when facing danger is to flee and not attack.
3. Their blood is blue-hued
The majority of animal species have red blood; however, octopuses do not. Their blood is blue. Pretty cool, right? Our blood is iron-based (hemoglobin), which gives it its red color, while octopus blood is copper-based (hemocyanin), which gives it its blue color. One of the coolest facts about octopuses for sure!
4. Octopuses have 3 hearts
Why settle for one if you can have 3? Not all of them have the same role either: two hearts pump blood to the gills, and one heart pumps blood to the rest of the body.
5. They have been observed to use tools
Octopuses have been observed to use tools frequently. For example, they collect coconut shells and use them as protective shielding when moving in exposed areas. They use seashells for this as well.
6. They come in many sizes
When you think about an octopus, you usually have an orange-red, baseball-sized animal in mind, but these creatures come in many sizes. They range from the likes of the adorable Octopus wolfii, which measures less than an inch (2.5 cm), to the Giant Pacific octopus, which can measure up to 30 feet (9 meters) in length.
7. Octopuses live in saltwater only
There are no known species of octopus that would live in freshwater. They don’t tolerate freshwater. The body of an octopus would not handle the osmotic change in freshwater.
There have been some claims of a freshwater octopus being observed in the rivers of North America. Some of the sightings proved to be a hoax. Sometimes people have found dead octopuses; these had most likely, sadly, been released from aquariums.
In theory, if an octopus-like creature existed in freshwater its physiology would be very different from that of an octopus.
8. They can regrow their limbs
One of the most fascinating facts about octopuses is that they can regrow their limbs. They aren’t as cool as some species of starfish that can grow an entirely new starfish from a severed limb, but being able to regrow a limb is still pretty cool.
When the male mates or when some of the species feel threatened, they can detach their own limbs.
9. Their reproduction has a deadly twist
This is one of the most bizarre facts about octopuses. Most octopuses only mate once; both females and males die soon after reproducing.
Once she lays the eggs, the female will quit eating and will waste away all while also protecting and caring for the eggs. By the time the eggs hatch she will die.
The male octopus will usually die within months of mating. They too stop feeding and they also become uncoordinated making them easy prey.
10. Octopuses can change color
They are (one of) the chameleons of the sea. They can change color thanks to chromatophores, special color-changing organs found throughout the skin of the octopus.
11. When threatened, most can squirt ink
They are cephalopods just like squid and it is normal to see them squirt ink if they are threatened. They will squirt ink to confuse and escape their predators. The ink is black or bluish-black.
The deep-sea octopuses (Cirrina) don’t have ink sacs so they do not produce (or squirt) ink.
12. Octopus are mostly solitary animals
This animal is notoriously solitary. But as with all things, there are exceptions to the rule. Scientists observed a species (Octopus tetricus) also living in a group.
13. They breathe through gills, like fish
Octopuses have gills that allow them to breathe underwater as fish do. They are not fish, though; octopuses are marine mollusks, closely related to squid, nautiluses, and cuttlefish.
14. Octopus can survive on land for a short period of time
They need water to breathe but they can survive on land for short periods of time, ranging from a couple of minutes to almost half an hour.
Some are known to get out of the water, especially at night, to hunt for food on the shores.
15. They have beaks
When you think about beaks, birds are the first and usually the only animal that comes to mind. Octopuses have beaks as well, and if you look at one, it is pretty similar to that of a bird (especially a parrot). They are made differently though: a bird's beak is made out of keratin, while an octopus's beak is made of chitin.
16. Octopuses can squeeze through tiny holes
Since they have no bones their body is very squeezable. The beak is the hardest part of their bodies so if the beak fits through the hole, so will the octopus.
| 16 Amazing Facts About Octopuses
Who doesn’t love a couple of good facts about octopuses? In this article, you’ll learn more about one of the most amazing creatures on our planet, a creature that can change its color, disguise itself and even use tools.
Some of the traits of octopuses even make them feel somewhat otherworldly.
1. All octopuses are venomous
Did you know that all octopuses are venomous? That’s right – every single one of them! Though most octopus species have venom that isn’t harmful to humans, there are a few exceptions. They have a special venom salivary gland that allows them to inject venom into their prey as they bite it.
2. The blue-ringed octopus’ venom is deadly
They may be one of the cutest octopuses as they are tiny and colorful but they have enough venom to kill multiple people (over 20) in a matter of minutes. They are one of the most poisonous marine animals. What’s even scarier is that their bites can be painless and people might not even realize they were bitten until symptoms kick in. It can lead to death within minutes if not treated.
They can be found in tide pools and coral reefs in the Pacific ocean and Indian ocean, from Japan to Australia. Luckily, this species is pretty docile and its first instinct when facing danger is to flee and not attack.
3. Their blood is blue-hued
The majority of animal species have red blood; however, octopuses do not. Their blood is blue. Pretty cool, right? Our blood is iron-based (hemoglobin), which gives it its red color, while octopus blood is copper-based (hemocyanin), which gives it its blue color. One of the coolest facts about octopuses for sure!
4. Octopuses have 3 hearts
Why settle for one if you can have 3? Not all of them have the same role either: two hearts pump blood to the gills, and one heart pumps blood to the rest of the body.
5. | yes |
Malacology | Are most octopuses venomous? | yes_statement | most "octopuses" are "venomous".. the majority of "octopuses" possess "venom". | https://www.gagebeasleyshop.com/blogs/gb-blog/the-types-of-poisonous-octopus | The Types of Poisonous Octopus You Need to Know About – Gage ... | The Types of Poisonous Octopus You Need to Know About
Octopus are some of the most interesting creatures on our planet. They're smart, they can defend themselves from predators, and as research has shown, octopuses are one of the only non-human animals that use tools. Researchers have taken on Octopuses as study subjects due to their high levels of intelligence and complex nervous systems. And given the size of their brains, their intelligence might even be on par with some vertebrates.
In a study carried out in 2009, researchers discovered that all octopus species are poisonous, and most of them are venomous.
Even though all are poisonous, there are some octopuses whose poison is so strong that it can kill a human being in minutes.
The following is a list of the strongest types of poisonous octopuses you need to know about.
The blue-ringed Octopus is the most poisonous of all octopuses. This type of Octopus has venom that contains tetrodotoxin, a substance that acts as a neurotoxin to humans.
Even though its entire body is covered with venom, this type of Octopus doesn't bite people itself. However, if threatened, the blue-ringed Octopus will show off by flashing its bright yellow and black rings. It can also spray poison at its enemies and cause blindness and paralysis.
The poison in this octopus' venom is strong enough to kill a human within minutes unless an antidote is given immediately.
The blue-ringed octopus is found in the Sea of Japan, Southern Australia, and the Philippines.
The mimic Octopus has a unique ability to imitate other animals such as lionfish, sea snakes, flatfish, brittle stars, crabs, and mantis shrimp. This Octopus can also change its body color as chameleons do.
Found in the Pacific Ocean, the mimic Octopus has venom in its saliva, which can cause paralysis and kill its prey.
One of the intelligent invertebrates is the Coconut Octopus. It is a very playful animal with less fear of its predators. Usually, they are found in shallow waters on coral reefs or anywhere where there are coconut trees like the western Pacific Ocean.
This medium-sized Octopus is the only one that has a coconut shell. They use it as a home and can often be found sitting on their shells. The animal uses them to protect themselves and as a buoyancy aid.
Common Octopus
The common Octopus is the most abundant species among all octopuses, and it can be found everywhere around the world. They live in large dens and eat crabs and mollusks such as prawns, snails, and clams.
This octopus' venom is not very strong, but still enough to paralyze small fish without killing them.
The common Octopus is very flexible and can squeeze through small holes and narrow cracks.
Blanket Octopus
If a poisonous octopus looks dangerous, it's the blanket octopus. This huge creature has a body covered with venomous mucus, and its arms are almost as long as its whole body.
Female blanket octopuses are 10,000 times bigger than the males, making them interesting for researchers.
They are commonly found floating in the subtropical and tropical oceans.
California Two-spot Octopus
Found in the shallow waters of the Pacific Ocean, this Octopus is tiny and has a brown-magenta color. This animal's venom is not deadly for humans but will only cause nausea and headaches.
It is one of the friendliest octopus species and is easily approached by humans.
With a purple-brown body color, this is the smallest Octopus in the world and has a light-producing organ on its forehead.
This cephalopod can predict when it will die because they don't grow new cells once they reach adulthood.
It produces light to distract predators, but it can also turn itself into a flat octopus to escape.
Hydrothermal Vent Octopus
The Hydrothermal Vent Octopus is the only Octopus that lives near hydrothermal vents. Their eyes are covered by skin, but they have sharp eyesight to detect animals in low visibility.
Found in the East Pacific rise and near large colonies of Giant tubeworms, this Octopus is tiny, around 6 inches long. They feed on crabs and shrimp and use the teeth in their suckers to tear apart food.
The underwater world is one of the most beautiful places globally, but it can be very dangerous too. Almost all animals carry poison or venom to protect themselves against predators, and octopuses are no exception.
Even though they are soft-bodied invertebrates that lack skeletons and hard, shell-like skin, these intelligent creatures still have an arsenal of deadly weapons.
The venom is produced in the salivary glands and used to immobilize prey or defend against predators.
So, do you know which Octopus's venom is the deadliest? Check the list above one more time and see if you can find it. | The Types of Poisonous Octopus You Need to Know About
Octopus are some of the most interesting creatures on our planet. They're smart, they can defend themselves from predators, and as research has shown, octopuses are one of the only non-human animals that use tools. Researchers have taken on Octopuses as study subjects due to their high levels of intelligence and complex nervous systems. And given the size of their brains, their intelligence might even be on par with some vertebrates.
In a study carried out in 2009, researchers discovered that all octopus species are poisonous, and most of them are venomous.
Even though all are poisonous, there are some octopuses whose poison is so strong that it can kill a human being in minutes.
The following is a list of the strongest types of poisonous octopuses you need to know about.
The blue-ringed Octopus is the most poisonous of all octopuses. This type of Octopus has venom that contains tetrodotoxin, a substance that acts as a neurotoxin to humans.
Even though its entire body is covered with venom, this type of Octopus doesn't bite people itself. However, if threatened, the blue-ringed Octopus will show off by flashing its bright yellow and black rings. It can also spray poison at its enemies and cause blindness and paralysis.
The poison in this octopus' venom is strong enough to kill a human within minutes unless an antidote is given immediately.
The blue-ringed octopus is found in the Sea of Japan, Southern Australia, and the Philippines.
The mimic Octopus has a unique ability to imitate other animals such as lionfish, sea snakes, flatfish, brittle stars, crabs, and mantis shrimp. This Octopus can also change its body color as chameleons do.
Found in the Pacific Ocean, the mimic Octopus has venom in its saliva, which can cause paralysis and kill its prey.
One of the intelligent invertebrates is the Coconut Octopus. It is a very playful animal with less fear of its predators. | yes |
Malacology | Are most octopuses venomous? | yes_statement | most "octopuses" are "venomous".. the majority of "octopuses" possess "venom". | https://octonation.com/blue-ringed-octopus-facts/ | 5 Blue-Ringed Octopus Facts That'll Leave You Shook! - OctoNation ... | 5 Blue-Ringed Octopus Facts That’ll Leave You Shook!
Move over, great white sharks: the deadliest creature in the ocean is gelatinous, has a beak, and is the size of a golf ball. I present to you the unassuming, but beautifully colored, Blue-Ringed Octopus, whose saliva contains a toxin that's 1,000 times more powerful than cyanide. Let's take a dive and learn more fun Blue-Ringed Octopus facts!
Despite their small size, Blue Ringed Octopuses (Hapalochlaena sp., commonly referred to as BRO’s) are recognized as one of the most venomous animals in the world. There are 3 (and a disputed 4th) species within the family – all recognizable by the 50-60 iridescent blue rings that cover their body and are vividly contrasted on a yellow background.
The only outlier is the Blue-Lined Octopus (Hapalochlaena fasciata), which has rings on its arms but lines on its head. They don't always have this color pattern!
Where To Find The Blue-Ringed Octopus
You will find the Blue-Ringed Octopus in shallow coastal waters of the Indo-Pacific slinking around coral and rocky reefs, seagrass, and algae beds. They feed on small crustaceans including:
Crabs
Shrimp
Hermit Crabs
These tiny octopuses live for about 2 years, mating at the end of their lives. Females will lay 60-100 eggs, which are kept under the female’s arms or in her webbing till they hatch about a month later.
As with many octopus species, the male will die shortly after mating, and the female after her eggs are hatched.
Toxic Like Tetrodotoxin
All species in the Blue-Ringed Octopus family have saliva which contains tetrodotoxin (TTX), a powerful, fast-acting neurotoxin produced by bacteria.
One milligram of TTX can kill a person, making it one of the most potent natural toxins known. To put one milligram in perspective: it's smaller than the period at the end of this sentence. Relative to other venoms, by weight, tetrodotoxin (TTX) is 10 to 100 times more lethal than black widow venom.
This toxin is also found in:
Pufferfish
Moon snails
Rough-skinned newts
🐙 Fun Fact 🐙
There is no antidote for TTX. Treatment consists of life-supportive measures, including artificial ventilation. So, make sure to respect their space and stay clear!
The Bite Of A Blue-Ringed Octopus
The bite has been described as a small laceration with no more than a tiny drop of blood and is usually painless. Even the unnoticeable bite from a Blue Ringed Octopus can shut down the nervous system of a large person or animal in just minutes!
A tiny bite could result in complete:
Paralysis
Blindness
Loss of senses
Nausea
And, could result in death within minutes!
The toxin goes after your nervous system, blocking nerve signals throughout the body so the first thing you will feel is numbness. Ultimately, it causes complete paralysis of your muscles including the ones that you need to breathe.
Despite its deadly venomous bite, it is one of the most sought-after subjects for scuba divers and underwater photographers.
🐙 Fun Fact 🐙
Heads up! ALL octopuses are venomous but only two contain the lethal TTX (the Blue-Ringed Octopus and the Mototi Octopus). Other octopus species use venom made from different compounds but still manage to paralyze the nervous system of their prey while their saliva liquifies/breaks down muscle tissue so they can eat.
But Wait…There’s Good News!
Okay, so I know that was a bit scary. BUT, here are three pieces of good news to follow the seemingly terrifying information you just learned:
1. The Blue-Ringed Octopus DOES NOT want to hurt you!
They are not hiding out in tidepools waiting to pounce on you. They prefer chilling in their little den, eating crabs, and living that solitary quiet life. This is probably why you rarely hear of anyone dying or getting ‘attacked’ by a Blue-Ringed Octopus.
Lucky for everyone- they mostly bite what they know they can subdue/eat. Since the 1960s, only 3 deaths have been attributed to a Blue-Ringed Octopus!
2. Don’t Wait- Get Medical Attention Right Away.
In the very rare event that you find yourself bitten by a BRO, getting medical attention right away gives you a very good chance of surviving a bite!
While there is no antidote for tetrodotoxin, thankfully there is modern medicine and technology where equipment will breathe for you and medical professionals will keep you alive till your body has metabolized the venom.
According to a report back in 2006, a 4-year-old boy was bitten twice by a Blue-Ringed Octopus while playing on a beach in Australia. Minutes after, he experienced vomiting, lost the ability to stand, and complained of blurred vision. His fast-acting parents got him to medical help within the first 20 minutes where he was intubated and ventilated for 17 hours.
Happily, that boy was fine and had no further complications from the incident, but it’s a good reminder to make sure you know what your kiddos (and yourself) are playing with at the beach.
3. Possibly Using Blue-Ringed Octopus Venom For Good?
Researchers at the University of Melbourne are looking into cephalopod venom and how it could lead to new drug discoveries! Dr. Bryan Fry, a biochemist who studies venom variation in the world, says:
“Venoms are toxic proteins with specialized functions such as paralyzing the nervous system. We hope that understanding the structure and mode of action of venom proteins we can benefit drug design for a range of conditions such as pain management, allergies, and cancer.”
How cool is that!?
The next time you come across headlines like "Octopus venom powerful enough to kill 26 adult humans within minutes", you can stop the "Jaws" soundtrack, knowing they spend the majority of their lives in a den just chilling and only occasionally going out to hunt for crabs.
The Blue-Ringed Octopus: Beautiful Yet Toxic!
If you want to educate yourself some more about all sorts of different cephalopods, take a look at our encyclopedia. Or, what we call it, our Octopedia!
Connect with other octopus lovers via the OctoNation Facebook group, OctopusFanClub.com! Make sure to follow us on Facebook and Instagram to keep up to date with the conservation, education, and ongoing research of cephalopods.
| Blindness
Loss of senses
Nausea
And, could result in death within minutes!
The toxin goes after your nervous system, blocking nerve signals throughout the body so the first thing you will feel is numbness. Ultimately, it causes complete paralysis of your muscles including the ones that you need to breathe.
Despite its deadly venomous bite, it is one of the most sought-after subjects for scuba divers and underwater photographers.
🐙 Fun Fact 🐙
Heads up! ALL octopuses are venomous but only two contain the lethal TTX (the Blue-Ringed Octopus and the Mototi Octopus). Other octopus species use venom made from different compounds but still manage to paralyze the nervous system of their prey while their saliva liquifies/breaks down muscle tissue so they can eat.
But Wait…There’s Good News!
Okay, so I know that was a bit scary. BUT, here are three pieces of good news to follow the seemingly terrifying information you just learned:
1. The Blue-Ringed Octopus DOES NOT want to hurt you!
They are not hiding out in tidepools waiting to pounce on you. They prefer chilling in their little den, eating crabs, and living that solitary quiet life. This is probably why you rarely hear of anyone dying or getting ‘attacked’ by a Blue-Ringed Octopus.
Lucky for everyone- they mostly bite what they know they can subdue/eat. Since the 1960s, only 3 deaths have been attributed to a Blue-Ringed Octopus!
2. Don’t Wait- Get Medical Attention Right Away.
In the very rare event that you find yourself bitten by a BRO, getting medical attention right away gives you a very good chance of surviving a bite!
While there is no antidote for tetrodotoxin, thankfully there is modern medicine and technology where equipment will breathe for you and medical professionals will keep you alive till your body has metabolized the venom.
| yes |
Malacology | Are most octopuses venomous? | yes_statement | most "octopuses" are "venomous".. the majority of "octopuses" possess "venom". | https://golden.com/wiki/Octopus-YX9B9 | Octopus - Wiki | Golden | Octopus
Aside from the fact that the male octopus typically dies a few months after mating, the female octopus also dies shortly after her eggs hatch. Egg incubation normally takes 2 to 10 months, depending on the species and water temperature. During this time, the mother octopus stops eating and focuses only on protecting her eggs from any danger.
Mating may take up to several hours.
When mating, the male octopus will insert his hectocotylus into the female’s mantle cavity and deposit spermatophores. The hectocotylus is the modified arm of the male octopus which they use to transfer sperm to the female. Depending on the species, the mating may take up to several hours.
However, there is a risk of the male octopus being eaten during mating due to the cannibalistic nature of the female. To prevent this, they either mate at a distance, or the male mounts the back of the female, leaving time for his escape if things go wrong.
All octopuses have venom.
When an octopus catches its prey, it breaks into the shell and injects its venomous saliva into the prey to paralyze or kill it. Although all octopuses have venom, not all of them are dangerous to humans. Only the blue-ringed octopus can be fatal with just one bite.
The blue-ringed octopus' venom is fatal.
The blue-ringed octopus is the world’s most venomous marine animal. This octopus can paralyze and kill an adult human with a single bite. They are commonly found in the coral reefs and tide pools in the Indian and Pacific oceans.
Octopuses have decentralized brains.
Octopuses have decentralized brains, and the majority of their neurons live in the arms. Those neurons allow the arms to independently touch, taste, and perform their own basic motions, giving the impression that an octopus has nine brains.
Deep-sea octopuses can't produce ink.
All octopuses can produce ink except for those octopuses that live in the deep open ocean. The octopuses’ ink comes from the ink sacs in their gills. They squirt ink when they face danger and need to escape from their predators. Their ink is accompanied by mucous when produced.
Almost all octopuses are predatory.
The bottom-dwelling octopuses feed mainly on polychaete worms, whelks, clams, crustaceans. On the other hand, open-ocean octopuses feed mainly on other cephalopods, prawns, and fish.
Octopuses can regrow their arm if they lose one.
These soft-bodied octopuses are invertebrates – they don’t have bones. So their tentacles or “arms” are vulnerable to damage. The regrowth process will start as soon as they lose their tentacle or after it has been damaged.
An octopus can change color in an instant.
Octopuses can change their skin colors in the blink of an eye! The ‘chromatophores’, special cells of the octopus, are the reason behind their amazing transformation. These special cells beneath their skin have thousands of colors.
Timeline
February 15, 2022
Octopuses are smart creatures.
Their ability to change color, exert force greater than their body weight, and squirt ink when they feel threatened is evidence of their problem-solving skills. It is also part of the evolution of their intelligence.
February 15, 2022
The octopus breathes through its gills.
Like fish, the octopus breathes by extracting oxygen from the water through its gills. When the octopus is out of the water, it cannot breathe because its gills are no longer supported by water. However, it can still survive for a short time. | Although all octopuses have venom, not all of them are dangerous to humans. Only the blue-ringed octopus can be fatal with just one bite.
The blue-ringed octopus' venom is fatal.
The blue-ringed octopus is the world’s most venomous marine animal. This octopus can paralyze and kill an adult human with a single bite. They are commonly found in the coral reefs and tide pools in the Indian and Pacific oceans.
Octopuses have decentralized brains.
Octopuses have decentralized brains, and the majority of their neurons live in the arms. Those neurons allow the arms to independently touch, taste, and perform their own basic motions, giving the impression that an octopus has nine brains.
Deep-sea octopuses can't produce ink.
All octopuses can produce ink except for those octopuses that live in the deep open ocean. The octopuses’ ink comes from the ink sacs in their gills. They squirt ink when they face danger and need to escape from their predators. Their ink is accompanied by mucous when produced.
Almost all octopuses are predatory.
The bottom-dwelling octopuses feed mainly on polychaete worms, whelks, clams, crustaceans. On the other hand, open-ocean octopuses feed mainly on other cephalopods, prawns, and fish.
Octopuses can regrow their arm if they lose one.
These soft-bodied octopuses are invertebrates – they don’t have bones. So their tentacles or “arms” are vulnerable to damage. The regrowth process will start as soon as they lose their tentacle or after it has been damaged.
An octopus can change color in an instant.
Octopuses can change their skin colors in the blink of an eye! The ‘chromatophores’, special cells of the octopus, are the reason behind their amazing transformation. These special cells beneath their skin have thousands of colors.
| yes |
Malacology | Are most octopuses venomous? | yes_statement | most "octopuses" are "venomous".. the majority of "octopuses" possess "venom". | https://oceanfauna.com/is-octopus-ink-poisonous/ | Is Octopus Ink Poisonous? [No, Here's Why] - Ocean Fauna | If you want to know more about the ink of octopus, this article is for you.
What Is Octopus Ink?
Octopus ink, also referred to as Cephalopod ink, is a unique substance secreted by most species of octopuses as a defensive strategy against their predators. This ink is usually dark-colored or luminescent and released into the surrounding water by the cephalopods.
Octopuses use their ink primarily as a diversion tactic when they sense danger. When threatened, octopuses release a cloud of ink that temporarily obscures their predator’s vision, allowing the cephalopod to quickly swim away and escape from danger.
The composition of octopus ink is complex and consists of various organic compounds, including melanin, mucins, enzymes, and amino acids. Most notably, the melanin present in octopus ink is responsible for its dark coloration and helps to protect the organism from UV radiation and oxidative damage.
Interestingly, not all octopuses produce the same type of ink. Some species release luminescent ink that glows in the dark and may be used to help deceive predators or attract prey. Additionally, the composition of octopus ink can change depending on the octopus’s diet, age, environment, and other factors.
Although octopus ink has long been known for its use as a defense mechanism, recent research has revealed potential commercial applications for this unique substance. For instance, researchers are exploring the use of octopus ink as a natural food coloring or a possible treatment for certain medical conditions. More details are discussed below.
Usages Of Octopus Ink
Octopus ink is a unique substance that serves a variety of purposes. In this section, the different usages of octopus ink will be examined in detail.
Defence Mechanism
One of the primary uses of octopus ink is as a self-defense mechanism. When octopuses feel threatened by predators, they eject ink from their ink sacs to distract and confuse their attackers.
The ink is made up of melanin and mucus, creating a dark cloud the predator can’t see through. This allows the octopus to escape while the predator is disoriented.
Potential Medical Benefits
Research suggests that cephalopod ink may have potential medical benefits for humans. Melanin, an ink component, has been shown to have anti-cancer properties.
In fact, some studies have even shown that melanin from cephalopods could be used in anti-cancer drugs and chemotherapy. Additionally, compounds found in the ink can help protect immune cells, which may boost our immune response.
Pigment/Dye
Octopus ink is also used as a pigment and a dye. The ink has a deep black color, making it a popular choice for artists and designers. In addition to being used in traditional art forms, such as calligraphy and painting, octopus ink is also used as a dye for fabrics and other materials.
Writing Ink
In addition to being used as a pigment and dye, octopus ink is also used as writing ink. Historically, octopus ink was used as a writing ink in many cultures around the world. It was particularly popular in ancient China and Japan, where it was used for calligraphy and other forms of writing.
Food Coloring
Octopus ink is also used as a black food coloring or sauce. It’s commonly used in dishes like pasta, risotto, and squid ink paella. The ink provides a unique flavor and color to these dishes, making them highly sought after by foodies and chefs.
What Is the Chemical Compound of Octopus Ink?
Octopus ink is a fascinating and unique substance that has always intrigued scientists and researchers. It is a complex mixture of compounds produced by specialized cells located in the ink sac, a small gland found in the octopus body.
The ink is usually made up of secretions from two glands. The ink sac and its ink gland produce black ink that contains melanin. The majority of knowledge about cephalopod ink comes from the study of this ink. The funnel organ is a second organ that produces mucus but is not as well researched.
Common Octopus Inking Undersea
Although the chemical composition of octopus ink can vary slightly depending on the species and habitat of the octopus, most of the ink is made up of a mixture of melanins, proteins, polysaccharides, and other organic compounds.
The most abundant compound in octopus ink is melanin, which is responsible for its dark color and has been shown to possess antioxidant and antimicrobial properties.
Other important compounds found in octopus ink include tyrosinase, a copper-containing enzyme that produces melanin from the amino acid tyrosine, and peptides, which are protein fragments that have been shown to have various biological activities, including antimicrobial, antiviral, and anti-cancer properties.
Regarding the chemical formula of octopus ink, it is difficult to define a single compound, as it is composed of multiple compounds that vary in their chemical structure and properties. Nonetheless, studies have shown that octopus ink contains several amino acids, such as glycine, alanine, and leucine, and carbohydrates, including glucose and fructose.
Are Octopuses Venomous?
Yes, all octopuses are venomous. The venom in octopuses is typically used for self-defence, though some species also use it for hunting.
The venom is primarily comprised of neurotoxins, which affect the victim’s nervous system, leading to paralysis and potentially death. While octopuses are venomous, they are generally not aggressive toward humans and will only use their venom as a last resort.
However, it is still important to exercise caution when interacting with octopuses, particularly those with bright or iridescent coloring, such as the blue-ringed octopus, which is known to be particularly dangerous to humans.
Ingestion of the venom of a blue-ringed octopus can lead to death in as little as 30 minutes, making it one of the most venomous animals in the world and highlighting the importance of respecting these incredible creatures in their natural habitat.
Are Octopus Ink Sacks and Venom Glands Different?
Yes, an octopus’s ink sacks and venom glands are indeed different and are not linked in any way. While both can elicit a defensive response from predators, they serve different purposes within the octopus’s arsenal of defence mechanisms.
The ink sacks of an octopus are primarily used as a distraction to allow the creature to quickly escape from a potential predator. When threatened, the octopus releases a cloud of ink into the water, creating a dark, diaphanous cloud that temporarily obscures the predator’s vision and allows the octopus to quickly get away.
On the other hand, the venom gland of an octopus is used as a defensive weapon to deter predators and also to capture prey. The octopus’s venom contains a mix of toxins that can be lethal to smaller animals and cause severe pain and inflammation in larger predators or humans.
Is Octopus Ink Edible?
Yes, octopus ink is edible and is considered a delicacy in many parts of the world. Octopus ink is rich in flavor and has a unique taste that is often described as briny, smoky, or earthy. The ink is extracted from the octopus’s ink sac, a small gland located near the digestive system.
Octopus ink is used as a natural food coloring and is often added to dishes like pasta, risotto, and squid ink paella. The ink provides a deep black color to the dish, giving it a unique appearance highly sought by foodies and chefs. The ink also adds flavor to food, giving it a subtle yet distinctive taste.
The use of octopus ink as a food ingredient is not limited to just savory dishes. It can also be used to add color and flavor to desserts, such as ice cream and chocolate.
The ink has a high concentration of melanin, a natural pigment that is found in human skin. Melanin has been shown to have various health benefits, including antioxidant properties, anti-inflammatory effects, and even protection against UV radiation.
When preparing octopus ink for cooking, it is important to note that it can stain clothing and surfaces easily, so care should be taken when handling it. Additionally, the ink should be used in moderation, as it has a very strong flavor that can overpower other ingredients if used in excess.
Conclusion
You now have a compact overview of octopus ink, which has many uses, from self-defence to cooking.
If you have further queries regarding this matter, just let me know. I will answer them as soon as possible.
Marine Conservation | Are offshore wind farms harmful to marine life? | yes_statement | "offshore" "wind" "farms" are "harmful" to "marine" "life".. "marine" "life" is negatively impacted by "offshore" "wind" "farms". | https://sinay.ai/en/does-offshore-wind-affect-marine-life/ | Does Offshore Wind Affect Marine Life? | Does Offshore Wind Affect Marine Life?
Encouraging the use of offshore wind farms as a source of renewable energy is an important step toward becoming a greener, more sustainable world and fighting climate change. While offshore wind farms are a big step in this direction, the development of this potential energy source can have negative environmental effects on marine life and its ecosystem. In this article, we will explore the sources and impacts that offshore wind farms have on underwater species.
But first, what is offshore wind energy?
Offshore wind energy is an alternative way to generate electricity that helps to reduce fuel consumption and carbon dioxide emissions, both of which are harmful to the environment.
In offshore wind farms (OWF), the electricity is harvested by wind turbines, or windmills, that convert the kinetic energy of the wind into electrical energy. The turbines are installed in areas of open water, where there are far fewer barriers, so the wind blows stronger and more consistently, generating more energy.
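Because the power available in the wind grows with the cube of wind speed, even a modest increase in average offshore wind speed translates into a large gain in energy. The short sketch below illustrates this with the standard swept-area formula P = ½ρAv³; the air density, rotor diameter, and the two wind speeds are illustrative assumptions, not figures from this article.

```python
import math

def wind_power_watts(wind_speed_ms: float, rotor_diameter_m: float, air_density: float = 1.225) -> float:
    """Power available in the wind over the rotor's swept area: P = 0.5 * rho * A * v^3."""
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3

# Illustrative onshore vs. offshore comparison with assumed mean wind speeds.
onshore = wind_power_watts(wind_speed_ms=7.0, rotor_diameter_m=120)
offshore = wind_power_watts(wind_speed_ms=9.0, rotor_diameter_m=120)
print(f"Onshore:  {onshore / 1e6:.1f} MW available in the wind")
print(f"Offshore: {offshore / 1e6:.1f} MW available in the wind")
print(f"Ratio: {offshore / onshore:.2f}x from a roughly 29% higher wind speed")
```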
Wind farms are usually located in shallow waters, approximately 20 km away from the coast, with an average depth of 30 m. However, choosing the right place to set up a wind farm may depend on various factors, such as the depth of the water, wind, and the sea floor, as well as environmental disruptions to marine species living in that area.
Sources and impacts of offshore wind farms on the marine habitat
Offshore wind farms are a significant source of underwater noise pollution. From their construction to their deployment, offshore wind farms, with their turbines and metallic foundations, generate noise and vibrations below the sea surface (called “anthropogenic noise” because it is human-made rather than natural) that disturb marine life and flora, especially the underwater mammals that rely on sound (like echolocation or vocalization) to survive in the ocean.
We can identify several sources and effects derived from offshore wind farms that can have an impact on marine life:
1. Wind turbine vibration
Wind turbines are significant contributors to noise pollution, as they cause underwater acoustic vibrations that are transmitted as low-frequency noise through the water. Although the windmill’s blades are above the surface, the vibrations travel down the mast and into the base closer to the seabed, introducing a new source of noise pollution in the water. These sounds can interfere with marine mammals’ behaviors, as they alter their ways of communicating, feeding, reproducing, and navigating the oceans. On some occasions, these changes can lead to injury or even death.
The impacts of underwater noise pollution on marine life can vary from species to species, and long-term effects are yet to be confirmed, as there is little data available and few studies have been conducted on this issue.
2. Construction site activities
The bulk of noise pollution occurs when wind turbines are installed in the ocean. Certain construction activities, such as pile driving, wedging, and increased vessel traffic, involve heavy machinery that emits noise and vibrations when installing the wind turbines’ foundations and other offshore platforms, as well as when securing the power cables (which conduct the energy back to the mainland) to the sea floor.
The effects of offshore wind farm construction activities will differ among species. For some, construction areas might interfere with marine mammals’ navigation routes or disrupt the natural habitat of fish and other marine animals and drive them away. Furthermore, the pile driving used to lock turbine foundations in place can affect the sea floor ecosystem, forcing many native species away. Amongst other impacts, anthropogenic noise can also cause permanent hearing damage in fish and increase the risk of collision and spatial displacement in other species, such as turtles.
3. Electromagnetic fields
The underwater power cables that carry the renewable energy from the offshore wind farms to the mainland grid emit electromagnetic fields (EMFs). These artificial magnetic fields can interfere with, and even mask, the natural EMFs present in the ocean. Certain fishes and marine mammals are more sensitive than other species to this natural magnetic compass, which allows them to navigate their environment to search for food, communicate, stay orientated and migrate, locate resources and predators, etc.
When the ocean’s natural EMFs are disrupted by OWFs, changes in behavior have been observed in fish and other marine mammals, particularly those better able to detect power cables or fish that move closer to the seabed. For example, studies have shown that species like skates and rays displayed shifts in their exploratory and foraging behavior when in proximity to an EMF. Eels were shown to be able to cross over a cable but slowed down; fish such as salmon smolts could pass a cable EMF on their migration route, but a degree of misdirection was detected.
4. Metal pollution
Offshore wind farms are built on large metallic structures, and seawater is a powerful corrosive agent: prolonged exposure can introduce pollutants into the underwater environment. A lesser-known consequence of installing offshore wind farms below the sea is that corrosion of the metal foundations can change the quality of the water, and this can lead to fish poisoning as substances and chemicals are released into the water.
5. Artificial reef effect
The construction of offshore wind farms can lead to a phenomenon known as the “artificial reef effect”: if a wind farm is closed to fishing or is less frequented, an effect similar to a nature reserve can occur. A negative impact is that invasive species can find a home in these new artificial reef habitats and become a harmful environmental influence. On the other hand, the protections put in place around the wind farms’ foundations to avoid this very issue can lead to new habitats that compensate for the ones lost, although this will depend on the nature and location of the reef and the characteristics of the marine population.
Conclusion about how offshore wind affects marine life
Although wind energy and offshore wind farms can have beneficial impacts on the environment, as seen in this article, they can still have negative effects, as construction and operation activities at sea can disrupt the life of underwater species and their ecosystem. It is therefore important to keep monitoring the impacts of offshore wind farms so that we can better understand and protect our oceans.
Underwater noise is harmful because it prevents marine animals from hearing natural ocean sounds, pushes them away from their natural habitat, and can even change their migration patterns. This in turn impacts the ocean environment and natural ecosystem.
Human activities such as shipping, sonar, construction work (including dredging, drilling, and installing oil rig platforms), and seismic surveys all contribute to underwater noise pollution.
Marine Conservation | Are offshore wind farms harmful to marine life? | yes_statement | "offshore" "wind" "farms" are "harmful" to "marine" "life".. "marine" "life" is negatively impacted by "offshore" "wind" "farms". | https://marinemadnessdotblog.wordpress.com/2020/08/14/the-effects-of-offshore-wind-farms-on-marine-life/ | The effects of offshore wind farms on marine life – Marine Madness | Shining a light on the weird & wonderful creatures of our oceans and the important issues they face in a changing world
The effects of offshore wind farms on marine life
Increasing awareness about anthropogenic climate change and mounting public pressure has led to many countries committing to reduce their use of fossil fuels and increase their development of renewable energy sources. Although the switch to renewable energy will have an overall positive effect on the global climate and natural world, the construction of renewable energy installations can have both positive and negative effects on local ecosystems. One such example is offshore wind farms (OWFs) which can have some significant effects on the marine environment. This article focuses on some of these effects and the individual marine species that can be impacted by them, which need to be properly considered before new OWFs are installed.
On the surface OWFs seem like a positive environmental solution, but look beneath the waves and there is more to consider than you might think
Negative impacts
Environmental damage from construction
One of the first effects OWFs have on their nearby surroundings is physical damage caused by their construction. The main effects of this are seafloor habitat destruction and sediment suspension in the water column, caused by the disruption of sand and silt from the seafloor. Sediment suspension is likely to have a negative impact on fauna by increasing turbidity, mobilising contaminants and smothering sessile suspension-feeding animals, such as corals, sponges and anemones. A reduction in visibility from sediment suspension can also affect photosynthesis in algae and disrupt key behaviours in visual animals.
Different types of OWF foundations
Different types of OWF foundations will have different levels of effect on the seabed. Monopile foundations consist of a single tube which is hammered into the seabed. This type of foundation can only be used in shallower waters up to a depth of 30 m due to hydrodynamic forces. Tripod (three-legged) and jacket (four-legged) foundations are more stable and therefore can be used at greater depths, but their construction will have a greater environmental impact due to the amount of material that is driven into the seabed. However, there are alternative solutions, such as floating structures, which can be anchored to the seabed and reduce the need for pile-driving.
Noise pollution
One of the main issues caused by the construction and operation of OWFs is that they emit a lot of noise into the marine environment. Known as marine noise pollution, this can affect the behaviors of marine animals as well as potentially causing serious injury. Pile-driving during the construction of OWFs can generate noise up to 200 dB, while the operation generates up to 120 dB. This noise is mainly generated above the water but transmits through the tower and is then radiated into the surrounding water, adding to pre-existing noise from other sources. This can affect animal behaviour, particularly those that are more sensitive to sound, that rely on their use of vocalization for communication and those that use echolocation for navigation, such as cetaceans.
Subtidal noise pollution of an OWF in operation
In salmon, bass and harbour porpoises, it has been found that 90 dB is the level of noise that causes avoidance behaviours. This amount of noise is not instantly harmful to these animals, but prolonged exposure for around 8 hours could equal exposure to 130 dB which can cause either temporary or permanent hearing damage. An example of an affected species is harbour porpoises who have been found to initiate avoidance responses within 20 km of pile-driving activity, whilst pile-driving noise has also been shown to disrupt their vocalizations which can take up to 72 hours to return to normal.
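One way to read the comparison above is in terms of cumulative sound exposure level (SEL), which grows with the logarithm of exposure time, so a moderate received level sustained for hours can approach the exposure of a single much louder event. The back-of-the-envelope sketch below works through that arithmetic; treating the 90 dB figure and 8-hour duration this way is an interpretation for illustration, not the method used in the studies cited.

```python
import math

def cumulative_sel_db(received_level_db: float, duration_seconds: float) -> float:
    """Cumulative sound exposure level: SEL = level + 10*log10(duration), duration in seconds (re 1 s)."""
    return received_level_db + 10 * math.log10(duration_seconds)

eight_hours = 8 * 3600  # 28,800 seconds
print(f"~{cumulative_sel_db(90, eight_hours):.0f} dB cumulative exposure")  # ~135 dB, in the region of the ~130 dB figure
```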
Noise mitigation systems such as bubble curtains (bubbles produced by hoses on the sea floor around the base of the turbine) have been shown to reduce the level of disturbance in harbour porpoises by 90%. However the hindrance that OWF construction poses to their communication could have a negative impact on their social interactions and migrations. This could also be the same for other cetaceans, but there is a lack of research in other species.
Other affected species include cod and herring, who can detect pile-driving noise from up to 80 km away. Additionally, cod and sole have been found to have behavioural responses to pile-driving noise, which included initial avoidance, higher swimming speed and habituation after time.
Harbour porpoises are just one of the many cetacean species that can be negatively impacted by noise pollution
When exposed to the sound produced by an air gun (222.6 dB which is 10% louder than the noise produced by pile-driving) pink snappers have also been found to have sustained ear damage, meaning they and many other fish species are likely to be impacted as well.
Meanwhile seismic surveys (used by OWF construction vessels to map the seafloor) have also been shown to cause avoidance behaviors in humpback whales. Although the noise output from seismic surveys may differ from that of OWF construction/operation, it is still worth noting the effects that noise pollution can have on animal behaviour, biodiversity and ecosystem functioning.
Electromagnetic fields
Another potential issue with OWFs is electromagnetic fields (EMFs), which are generated by the transportation of the acquired energy through electric cables that are built into the seabed. They could have an effect on the behaviour or physiology of fauna which use electroreception for detecting prey or conspecifics such as sharks and rays. A study on the small-spotted catshark found that they showed avoidance behaviours when exposed to electric fields which were equal to the maximum output of undersea cables, despite being attracted to much smaller electric currents which mimicked their prey. Thornback rays and spurdogs have also shown avoidance behaviour in the presence of EMFs, but more research is needed.
Non-indigenous species
One of the more overlooked issues associated with OWFs is the introduction of non-indigenous and invasive species, which presents a threat to biodiversity. Artificial structures (including OWFs, oil rigs, breakwaters and ports), are known to promote the spread of non-indigenous species, which can disrupt trophic webs and cause shifts in the populations of native species, normally with a negative impact on the overall ecosystem.
Japanese skeleton shrimp are one of the invasive species found on OWFs
Between 2008 and 2011, nine non-indigenous species were found on monopiles in the Egmond aan Zee OWEZ wind farm off the coast of Holland, eight of which have been described as invasive. One of the species, the Pacific oyster, even increased in abundance during the survey. On the Thorntonbank wind farm in the Belgian part of the North Sea (BPNS), ten non-indigenous species were found, seven of which are potentially invasive. The same invasive species were found in the C-Power and Belwind wind farms on Bligh bank, also in the BPNS. At the Horns Rev OWF the species dominance in a benthic community was found to have changed after its construction, in favour of the invasive amphipod Jassa marmorata. These are all examples of how OWFs can encourage invasive species and with more sites being constructed throughout the world’s oceans this is a problem that is likely to only get worse.
Positive impacts
On the flip side of the invasive species problem, OWF construction also introduces new habitats for indigenous species. It does so by introducing three-dimensional, hard substrate structures which act as artificial reefs in what would usually be a vast and flat seabed. It has also been argued that OWF construction only temporarily causes damage (through things like sediment suspension) and in the long-term can actually create up to 2.5 times more habitat space, which in turn has more potential for colonization than the original sediment based habitats. Not only does the foundation pile have the potential for harbouring intertidal, sessile organisms, but the scour protection at the foundation can also provide habitat for fish (see below).
This diagram shows the potential for designing wind turbines that can provide protection for fish below them
Biodiversity
Another potential benefit of OWFs is an increase in biodiversity, which is one of the main indicators of ecosystem health. This is believed to be the result of a reduction in sediment grain size and an increase in organic matter in the vicinity, as well as the introduction of ‘ecosystem engineers’, such as the polychaete worm Lanice conchilega, which have been shown to increase biodiversity in their ecosystems.
On the Nysted OWF in the Danish part of the Baltic Sea, more biological activity was recorded further up a turbine where blue mussels were most abundant. This species modifies habitats by filtering organic matter from the surrounding water and acts as a secondary hard substratum which promotes biodiversity. The Egmond aan Zee OWEZ wind farm has also increased the diversity and abundance of benthic organisms and attracted higher abundances of certain fish, mammals (including harbour porpoises) and in some cases even birds. Biodiversity at the Horns Rev OWF in the Danish North Sea was found to be higher and also had a 7% increase in biomass resulting in 50 times more food available for local fish populations.
Refuges for commercially exploited species
It has been suggested that OWFs can also act as a safe space for many commercially targeted species due to the increased abundance of food available and protection from fishermen, who tend to avoid OWFs for fear of entanglement. Research has shown that cod and pouting have benefitted from the construction of OWFs, as well as some species of crab. This is an area of research that again requires further work, but it shows another potential benefit that OWFs can provide. In the future it may even be possible to combine OWFs and Marine Protected Areas (MPAs) as a way of further protecting commercial species whilst revitalizing threatened ecosystems.
While OWFs can be potentially harmful to marine wildlife, research also shows they can provide a safe haven for some species too
Conclusion
For now, evidence from the reviewed literature shows that OWFs have an overall negative impact on local ecosystems, mainly due to the damaging effects of marine noise pollution from construction and operation of turbines. However, it is impossible to determine to what extent these negative effects can be counteracted by the positive effects. For example, OWF construction temporarily destroys habitat but in the long-term creates more habitat once the structure has been colonised. Additionally, OWFs have been found to increase local biodiversity but at the cost of facilitating the colonisation of invasive species. More research is needed on this topic so that it can be determined whether the expansion of renewable energy industries should focus on alternative sources of renewable energy such as solar energy to mitigate the ongoing climate crisis, especially if we are to protect the marine environment at the same time.
Owen is a BSc (Hons) Marine Biology student at Swansea University. He is currently interning with Swansea University’s Coastal Ecology Research Group, assessing megafauna biodiversity and conservation priorities in anthropogenically disturbed habitats with hopes to enhance marine renewable energy structures. You can follow him on Instagram @oceanmaster93 or get in contact with him on LinkedIn here.
Marine Conservation | Are offshore wind farms harmful to marine life? | yes_statement | "offshore" "wind" "farms" are "harmful" to "marine" "life".. "marine" "life" is negatively impacted by "offshore" "wind" "farms". | https://www.mdpi.com/2077-1312/4/1/18 | Expected Effects of Offshore Wind Farms on Mediterranean Marine ... | Notice
Abstract
Current climate policy and issues of energy security mean wind farms are being built at an increasing rate to meet energy demand. As wind farm development is very likely in the Mediterranean Sea, we provide an assessment of the offshore wind potential and identify expected biological effects of such developments in the region. We break new ground here by identifying potential offshore wind farm (OWF) “hotspots” in the Mediterranean. Using lessons learned in Northern Europe, and small-scale experiments in the Mediterranean, we identify sensitive species and habitats that will likely be influenced by OWFs in both these hotspot areas and at a basin level. This information will be valuable to guide policy governing OWF development and will inform the industry as and when environmental impact assessments are required for the Mediterranean Sea.
1. Introduction
The global demand for energy supply continues to increase rapidly [1]. Accelerated demographic and economic growth [2], modifications in energy usage as a result of climate change [3], and rising demands for rural electrification in many Middle East and North Africa (MENA) countries [4] have dramatically increased the energy demands of the Mediterranean region, a trend that is set to continue. Consequently, problems concerning the security of energy supply, and the impact of global warming and ocean acidification as a result of CO2 emissions, have stimulated research and development into environmentally sustainable energy. This drive is reflected in the Horizons 2020 EU Renewables Directive (2009/28/EC), with member states required to obtain 20% of their energy consumption from renewable energy sources by 2020 [5]. Non-EU Mediterranean countries have also recognized the need to decrease reliance on hydrocarbons and most have adopted similar policies [4].
Europe is seeing a rapid expansion of the wind energy sector on land; however, higher mean winds speeds due to a reduction in offshore surface roughness [6], and comparatively lower visual and noise pollution than onshore wind farms [7], has led to a recent expansion of marine wind farms with further planned developments particularly within the North Sea and Baltic regions [8]. As of 1 July 2014, the EU had a combined offshore capacity of 7343 MW, with a further 30,000 MW expected by 2020 [5,9]. Currently, the Mediterranean Sea has no operational offshore wind farms (OWFs) yet this is expected to go ahead imminently [10].
The environmental effects of OWF construction in the Mediterranean are as yet unknown. The Mediterranean has particular characteristics including minimal tidal ranges, high levels of biodiversity and endemism [11], and a high potential for range extension of alien species [12]. The region is also exposed to a suite of coastal pressures including pollution, busy shipping lanes, eutrophication, urban development, habitat degradation, and overfishing [13]. The effects of existing OWFs may not be directly applicable to the Mediterranean, highlighting the need for site-specific analyses before the commencement of large scale offshore developments. The aim of this paper is to systematically assess the biological effects of existing OWFs in Northern European Seas and consider those in relation to the unique conditions of the Mediterranean basin, to horizon-scan for the potential environmental effects and solutions if construction goes ahead.
2. Methods
The most important technical criteria for the identification of suitable OWF sites are wind resource availability and bottom depth. Evidently, for a rational candidate site identification, additional technical criteria should also be considered, such as distance from shore, bottom morphology and type of sediments, electrical grid infrastructure, etc.; however, the most important criteria are wind resource availability and bottom depth.
The wind speed threshold levels and the depth criteria were set in accordance with the EEA (2009) recommendations regarding the economic vitality and the distance for the minimum optical nuisance requirements of the OWFs, respectively. Since the current fixed-bottom wind turbine technology (monopile, gravity-based, jacket and tripod foundations) is limited to water depths up to 50 m, the depth range considered herewith is 20–50 m, and the lower threshold for the mean annual wind speed at 80 m above mean sea level was set to 5 ms−1 [6]. According to the above restrictions, using 10-year results (1995–2004) obtained from the Eta-Skiron model [14,15,16] and the General Bathymetric Chart of the Oceans global relief [16,17], potential wind energy sites (model grid points) were identified, while regions with high densities of such point locations were highlighted as offshore wind energy hotspots. The Eta-Skiron mesoscale meteorological model is a modified version of the non-hydrostatic Eta model and is used for the dynamical downscaling of the ECMWF Era-40 reanalysis data [17] and the ECMWF operational forecasts, with a fine spatial (0.10° × 0.10°) and temporal resolution (3 h) [16]. An evaluation of the Eta-Skiron model performance as regards the wind power density estimation for the Mediterranean Sea is presented in Soukissian and Papadopoulos [18].
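As a purely illustrative sketch of the screening step described above (not the authors’ actual code or data structures), selecting candidate sites reduces to applying two filters to each model grid point: a 10-year mean wind speed at 80 m of at least 5 m/s and a water depth between 20 and 50 m; clusters of passing points then mark the hotspots. The field and function names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GridPoint:
    lon: float
    lat: float
    mean_wind_speed_80m: float  # m/s, 10-year mean from the atmospheric model
    depth_m: float              # positive water depth from the bathymetric grid

def is_candidate_owf_site(p: GridPoint,
                          min_wind_ms: float = 5.0,
                          min_depth_m: float = 20.0,
                          max_depth_m: float = 50.0) -> bool:
    """Apply the two screening criteria: wind resource and fixed-bottom depth range."""
    return p.mean_wind_speed_80m >= min_wind_ms and min_depth_m <= p.depth_m <= max_depth_m

# Example with made-up points; in practice every point of the 0.10 x 0.10 degree grid would be screened
# and regions with a high density of passing points flagged as potential hotspots.
points = [GridPoint(4.8, 42.9, 7.1, 35.0), GridPoint(12.6, 44.9, 5.4, 28.0), GridPoint(18.0, 35.0, 6.0, 900.0)]
candidates = [p for p in points if is_candidate_owf_site(p)]
print(f"{len(candidates)} of {len(points)} grid points pass the screening")
```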
Biological effects resulting from the construction, and operation, of OWFs were identified in a review of studies in Northern European Seas. Peer reviewed literature took precedence and primary literature was obtained from several databases including CAB abstracts, Google Scholar, Web of Science, Science Direct, and Scopus. Relevant grey literature was also included in the compilation of information, and expert opinions were sought from several research institutes and industry experts (references herein). For clarity, impacts were separated via taxa (e.g., birds, marine mammals, fish, benthos and plankton). Detailed evidence obtained via the literature review is presented in a table format within the supplementary information (Table S1–S6); where available, specific evidence regarding the impacts at OWF hotspots will be highlighted.
3. Results and Discussion
Although many Mediterranean coastlines seem poorly suited to OWF development, some large areas have exploitable potential, including the coasts of the Gulf of Lyons, the North Adriatic Sea, the entire coastal area of the Gulfs of Hammamet and Gabès in Tunisia, off the Nile River Delta, and the Gulf of Sidra in Libya (Figure 1). The five hotspots spatially cover the width and breadth of the Mediterranean Sea. Here, we consider the potential effects on birds, marine mammals, fish, benthos and plankton throughout the Mediterranean and, where available, the possible impacts of OWFs within the specific hotspot regions.
3.1. Potential Effects of Mediterranean Offshore Wind Farms on Birds
Wind farms affect resident and migrating birds, through avoidance behaviors, habitat displacement, and collision mortality, but such impacts are difficult to monitor offshore [19]. Seabirds that use the marine environment for foraging or resting may be displaced by OWFs [20]. The Mediterranean has a low diversity of seabirds, but these species tend to be long-lived with low fecundity, traits that often make species vulnerable to abrupt environmental change [11,20] (Table 1). Fortunately, most Mediterranean marine birds are listed as “least concern” on the IUCN red list, although the Audouin's gull is listed as “near threatened,” the Yelkouan shearwater as “vulnerable,” and the Balearic shearwater as “critically endangered” [21]. All 16 Mediterranean countries have made commitments to protect these species at a national level [22]. With the exception of shearwaters [23], Mediterranean seabird populations appear to be increasing, particularly the yellow-legged gull [11,24,25]. These increases have been attributed to fish discards and improvements in coastal conservation [26,27,28], but changes to fishery discard practices following the reform of the Common Fisheries Policy may reverse this [29].
Studies of Northern European seabird populations have developed vulnerability indices to indicate seabirds likely to be affected by the presence of OWFs [30,31,32]. Using parameters that are heavily weighted on the risks of collision mortality (flight altitude, flight manoeuvrability, percentage of flight time, nocturnal flight altitude, disturbance by wind farm structures, ship and helicopter traffic, and habitat specialization), the North/Baltic Sea-based studies assessed 18 of the 29 Mediterranean seabirds. Notable exclusions to the list are the endemic species of the Mediterranean, which pose a greater conservation risk due to their small population sizes [33]. Garthe and Hüppop [13] identify the Black and Red-throated diver, the Sandwich tern, and the great Cormorant as the most sensitive of the Mediterranean seabirds within their index, and rated the Black-legged kittiwake, and the Black-headed gull as the least sensitive when all parameters were combined. Advancing this approach, Furness et al. [31] separated the hazards due to collision risk and habitat displacement. They identified the lesser black-backed gull and the Northern gannet as marine species sensitive to collision risk, and both the red and black-necked divers as most susceptible to long-term habitat displacement. These approaches lack any evaluation of species-specific OWF avoidance behavior and thus have large caveats attached to their findings. The approach of identifying at risk species via vulnerability indices is useful for the planning stages of OWFs; however, it does not determine if construction will have a detectable change in seabird population trends. Focus should preferably be given to understanding any direct effects OWFs will have on foraging success, e.g., diving behavior and prey characteristics, which in turn will impact reproductive success, juvenile survival and population trends [20].
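To make the idea of such a vulnerability index concrete, the sketch below shows one generic way factor scores could be combined into a single sensitivity value, with collision-related factors weighted more heavily. It is an illustration of the general approach only; the factor names, scores, and weights are hypothetical and do not reproduce the published indices of Garthe and Hüppop or Furness et al.

```python
# Generic illustration of a species sensitivity index: each factor is scored (1 = low risk, 5 = high risk)
# and combined using weights that emphasise collision-related factors. All values here are hypothetical.
factors = {
    "flight_altitude": 4,
    "flight_manoeuvrability": 2,
    "time_spent_flying": 3,
    "nocturnal_flight_activity": 2,
    "disturbance_sensitivity": 5,
    "habitat_specialisation": 3,
}
weights = {
    "flight_altitude": 2.0,           # weighted up: collision risk
    "flight_manoeuvrability": 2.0,    # weighted up: collision risk
    "time_spent_flying": 1.5,
    "nocturnal_flight_activity": 1.0,
    "disturbance_sensitivity": 1.0,
    "habitat_specialisation": 1.0,
}

sensitivity = sum(factors[k] * weights[k] for k in factors) / sum(weights.values())
print(f"Illustrative species sensitivity index: {sensitivity:.2f} (on the 1-5 factor scale)")
```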
Threats to Mediterranean bird populations are also directed towards migratory species. Worldwide, migratory species are declining in greater numbers than resident populations [34], and the Mediterranean basin is a major transit route for Saharan-Eurasian migration, as evidenced by both the Mediterranean-Black Sea flyway and the Adriatic flyway [35,36]. Many long-distance bird migrants, e.g., raptors and storks, rely on land lift via thermal upwelling for long-distance flight [37,38] and avoid broad fronts such as the Mediterranean Sea and the Saharan desert [37], creating bottlenecks at narrow passages of the Mediterranean Sea, including Gibraltar, the Straits of Sicily, Messina (Italy) and the Belen pass (Turkey) [39]. Wetlands around the Mediterranean provide suitable stopover sites for long-distance migrants to feed, rest and molt [40]. Some of the main wetlands around the Mediterranean are located within close proximity of potential OWF hotspots, particularly the Po Delta in the Northern Adriatic Sea, the Nile Delta, the Gabès Delta and the Camargue Delta in the Gulf of Lion (Figure 2).
High collision levels of migrating terrestrial birds at well-lit observing platform during periods of bad weather and poor visibility [42] indicate that wind farms located near the coast, or prominent migration bottlenecks, may pose a significant risk to migrating birds. More recent evidence also shows alternative crossing options for some passerine species, including non-stop crossings over the Mediterranean Sea [43]. This indicates that species-specific migrations are not fixed either temporally or spatially, and individual route decisions are due to risk analysis of many parameters including energy reserves, weather conditions, and genetic disposition [44,45]. Until large-scale migration routes across the Mediterranean Sea are better understood, developers face large difficulties in wind farm spatial planning in the region. Obtaining this information is an essential task for potential OWF developers in the Mediterranean.
Aside from identifying crucial regions for migrating birds, one of the most poorly understood aspects about OWF effects on birds is avoidance behavior. Turbine avoidance tactics employed by a species may apply to both resident seabirds and long-distance migrants; however, any changes to migratory routes are extremely difficult to monitor and may have large, indirect effects [42]. Avoidance behavior is possible at several scales, which are typically classified into micro, meso, and macro strategies. Micro-avoidance is the behavioral response to actively avoid rotating blades. Meso-avoidance is classified as behavior whereby species that fly at rotor height within the wind farm avoid the whole rotor-swept zone, and macro-avoidance is the behavioral alteration of a flight path due to the presence of a wind farm [46]. Macro-avoidance behavior strategies have been shown in some migrating individuals: the common eider Somateria mollissima, for example, exhibited avoidance behaviors of a wind turbine at a range of up to 500 m during the day [47]. The long-term consequences of employing avoidance techniques remain unclear. Among other parameters, real impacts to population trends of migrating birds will be highly dependent on the specific life histories of a species, the energy expenditure of avoidance strategies, energy reserves, and weather conditions during migratory periods.
There are several possible measures to reduce the effects wind farms will have on Mediterranean avian populations, e.g., shutdown orders and changes to the phototaxis level of structures [48,49]. However, preventative initiatives are much more effective, i.e., ensuring that the planning and placement of OWFs are not in the vicinity of large populations of species that have been identified as high risk within the Mediterranean. The sensitive species suggested here due to collision or habitat vulnerability include the lesser black-backed gull, the Northern gannet, and the red- and black-throated divers, while the endemic bird species and species whose populations have declined in recent decades (shearwaters) are identified due to their conservation importance (Table 1). More understanding of the cumulative effects of all impacts, at all potential development sites, is needed. Until then, all future approaches in regard to OWF spatial planning in the Mediterranean should be of a cautionary nature.
3.2. Potential Effects on Mediterranean Marine Mammals
Marine mammals are often high profile, charismatic species and have the potential for high socio-economic value in their natural habitats [50]. It is therefore essential to understand the effects OWFs will have on Mediterranean populations. The Mediterranean Sea is home to both resident and visiting marine mammals, of which most are experiencing a decline in population trends, with the exception of visiting humpback whales whose numbers have appeared to increase [11,51]. At a basin level, total population numbers are difficult to assess, with several species being classified as “data deficient” by the IUCN red list [20]. Nonetheless, certain regions have been identified as important habitats for marine mammals. Monitoring programs show a high percentage of fin whale sightings within the Ligurian Sea in comparison with other regions of the Mediterranean Sea [52]. The Alboran Sea has been shown to be an important area for long-finned pilot whale populations [53], and there is also evidence that, due to the East-West basin migration of Sperm whales, either the Strait of Sicily or the Strait of Messina is a critical area which enables migration [54].
In regard to OWF development, several resident marine mammals frequently use the coastal marine environment earmarked for potential developments including the critically endangered Mediterranean monk seal, the common Bottlenose dolphin, and visiting Humpback whales [51,55,56]. When assessing the combined species density of the resident marine mammals, the Gulf of Lion OWF hotspot displays the highest densities of resident marine mammals and as such can be considered as the most sensitive in regard to OWF development. The Gulfs of Hammamet and Gabès, the Gulf of Sidra, and the Nile Delta hotspots appear to support low populations of resident marine mammals (Figure 3).
The distribution of specific species of marine mammals is also of interest to developers. This is particularly true within the Northern Adriatic OWF hotspot. The Mediterranean monk seal frequently uses the coast of Croatia (Figure 1B, [57]), and Bottlenose dolphins are regularly sighted from the coast of Trieste and Kvarneric (Figure 1B, [55]). Other important areas for individual species include the coast of Senigallia and the Gulf of Gabès for the Humpback whale (Figure 1A,C) [51]. These regions will require particular attention during the spatial planning stages of developers.
Through monitoring programs conducted in the Northern European seas, marine noise, and in particular pile driving during construction, has been identified as the biggest impact to marine mammals [58]. Increased motorized vessel shipping during the operational phase of wind farms also increases noise levels to the area, and so is also identified as an impact; however, this is not at a level expected to significantly affect marine mammals [59]. Depending on the hearing ranges of the species, pile driven construction has the ability to produce hearing impairment, although for most species, hearing reactions are as yet undetermined [60].
A study measuring the propagation of sound during the construction phase of an offshore site in the NE of Scotland implied Bottlenose dolphins would suffer auditory injury within a 100 m range of the site and behavior disturbances up to 50 km away [61]. With the use of T-POD porpoise detectors, acoustic monitoring during the construction and into the operational phase of the Nysted OWF indicated a possible change in habitat use by the harbor porpoise (Phocoena phocoena), with a reduction in the level of echolocation signals produced by the porpoises [62]. A long-term study (10 years) at the same wind farm also showed a decline from baseline levels of echolocation signals [63].
A similar study at the Dutch wind farm, Egmond aan Zee, after construction measured significantly higher acoustic activity inside the farm in comparison with a control site [64], and this trend was mirrored in a recent study of harbor seal (Phoca vitulina) foraging which suggested an increase in habitat utilization at two operational wind farms (Alpha Ventus and Sheringham Shoal) [65]. The repeated grid-like movements indicated for the first time, successful foraging behavior by an apex predator within an OWF. The apparent differences between probable habitat uses may be due to local-scale ecological differences, local population habituation of wind farms, or differences in construction type of wind farms [64]. Due to critical population levels in the Mediterranean, the observed increases in seal foraging behavior around wind farms may not be relevant to the Mediterranean monk seal [56,65].
In regard to the impacts of noise levels in the Mediterranean, the semi-enclosed basin also suffers from some of the highest volumes of shipping traffic in the world [66] (Figure 4). In general, noise from wind farms is influenced by water depth, wind speed, turbine type, wind farm size, and substratum type [67]; combined with the high levels of existing background noise from maritime traffic in the Mediterranean, this risks having a cumulative effect in masking the communicative abilities of marine species [68]. When assessing the spatial density of traffic routes from 2013, the OWF hotspots of the Gulf of Lion, the North Adriatic Sea, and the Nile Delta show an already high density of vessels within the area (up to 140 m vessels·km−2·day−1); thus, high levels of background noise can be expected in these regions. The Gulfs of Hammamet and Gabès, and the Gulf of Sidra suggest much lower levels of background noise stress. The use of underwater noise propagation models by policy makers will be required to understand the combined influence of OWF construction, operation and maintenance shipping with current levels of background noise at site-specific locations.
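As a simple illustration of what such propagation models estimate, a received noise level can be approximated under idealised geometric spreading as the source level minus a logarithmic transmission loss. The sketch below uses that textbook spreading-loss relation with an illustrative 200 dB pile-driving source level (a figure quoted earlier in this compilation); real propagation models also account for bathymetry, sediment type, and sound-speed profiles, so this is only a rough indication.

```python
import math

def received_level_db(source_level_db: float, distance_m: float, spreading_coeff: float = 15.0) -> float:
    """Idealised received level: RL = SL - N*log10(r); N=20 is spherical spreading, N=10 cylindrical,
    and N=15 is a common intermediate assumption for shallow coastal water."""
    return source_level_db - spreading_coeff * math.log10(distance_m)

# Illustrative decay of a 200 dB source with distance from the pile-driving site.
for r in (100, 1_000, 10_000, 50_000):
    print(f"{r:>6} m: ~{received_level_db(200, r):.0f} dB")
```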
3.3. Potential Effects on Mediterranean Fish Communities
Mediterranean coastal communities depend on fishing-related activities, particularly artisanal fishing, throughout the basin [69]. Of the 513 species and 6 subspecies of fish in the Mediterranean, 8% are currently classified as threatened by the IUCN [70], and there has been an alarming decline of Mediterranean fish stocks, with the largest declines in demersal fish stocks [71].
The principal impacts to fish populations caused by wind farms are noise, electro-magnetic fields, and novel habitat gain. Recent studies have shown that the noise generated by pile driving during the construction phase of OWF farms can generate acute stress responses in juvenile fish species. Although the responses were recorded as acute, it is possible that repeated and prolonged exposure in the wild may lead to a decrease in fitness [72]. During the operational stages of OWFs, evidence indicates fish permanently avoid wind turbines only at a range of up to 4 m under high wind speeds (13 ms−1), and that their ability to communicate and utilize orientation signals is masked [67]. Increased background noise and seabed vibration from operational OWFs and associated marine traffic will also influence fish detection distances [73,74] (Figure 4). Greater numbers of experimental studies on individual fish species are needed before we can fully comprehend the impact of anthropogenic noise on fish [75].
Electromagnetic fields occur around intra-turbine, array-to-transformer and transformer-to-shore cables. The electro-sensitivity of many marine species is unknown, and there is a dearth of peer-reviewed information regarding the effects of electro-magnetic fields. Elasmobranchs are thought to be especially sensitive, due to their electro-sensory organs [76]. Several shark and ray species react to wind farm cables [77], but whether this has any affect at the population level is unknown. Magnetic fields could influence geomagnetic patterns used by some migratory marine species for navigation [78], and reports also show that electro-magnetic fields from OWFs may affect fish migration. Gill et al. [79] identified eight migratory fish species sensitive to electromagnetic fields, including the European eel Anguilla anguilla, the Atlantic salmon Salmo salar and the Yellowfin tuna Thunnus albacares. Limited in situ data are available about the actual effects on fish, which rely on magnetic fields for migration; however, several studies in the Baltic Sea have indicated that the swimming speed of European eels is reduced in the vicinity of underwater electric cables [78,80]. However, due to the limited availability of empirical studies, it is difficult to theorize the extent of the impacts that electro-magnetic fields have on Mediterranean marine species and their fecundity.
The most direct influence on fisheries due to the presence of OWFs is likely the addition of novel, vertical habitat to an area previously void of hard substratum. Numerous studies have found greater abundances in fish around OWFs than in surrounding areas, such as Atlantic cod Gadus morhua, pouting Trisopterus luscus and several species of gobies [81,82,83]. Currently, most empirically measured effects due to operational wind turbines are temporally limited, and not at the scale of OWFs [60]. Several studies have implied that the new habitat provides increased foraging for both primary and secondary food resources, and protection grounds from currents [84]. There is much discussion between ecologists over whether changes in species biomass will be due to attraction or production [85]. Stomach content analysis and energy profiling have shown that OWFs are suitable feeding grounds for both Atlantic cod and pouting species [86,87], indicating that there is extra biomass available at the sites. Juvenile recruitment of Atlantic cod has also been observed at wind farms in the Belgium part of the North Sea [87]. Changes in prey densities may also be masked by predation rates [65,88], and will potentially strengthen predator avoidance behaviors like diel migration, further complicating the relationship between attraction-production dynamics [77]. While the mechanics of fish abundance at OWFs is not yet fully understood and requires additional analysis, it is becoming increasingly obvious that any ecological benefits will only be attained if fishing practices are banned within the wind farms [89].
For Mediterranean fish species, it is difficult to state the effects based on the findings of Northern European studies as there is a limited availability of information, and the majority of existing monitoring programs focus on species that are not generally present in Mediterranean waters, e.g., Atlantic cod [82,85,88]. That being said, there is evidence that suggests a yield increase of fish populations at wind farms as opposed to the simple attraction model previously hypothesized [85,86]. This is of particular importance for the Mediterranean, as levels of fishing are unsustainable, and most fish stocks are in decline [71]. The possibility for creating de facto marine protected areas (MPAs) due to fishing restrictions imposed within OWFs is an interesting aspect in the developments of OWFs in the Mediterranean Sea. It is clear that well protected MPAs in the Mediterranean result in significantly higher biomass than those with no or minimal protection [90], although many Mediterranean MPAs lack adequate protection [91]. Enforcement of fishing restrictions in Mediterranean MPAs is a difficult issue, but the benefit associated with designating an area within a series of fixed structures is that fishing regulations may be easier to enforce. Benthic fishermen are less willing to drag their trawling gear within turbines as they risk entanglement, and there is a potential to monitor fishing activity of static and recreational fishermen by using fixed cameras. It is worth noting, however, that displacement of fishing effort is a serious concern for the management of reduced fishing effort regions [92]. The production of sound by many fish species for communication during spawning periods means it is also essential for policy makers to identify fish spawning grounds during environmental impact assessments, with the aim of restricting OWF construction in these noise-sensitive areas [93].
3.4. Potential Effects on Mediterranean Benthic Communities
The Mediterranean harbors many important benthic habitats including vermetid reefs, coralligenous concretions, shallow sublittoral rock, seamounts, deep-sea coral reefs, and abyssal plains [94,95]. The shallow sub-littoral sediment is a particularly valuable habitat for the Mediterranean benthos, as it is the preferred habitat of the endemic seagrass Posidonia oceanica, listed as a priority natural habitat under Annex 1 of the EC Habitats Directive (92/43/EEC on the Conservation of Natural Habitats and of Wild Fauna and Flora), due to its endemism, high productivity and provision of ecosystem services [96,97]. Favorable substrate conditions for OWF construction throughout Europe are typically soft sediment areas, and the piling and drilling of foundations and monopile jackets constitute the most direct impact on the marine environment. As they favor similar habitats required for the construction of OWFs, Posidonia beds are at risk from direct physical destruction, sedimentation, and changes in hydrographic regimes. Conversely, construction may prevent local trawling and therefore decrease the amount of destructive fishing that typically reduces Posidonia coverage [98]. Any plans for OWFs in the Mediterranean Sea will have to be carefully designed around the distribution of Posidonia to ensure the correct conservation practices for this endemic, priority species.
Other impacts on soft sediment communities relative to pre-construction states include changes in regional current regimes, causing shifts in macro-benthic assemblages on a localized scale [99,100]. Studies in the Belgian part of the North Sea have shown negative correlations between distance from turbines and grain size (measured from 15 m to 200 m from the turbine), which were also positively correlated with increases in organic matter and a shift in species assemblages. The closer to the turbine the soft sediment community samples were taken, the greater the increase in macrobenthic density and diversity [99].
Although changes in soft-sediment infauna are likely to be associated with wind farm construction, the most direct effect is the addition of hard substratum to the environment. Ecosystem shifts arise both from changes in the existing soft sediment communities and from the addition of hard substratum to a habitat previously void of available settlement substratum. Recruitment and colonization of the novel artificial habitat provided by turbine foundations increase the structural complexity and productivity of an environment previously low in infaunal diversity and density [101,102,103,104,105,106,107,108,109,110].
Research at an offshore research platform in the German Bight indicated that 35 times more macro-zoobenthos biomass was associated with the additional hard substratum than with the equivalent area of soft benthic sediment [81]. The increase in macro-zoobenthos biomass may appear beneficial in regard to productivity, yet in many cases species assemblages associated with artificial structures differ from those of the environment they replace and show lower levels of diversity than natural rock equivalents [81,110]. Long-term effects of ecosystem shifts are unknown, and species assemblages are influenced by many parameters including the material and texture of offshore structures, larval supply, oceanographic conditions, temperature, salinity and water depth [111]. The number of defining parameters involved in influencing the spatial and temporal colonization of offshore artificial structures highlights the need for extensive area-specific research and long-term in situ experiments to fully understand the regional implications of OWFs.
With regard to the Mediterranean, the limited investigative work into epibenthic colonization has focused on concrete artificial reefs [112,113,114] or anthropogenic rock structures [115,116]. Most study areas are in the northwestern region of the Mediterranean, with the exception of two studies in Turkey and Israel [117,118]. Only two studies have investigated an offshore steel structure in the Mediterranean [117,118]. Dominant species of epibenthic assemblages varied depending on the location and duration of the monitoring program, which ranged from 11 months to 20 years. Most studies cited the initial establishment of Hydrozoa, Bryozoa and Serpulidae [112,117,118,119,120]. In several studies, initial colonization was followed by the establishment and dominance of the commercially farmed Mytilus galloprovincialis [112,115,119,120]. However, the establishment of mussel beds on artificial structures in the Mediterranean may be highly localized, as several artificial structures showed no such dominance [117,121,122,123] or highly variable results [124]. The only long-term data set on a concrete artificial reef (20 years) reported five distinct phases of species assemblage: dominance of pioneer species, mussel dominance, mussel regression, mussel absence, and finally dominance of bryozoan bio-constructions [125]. Differences in the material used for offshore structures may have a significant effect on climax community composition; the two offshore steel structures studied in the Mediterranean were both dominated by bivalves after 52 and 70 months [121,119].
The susceptibility of the Mediterranean Sea to non-indigenous species [126] and the colonization of artificial substrata in the Mediterranean by alien species [122,127] mean that wind farms may also act as benthic “stepping stones” that facilitate range extension of alien species within the Mediterranean marine environment, which in turn may potentially reduce the biodiversity of the basin [111,128]. Given the apparently localized nature of benthic colonization communities, small-scale pilot studies at potential wind farm locations are essential for understanding whether wind farms will facilitate the spread of alien species. The use of before-after, control-impact studies by policy makers is strongly recommended.
3.5. Potential Effects on Mediterranean Planktonic Communities
The oxygen-rich, oligotrophic waters of the semi-enclosed Mediterranean have a low nutrient availability that, however, generally intensifies along both west-east and north-south gradients [129]. There is, nevertheless, a high spatial heterogeneity in the distribution of plankton throughout the Mediterranean Sea due to complex hydrodynamic circulation patterns forming multiple gyres and upwelling systems [130]. Most marine invertebrate and fish species have a planktonic larval stage; therefore, it is crucial to understand any effects OWFs may have on planktonic communities. There has been much speculation about the impacts caused by OWFs in relation to plankton [6,107,110]. Evidently, at some scale any offshore construction will have an effect on local hydrographic regimes [131]; however, the extent to which this will affect upwelling/downwelling episodes, and thus potentially phytoplankton blooms, is currently unknown. Analytical models indicate that OWFs affect surface gravity waves underneath rotor blades [132,133], and there is much speculation in the literature as to the impacts any hydrographic changes will have on plankton/nekton aggregations [19]. OWFs may also affect planktonic populations by providing hard substrata that facilitate planktonic connectivity through larval settlement during dispersal processes [134,135,136]. The presence of available hard substratum from wind turbines for the recruitment and settlement of pelagic larvae has the potential to extend passive larval connectivity across biogeographic boundaries [134]. It is noted that there is a scarcity of information in the literature regarding the impacts OWFs will have on planktonic communities.
4. Conclusions
As the likelihood of Mediterranean OWFs increases, there is an ever-growing need to assess the biological costs and benefits of OWFs in the region. The aims of this horizon-scanning review were two-fold: firstly, to identify areas likely to be considered for the development of OWFs within the region, and, secondly, to identify the biological impacts of these developments.
The five OWF hotspots are identified as the Gulf of Lion, the North Adriatic Sea, the Gulfs of Hammamet and Gabès, the Gulf of Sidra, and the Nile Delta. Understanding the species, habitats and taxa that are likely to be affected by the construction and operation of OWFs in these regions and the wider Mediterranean region will assist developers and policy makers with future spatial planning decisions regarding OWFs within the Mediterranean. Furthermore, the advancement and implementation of floating wind turbine technology may diminish many of the above-mentioned effects, which is a very promising prospect for the near future.
Supplementary Materials
Supplementary File 1
Acknowledgments
This work was supported by the project “Towards Coast to Coast NETworks of marine protected areas (from the shore to the high and deep sea), coupled with sea-based wind energy potential” (COCONET) from the European Community's Seventh Framework Programme (FP7/2007–2013) under Grant Agreement No. 287844.
Author Contributions
L.B. conceived the investigation; E.V. and T.S. analyzed model output data; L.B., S.D., S.R., C.A., and M.V.-L. performed and analyzed data from the literature review; L.B., M.J.A., and J.M.H.-S. interpreted the results. L.B., M.J.A., and J.M.H.-S. wrote the paper.
Kerckhof, F.; Norro, A.; Jacques, T.; Degraer, S. Early colonization of a concrete offshore windmill foundation by marine biofouling on the Thornton Bank (southern North Sea). In Offshore Wind Farms in the Belgian Part of the North Sea: State of the Art after Two Years of Environmental Monitoring; Royal Belgian Institute of Natural Sciences: Brussels, Belgium, 2009.
Kerckhof, F.; Rumes, B.; Norro, A.; Jacques, T.G.; Degraer, S. Seasonal variation and vertical zonation of the marine biofouling on a concrete offshore windmill foundation on the Thornton Bank (southern North Sea). In Offshore Wind Farms in the Belgian Part of the North Sea: State of the Art after Two Years of Environmental Monitoring; Royal Belgian Institute of Natural Sciences: Brussels, Belgium, 2010.
| 1A,C) [51]. These regions will require particular attention during spatial planning stages of developers.
Through monitoring programs conducted in the Northern European seas, marine noise, and in particular pile driving during construction, has been identified as the biggest impact on marine mammals [58]. Increased motorized vessel shipping during the operational phase of wind farms also raises noise levels in the area, and so is also identified as an impact; however, this is not at a level expected to significantly affect marine mammals [59]. Depending on the hearing ranges of the species, pile-driven construction can produce hearing impairment, although for most species, hearing reactions are as yet undetermined [60].
A study measuring the propagation of sound during the construction phase of an offshore site in the NE of Scotland implied that bottlenose dolphins would suffer auditory injury within 100 m of the site and behavioral disturbance up to 50 km away [61]. With the use of T-POD porpoise detectors, acoustic monitoring during the construction and into the operational phase of the Nysted OWF indicated a possible change in habitat use by the harbor porpoise (Phocoena phocoena), with a reduction in the level of echolocation signals produced by the porpoises [62]. A long-term study (10 years) at the same wind farm also showed a decline from baseline levels of echolocation signals [63].
A similar study at the Dutch wind farm Egmond aan Zee measured significantly higher acoustic activity inside the farm after construction in comparison with a control site [64], and this trend was mirrored in a recent study of harbor seal (Phoca vitulina) foraging, which suggested an increase in habitat utilization at two operational wind farms (Alpha Ventus and Sheringham Shoal) [65]. The repeated grid-like movements indicated, for the first time, successful foraging behavior by an apex predator within an OWF. The apparent differences between probable habitat uses may be due to local-scale ecological differences, local population habituation to wind farms, or differences in the construction type of wind farms [64]. | yes
Marine Conservation | Are offshore wind farms harmful to marine life? | yes_statement | "offshore" "wind" "farms" are "harmful" to "marine" "life".. "marine" "life" is negatively impacted by "offshore" "wind" "farms". | https://www.umces.edu/wind-energy | Wind Energy & Environmental Impacts | University of Maryland ... | Wind Energy & Environmental Impacts
Concerns about the impacts of climate change have led to efforts to reduce our carbon dioxide emissions. This requires a switch from the production of electricity from fossil fuel combustion to renewable sources. Wind energy is Maryland’s most abundant natural energy resource and can provide cleaner, homegrown energy.
Although wind power is an important source of renewable energy, there are some concerns about the environmental impacts of wind turbines. Understanding and mitigating against environmental impacts requires a baseline knowledge about the distribution and abundance of marine species and their habitats. The University of Maryland Center for Environmental Science has a broad range of expertise and a long history of research in the region that makes it exceptionally qualified for studying the environmental impacts of wind energy.
Impact of wind turbines on birds and bats
Risk of death from direct collisions with the rotors and the pressure effects of vortices.
There is also a risk of displacement from the area causing changes in migration routes and loss of quality habitat.
Impact of offshore wind developments on marine life
Noise is produced during the construction and installation of offshore wind farms from increased boat activity in the area and procedures such as pile-driving. The sound levels from pile-driving, when the turbine is hammered to the seabed, are particularly high. This is potentially harmful to marine species and has been of greatest concern for marine mammal species, such as endangered whales.
The noise and vibration of construction and operation of the wind turbines can be damaging to fish and other marine species.
Construction activities at the wind power site and the installation of undersea cables to transmit the energy to shore can have direct effects on the seabed and sediments, which can affect the abundance and diversity of benthic organisms.
Disturbance of the seafloor may also increase turbidity, which could affect plankton in the water column.
There are also potentially some environmental benefits of offshore wind farms. The turbines may act as “artificial reefs” and increase biological productivity in the vicinity. The presence of hard structures can provide habitat for barnacles, sponges, and other invertebrates, which may locally increase fish abundance. These processes can consequently result in attracting predators higher up the food chain.
Experts
The University of Maryland Center for Environmental Science has a broad range of expertise and a long history of research in the region that makes it exceptionally qualified for studying the environmental impacts of wind energy. | Wind Energy & Environmental Impacts
Concerns about the impacts of climate change have led to efforts to reduce our carbon dioxide emissions. This requires a switch from the production of electricity from fossil fuel combustion to renewable sources. Wind energy is Maryland’s most abundant natural energy resource and can provide cleaner, homegrown energy.
Although wind power is an important source of renewable energy, there are some concerns about the environmental impacts of wind turbines. Understanding and mitigating against environmental impacts requires a baseline knowledge about the distribution and abundance of marine species and their habitats. The University of Maryland Center for Environmental Science has a broad range of expertise and a long history of research in the region that makes it exceptionally qualified for studying the environmental impacts of wind energy.
Impact of wind turbines on birds and bats
Risk of death from direct collisions with the rotors and the pressure effects of vortices.
There is also a risk of displacement from the area causing changes in migration routes and loss of quality habitat.
Impact of offshore wind developments on marine life
Noise is produced during the construction and installation of offshore wind farms from increased boat activity in the area and procedures such as pile-driving. The sound levels from pile-driving, when the turbine is hammered to the seabed, are particularly high. This is potentially harmful to marine species and has been of greatest concern for marine mammal species, such as endangered whales.
The noise and vibration of construction and operation of the wind turbines can be damaging to fish and other marine species.
Construction activities at the wind power site and the installation of undersea cables to transmit the energy to shore can have direct effects on the seabed and sediments, which can affect the abundance and diversity of benthic organisms.
Disturbance of the seafloor may also increase turbidity, which could affect plankton in the water column.
There are also potentially some environmental benefits of offshore wind farms. The turbines may act as “artificial reefs” and increase biological productivity in the vicinity. | yes |
Marine Conservation | Are offshore wind farms harmful to marine life? | yes_statement | "offshore" "wind" "farms" are "harmful" to "marine" "life".. "marine" "life" is negatively impacted by "offshore" "wind" "farms". | https://capemaywhalewatch.com/blog/the-effects-of-underwater-noise-pollution-and-offshore-wind-farms-on-marine-mammals/ | The Effects Of Underwater Noise Pollution And Offshore Wind Farms ... | The Effects Of Underwater Noise Pollution And Offshore Wind Farms On Marine Mammals
Underwater or ocean noise can have natural/biological sources or anthropogenic sources, otherwise known as human-made noise. Ocean noise is very important to monitor due to its impact on the environment and the organisms within it. Natural sources of ocean noise have physical/geophysical, atmospheric, and geological aspects. These aspects can include wind and precipitation at the ocean surface, storms, seismic activity such as earthquakes, and other natural phenomena. Biological sources include marine mammal sound production as well as sound made by fish and other marine organisms. These sources usually do not harm the species within our oceans; it is anthropogenic sources that impact ocean life negatively, especially marine mammals. Some examples of anthropogenic contributions to ocean noise are vessel traffic, oil/gas industry activities, seismic or sonar profiling, construction, and explosive testing. The combination of all these is known as noise pollution. Since the industrial revolution, noise pollution has increased within our oceans, making it hard for marine mammals to escape. This can cause damage to their hearing and their bodies overall (Ocean Noise and Marine Mammals, 2003).
The known effects of anthropogenic noise on marine organisms studied by scientists mainly concern their behavior and health. Marine mammals are the most studied aquatic creatures because they have shown side effects from these noises. Mammals, such as whales, rely heavily on hearing to feed, migrate, and mate. When other sounds interfere, it can cause confusion. If those sounds have a high enough frequency, they can cause internal damage to these creatures. Marine mammals that use echolocation, most notably odontocetes, are more sensitive to these noises and show more behavioral changes. Typical changes in behavior are shorter surfacing, shorter dives, fewer blows per surfacing, longer intervals between each blow, changes in growth and reproduction, and changes in predator and prey interactions. Movements and migration patterns also change due to the location of the noise, so mammals will purposely swim out of their way to avoid a sound that is hurting them, and sometimes even end up washed ashore to escape the noise. Internally, noise can reduce hearing sensitivity and, for mammals that use echolocation, can alter their ability to send and receive signals (Ocean Noise and Marine Mammals, 2003). This then makes it hard for them to understand their surroundings. Stress levels are known to increase in these areas as well. These are all known side effects of ocean noise pollution studied throughout the years. There has been action to decrease the amount of noise within the oceans for the sake of the mammals. These changes include quieter engines, less shipping traffic, fewer military explosive tests and safer sonar equipment. However, something new could be adding to this noise: offshore wind turbine farms (Williams et al., 2015).
Image 2: The sperm whale is just one species of marine mammal that is affected by noise pollution.
Offshore turbines, or wind farms, are a source of renewable energy. While they are not a new development, more are being made and placed within our oceans to give us alternative energy. The problem is that construction generates immense noise, and keeping them running could be harmful to marine life, especially marine mammals. To construct these turbines, seismic surveys need to be conducted to understand and map out the seafloor to evaluate where these structures could be properly placed. During these surveys, sound waves are produced and bounce off objects; these waves return to the source to reveal the image. This is done over the entire seafloor where construction of these turbines is planned, and these sounds can be heard for miles. Pile driving is another method used to build turbines. This machine digs into the soil to provide a strong foundation to support these structures, driving steel or concrete into the ground. This whole process generates an extreme amount of noise. Once the whole construction process is over, the turbines themselves create noise which can be heard by marine life for miles around the original area. While these machines are being used for alternative energy, their overall construction and use add to the problems of noise pollution (Bailey et al., 2010).
Image 3: Examples of wind turbines with different foundations
Image 4: Example of a pile driver in the ocean
Hopefully in the near future, noise pollution can decrease, making the ocean less abrasive and loud for the marine mammals living within it. If it continues to increase, long term effects may occur and possibly create permanent damage to certain marine species.
-Ashalee Breining
Intern at Cape May Whale Watch and Research Center, Stockton University | Internally, noise can reduce hearing sensitivity and, for mammals that use echolocation, can alter their ability to send and receive signals (Ocean Noise and Marine Mammals, 2003). This then makes it hard for them to understand their surroundings. Stress levels are known to increase in these areas as well. These are all known side effects of ocean noise pollution studied throughout the years. There has been action to decrease the amount of noise within the oceans for the sake of the mammals. These changes include quieter engines, less shipping traffic, fewer military explosive tests and safer sonar equipment. However, something new could be adding to this noise: offshore wind turbine farms (Williams et al., 2015).
Image 2: The sperm whale is just one species of marine mammal that is affected by noise pollution.
Offshore turbines, or wind farms, are a source of renewable energy. While they are not a new development, more are being made and placed within our oceans to give us alternative energy. The problem is that construction generates immense noise, and keeping them running could be harmful to marine life, especially marine mammals. To construct these turbines, seismic surveys need to be conducted to understand and map out the seafloor to evaluate where these structures could be properly placed. During these surveys, sound waves are produced and bounce off objects; these waves return to the source to reveal the image. This is done over the entire seafloor where construction of these turbines is planned, and these sounds can be heard for miles. Pile driving is another method used to build turbines. This machine digs into the soil to provide a strong foundation to support these structures, driving steel or concrete into the ground. This whole process generates an extreme amount of noise. Once the whole construction process is over, the turbines themselves create noise which can be heard by marine life for miles around the original area. While these machines are being used for alternative energy, their overall construction and use add to the problems of noise pollution (Bailey et al., 2010).
| yes |
Marine Conservation | Are offshore wind farms harmful to marine life? | yes_statement | "offshore" "wind" "farms" are "harmful" to "marine" "life".. "marine" "life" is negatively impacted by "offshore" "wind" "farms". | https://ocnjdaily.com/four-congressmen-strongly-criticize-plans-offshore-wind-projects/ | Four Congressmen Strongly Criticize Plans for Offshore Wind ... | Four Congressmen Strongly Criticize Plans for Offshore Wind Projects
Congressman Jeff Van Drew waves to the audience during a March 16 congressional hearing on the possible negative impacts of wind farms.
By DONALD WITTKOWSKI
Four congressmen and a panel of expert witnesses denounced plans for a series of offshore wind energy projects as a mass “industrialization” of the ocean that will cause environmental harm and could seriously damage the Jersey Shore’s tourism industry.
The congressional hearing Thursday at the Wildwoods Convention Center was packed with an overflow crowd of about 450 people, while dozens of others were not allowed to enter the auditorium because of crowd restrictions.
“Let us in. Let us in,” chanted the people who were forced to stand outside while the hearing got underway.
U.S. Rep. Jeff Van Drew, who headed the hearing, vowed that President Joe Biden and Gov. Phil Murphy’s Democratic administrations will face stiff opposition if they continue to support the development of wind farms off New Jersey and other East Coast states.
“This is not the last hearing, I tell you. We are not giving up on this,” said Van Drew, the Republican New Jersey congressman whose district includes the shore communities in Atlantic and Cape May counties.
Video of Congressional hearing of offshore wind projects
U.S. Rep. Chris Smith, another New Jersey Republican congressman, blasted the federal and state regulatory agencies under Biden and Murphy’s control for allegedly ignoring evidence about the possible negative impacts that the wind farms could cause.
“The wind farm approval process has been shoddy at best, leaving unaddressed and unanswered numerous serious questions concerning the extraordinarily harmful environmental impact on marine life and the ecosystems that allow all sea creatures great and small to thrive,” Smith said.
In even stronger comments, Van Drew accused the Danish energy company Orsted, which plans to build a wind farm 15 miles offshore from Atlantic City to Stone Harbor, of misleading the public about the impacts of the project.
The hearing follows the deaths of 29 whales along the East Coast – including nine in New Jersey since early December – that wind farm critics suspect may have been caused by preliminary surveying work of the seabed for the offshore projects.
However, the National Oceanic and Atmospheric Administration, the New Jersey Department of Environmental Protection and the Marine Mammal Stranding Center in Brigantine are among the government agencies or organizations that dispute any connection between the wind farms and the dead whales.
“As of March 2023, no offshore wind-related construction activities have taken place in waters off the New Jersey coast, and DEP is aware of no credible evidence that offshore wind-related survey activities could cause whale mortality,” the DEP said in a statement this week.
NOAA and the Marine Mammal Stranding Center have concluded that most of the whale deaths were caused by vessel strikes after examining the carcasses and finding the types of injuries consistent with collisions with ships.
More than 400 people pack the auditorium at the Wildwoods Convention Center.
But during the congressional hearing, Van Drew cited a recent letter from a scientist at the Bureau of Ocean Energy Management that warned BOEM of the dangers that wind farms can pose to whales. BOEM is one of federal regulatory agencies overseeing wind farm projects.
“This is their own scientist warning BOEM that offshore wind will severely impact the whales, yet they moved ahead and completely ignored the few people in their group who would tell the truth,” Van Drew said.
Van Drew added that he believes “there has been a lot of not telling the truth going on” at the federal agencies that regulate the offshore wind projects.
U.S. Reps. Andy Harris, of Maryland, and Scott Perry, of Pennsylvania, were two other Republican congressmen who took part in the hearing. They both questioned the willingness of federal agencies to investigate the impacts of offshore wind projects and whether there is any connection to the whale deaths.
“The fix was in,” Harris said, alleging that BOEM simply ignored the potential dangers of wind farm technology in order to rush the projects through the regulatory approval process.
As part of their congressional inquiry of wind farm technology, Van Drew and the other Republicans are calling for a moratorium on all of the offshore projects until an investigation is done to establish whether there are any links to the whale deaths.
Some of the criticism of the federal agencies by Van Drew and the other Republicans was met by applause and cheers from the audience. Members of the audience were not allowed to speak during the congressional hearing, but a panel of expert witnesses gave testimony about the wind farms.
Testimony is heard from a panel of expert witnesses.
One witness, Michael Donohue, an attorney who is representing Cape May County in its legal fight against Orsted’s proposed wind farm, warned of the possible negative impacts of the project on tourism at the Jersey Shore.
Donohue said the impact on Cape May County’s nearly $7 billion per year tourism industry could be “devastating.” Citing statistics from the Cape May County Department of Tourism, Donohue said tourism could fall 15 percent across the board, resulting in a $993 million decline in tourism spending.
“A decline of 15 percent would be devastating to our economy,” he said.
“We live and thrive on tourism,” Donohue added. “It’s the lifeblood of our economy. We survive only because of tourism.”
Other witnesses focused on the possible harmful impacts of the wind farms on the environment, marine life, the commercial fishing industry and the cost of electricity.
Cindy Zipf, executive director of the environmental group Clean Ocean Action, predicted that wind farms would lead to the mass industrialization of the ocean.
“We are creating a vast power plant in the ocean, which will have catastrophic impacts,” Zipf said.
Bob Stern, former director of the Office of Environmental Compliance at the U.S. Department of Energy, testified that there have been an unusually high number of whale deaths in New Jersey since December.
So far, nine dead whales have washed up on New Jersey beaches in the last three months, compared to an annual average of seven deaths, he said.
Stern said “it doesn’t take a rocket scientist” to figure out that the whale deaths could be caused by preliminary work on the wind farms.
Seated at right, U.S. Rep. Chris Smith of New Jersey speaks to the audience.
Stern also warned of the aesthetic drawbacks of having nearly 100 towering wind turbines located 15 miles off the coast as part of the Orsted project.
“We’re looking at this as the destruction of the shore experience,” said Stern, who is also part of Save Long Beach Island, a grassroots group in Ocean County that is raising public awareness about wind farms.
Two representatives of the seafood and commercial fishing industries testified that the wind farms could greatly harm their operations. They said fishing boats would have to find new fishing grounds and would also have to travel farther distances to avoid the offshore wind turbines.
The result could be higher seafood prices, food shortages and job cuts because of the extra demands placed on their industries, they said.
Offshore wind projects are the “single greatest threat” to commercial fishing operations, said Meghan Lapp, liaison to Seafreeze, the largest trader of frozen seafood on the East Coast.
In yet more criticism leveled at the wind farm projects, another witness predicted that electric bills will increase if offshore wind is developed in New Jersey.
Gov. Murphy, a strong supporter of offshore wind technology, wants New Jersey to become a leader in green energy. So far, New Jersey has approved three offshore wind farms and is looking to add more. Murphy’s goal is to have offshore wind farms producing 11,000 megawatts of power in New Jersey by 2040.
David Stevenson, director of the Center for Energy and Policy at the Caesar Rodney Institute, testified at the hearing that electric bills will rise an estimated $100 each year for New Jersey households that have their power supplied by offshore wind projects.
Taking a long-term perspective, Stevenson said that would equate to a $2,000 increase in annual electricity bills for New Jersey households after 20 years of offshore wind power.
Ocean City resident Suzanne Hornick, of the group Protect Our Coast NJ, wears a T-shirt in opposition to offshore wind projects.
| “We live and thrive on tourism,” Donohue added. “It’s the lifeblood of our economy. We survive only because of tourism.”
Other witnesses focused on the possible harmful impacts of the wind farms on the environment, marine life, the commercial fishing industry and the cost of electricity.
Cindy Zipf, executive director of the environmental group Clean Ocean Action, predicted that wind farms would lead to the mass industrialization of the ocean.
“We are creating a vast power plant in the ocean, which will have catastrophic impacts,” Zipf said.
Bob Stern, former director of the Office of Environmental Compliance at the U.S. Department of Energy, testified that there have been an unusually high number of whale deaths in New Jersey since December.
So far, nine dead whales have washed up on New Jersey beaches in the last three months, compared to an annual average of seven deaths, he said.
Stern said “it doesn’t take a rocket scientist” to figure out that the whale deaths could be caused by preliminary work on the wind farms.
Seated at right, U.S. Rep. Chris Smith of New Jersey speaks to the audience.
Stern also warned of the aesthetic drawbacks of having nearly 100 towering wind turbines located 15 miles off the coast as part of the Orsted project.
“We’re looking at this as the destruction of the shore experience,” said Stern, who is also part of Save Long Beach Island, a grassroots group in Ocean County that is raising public awareness about wind farms.
Two representatives of the seafood and commercial fishing industries testified that the wind farms could greatly harm their operations. They said fishing boats would have to find new fishing grounds and would also have to travel farther distances to avoid the offshore wind turbines.
The result could be higher seafood prices, food shortages and job cuts because of the extra demands placed on their industries, they said.
| yes |
Online Learning | Are online degrees valued less by employers? | yes_statement | "online" "degrees" are "valued" less by "employers".. "employers" "value" "online" "degrees" less. | https://www.businessbecause.com/news/in-the-news/8868/online-degree-losing-favor-employers | Online Degrees Losing Favor Among Employers, Survey Reveals | Online Degrees Losing Favor Among Employers, Survey Reveals
Worldwide employers value online degrees less than last year; however, perceptions of skillsets and overall value differ across regions
By Laura Wise
Tue Jul 18 2023
Global employers are less likely to view graduates of online and in-person programs equally in their organization in 2023 than before, according to the Graduate Management Admission Council's (GMAC) Corporate Recruiter Survey.
While approximately half of global employers say their organizations value online and in-person degrees equally, nearly two-thirds of employers report graduates from in-person programs tend to have stronger leadership, communication, and technical skills than those from online programs.
Despite the global drop in regard for online programs, opinions of graduates vary significantly between regions.
In the US, just 27% of employers said they valued online and in-person degrees equally, down from 29% in 2022. However, 43% said they valued the technical skills of in-person graduates over online graduates (roughly 17 percentage points below the global average), suggesting US employers were relatively ambivalent about the skills difference between studying online and in-person.
In Central and South Asia employers largely valued online and in-person degrees equally, with 90% agreement among respondents from those regions. A total of 71% of recruiters from East and Southeast Asia valued the formats the same. However, around three in four of those employers said in-person graduates' leadership, communication, and technical skills were superior to those of online graduates.
Employers from Africa, Latin America, the Middle East, and Western Europe followed the global trend, except on the perception that in-person candidates have stronger leadership and communication skills. Employers from the Middle East were more likely to agree with this view and Western European employers were less likely to agree with it.
At an industry level, only 32% of employers in consulting said they viewed online and in-person degrees equally, while less than half said in-person graduates bring more technical skills than online graduates.
Online degrees saw a huge rise in popularity during and after the Covid-19 pandemic, as people were restricted to working from home and found new ways of elevating their leadership and business skills. | Online Degrees Losing Favor Among Employers, Survey Reveals
Worldwide employers value online degrees less than last year; however, perceptions of skillsets and overall value differ across regions
By Laura Wise
Tue Jul 18 2023
Global employers are less likely to view graduates of online and in-person programs equally in their organization in 2023 than before, according to the Graduate Management Admission Council's (GMAC) Corporate Recruiter Survey.
While approximately half of global employers say their organizations value online and in-person degrees equally, nearly two-thirds of employers report graduates from in-person programs tend to have stronger leadership, communication, and technical skills than those from online programs.
Despite the global drop in renown for online programs, opinions of graduates vary significantly between regions.
In the US, just 27% of employers said they valued online and in-person degrees equally, down from 29% in 2022. However, 43% said they valued the technical skills of in-person graduates over online graduates (roughly 17 percentage points below the global average), suggesting US employers were relatively ambivalent about the skills difference between studying online and in-person.
In Central and South Asia employers largely valued online and in-person degrees equally, with 90% agreement among respondents from those regions. A total of 71% of recruiters from East and Southeast Asia valued the formats the same. However, around three in four of those employers said in-person graduates' leadership, communication, and technical skills were superior to those of online graduates.
Employers from Africa, Latin America, the Middle East, and Western Europe followed the global trend, except on the perception that in-person candidates have stronger leadership and communication skills. Employers from the Middle East were more likely to agree with this view and Western European employers were less likely to agree with it.
At an industry level, only 32% of employers in consulting said they viewed online and in-person degrees equally, while less than half said in-person graduates bring more technical skills than online graduates.
| yes |
Online Learning | Are online degrees valued less by employers? | yes_statement | "online" "degrees" are "valued" less by "employers".. "employers" "value" "online" "degrees" less. | https://www.aeaweb.org/research/do-employers-frown-on-for-profit-degrees | Do employers frown on for-profit colleges and online degrees? | Do employers frown on for-profit colleges and online degrees?
Measuring the labor market value of these increasingly common credentials
Students study at Suzzallo Library at the University of Washington, November 28, 2007. Some for-profit colleges eschew the physical campus entirely in favor of online instruction.
Dmitry Burlakov/Bigstock
For-profit colleges saw a major growth spurt during the past decade, attracting millions of new enrollees (and billions of dollars in federal education funding) as they expanded online course offerings and worked to attract a range of low-income and non-traditional students that were poorly served by traditional nonprofit colleges.
But over the past year, the for-profit college education sector has come under fire. The U.S. Department of Education has vowed to crack down on for-profit colleges that are accepting federal education funding but offering students little in return. One major company went bankrupt after paying millions to settle claims of fraudulently advertising inflated graduation and placement rates -- and several prominent for-profit university companies are under investigation for similar deceptive practices.
Critics charge that some for-profit colleges are scooping up federal funds designed to help poor kids pay for college, only to offer mediocre instruction and dismal graduation rates. But there is an argument that for-profit colleges are more responsive and innovative than stodgy universities that have been around for decades or centuries, and are nimble enough to accommodate nontraditional students and design career-oriented programs like criminal justice and health technology.
Are students being duped into spending money on worthless degrees or are these institutions innovating and serving an important need-filling role? Evaluating the worth of a degree can be very difficult, but one way to judge the value of a degree would be to ask recruiters who routinely hire college graduates for entry-level positions how they feel when they see a for-profit or online college on a student’s resume. An article appearing in this month’s issue of the American Economic Review uses an audit study to approach the question.
For-profit and online colleges attract and enroll very different study bodies; while the typical new enrollee at a public or private nonprofit college is an 18-year-old student who just graduated from high school, the for-profit student body is much more diverse. Students might be older, returning to school after years in the workforce, or may have GEDs rather than high school diplomas.
Because the student bodies have such different backgrounds, comparing the employment rates or starting salaries of groups from different schools may not give a fair indication of the relative value of each type of degree.
The rise of the for-profit college during the ‘00s
Total student enrollment at U.S. two-year and four-year postsecondary programs by sector, 2002–2014. The for-profit sector saw huge growth during the period as online degree programs came into their own.
To get around this problem, the authors use an audit study, which in this case involves submitting a range of fictitious resumes to companies offering entry-level job openings. The virtue of the audit study is that these resumes can be designed to be exactly identical except for the educational history. By sending out fictitious resumes with different credentials and tracking the response from hiring firms, they can determine how employers feel about these various types of institutions.
The authors in this study adapted this approach by creating sets of identical resumes and then altering features of each resume including the educational history. To make sure the resumes were realistic, they created templates based on actual resumes posted on a major online job board. Then they carefully added a fictional work experience here or an externship there to match their fake applicants as closely as possible to the real applicants applying to these types of jobs.
Then they submitted those resumes online, providing phone numbers and email addresses that the researchers monitored for responses from employers. They focused on jobs in business and health that required little or no work experience so they could focus on job openings where the educational background would be of primary importance to recruiters. They used differences in the callback rates as the indicator of how interested recruiters were in each imaginary applicant.
The authors find substantial callback effects for business job openings that require a bachelor’s degree. When the researchers sent off “applicants” with BA’s from public colleges, the callback rate was about 8.5% on average. But applicants to the same job openings who had degrees from online for-profit colleges got called back 25% less often. These callback rates are adjusted for differential callback rates across different cities, job types, and firms, so they are a reflection of recruiters’ attitudes about each fake applicant’s degree type only.
The callback penalty was less, only about 10%, for resumes with degrees from local, brick-and-mortar for-profit colleges. This may reflect greater recruiter trust in for-profit schools with an established presence in the community that may have produced good employees in the past. However, the authors emphasize that most of the enrollment growth in the for-profit sector over the past decade has come at schools with a big online presence and nationwide reach and not at local for-profits.
Jobs in the healthcare sector that did not require any sort of certificate or license showed a similar pattern -- applicants with certificates from public schools on their resumes got significantly more callbacks than applicants with certificates from for-profit schools. But for more advanced jobs that did require a certificate or an occupational license, like postings for vocational nurses and pharmacy technicians, the effect mostly disappeared. The authors hypothesize that when employers have an outside, objective source of information about a candidate (like a licensing exam score), college credentials are less important.
The findings do not support the notion that a for-profit degree is a good investment relative to one from a public institution. We cannot easily translate a difference in callback rates into a difference in wages. But because yearly tuition at a for-profit college typically greatly exceeds that at a public university and for-profit degrees seem to be less valued by employers, the for-profit degree appears to be the less attractive investment.
Deming et al. (2016)
The responses for jobs with no bachelor’s requirement were even more disheartening for the for-profit “applicants.” In many cases, a for-profit associate’s degree did not result in higher call back rates than for applicants with no college experience listed at all.
Of course, all of the applicants in this study were fictitious, so the researchers can’t say anything definitive about whether real-life students with degrees from for-profit or online colleges are poorly prepared for the job market, and there is no way to tell what the eventual placement and salary outcomes would have been had the fake applicants somehow been allowed to progress farther through the job application process.
The authors also caution that differential callback rates could still partially be a reflection of recruiters’ beliefs about the types of people who attend for-profit colleges in the first place (rather than the quality of education to be found there). Even so, the results suggest that some graduates may face an uphill battle going on the job market with a for-profit college on their resume, especially if they have no other credential, test result, or occupational license to validate them. ♦
“The Value of Postsecondary Credentials in the Labor Market: An Experimental Study” appears in the March 2016 issue of the American Economic Review. | They used differences in the callback rates as the indicator of how interested recruiters were in each imaginary applicant.
The authors find substantial callback effects for business job openings that require a bachelor’s degree. When the researchers sent off “applicants” with BA’s from public colleges, the callback rate was about 8.5% on average. But applicants to the same job openings who had degrees from online for-profit colleges got called back 25% less often. These callback rates are adjusted for differential callback rates across different cities, job types, and firms, so they are a reflection of recruiters’ attitudes about each fake applicant’s degree type only.
The callback penalty was less, only about 10%, for resumes with degrees from local, brick-and-mortar for-profit colleges. This may reflect greater recruiter trust in for-profit schools with an established presence in the community that may have produced good employees in the past. However, the authors emphasize that most of the enrollment growth in the for-profit sector over the past decade has come at schools with a big online presence and nationwide reach and not at local for-profits.
Jobs in the healthcare sector that did not require any sort of certificate or license showed a similar pattern -- applicants with certificates from public schools on their resumes got significantly more callbacks than applicants with certificates from for-profit schools. But for more advanced jobs that did require a certificate or an occupational license, like postings for vocational nurses and pharmacy technicians, the effect mostly disappeared. The authors hypothesize that when employers have an outside, objective source of information about a candidate (like a licensing exam score), college credentials are less important.
The findings do not support the notion that a for-profit degree is a good investment relative to one from a public institution. We cannot easily translate a difference in callback rates into a difference in wages. But because yearly tuition at a for-profit college typically greatly exceeds that at a public university and for-profit degrees seem to be less valued by employers, the for-profit degree appears to be the less attractive investment.
Deming et al. (2016)
| yes |
Online Learning | Are online degrees valued less by employers? | yes_statement | "online" "degrees" are "valued" less by "employers".. "employers" "value" "online" "degrees" less. | https://thebestschools.org/magazine/employers-online-degrees/ | Will Employers Take My Online Degree Seriously? | Will Employers Take My Online Degree Seriously?
Evan Thompson is an education and careers writer with The Best Schools. He was previously a journalist with bylines in the Seattle Times, Tacoma News Tribune, and Everett Herald. His beats have included education, sports, business, outdoors, and life...
If an online degree comes from a regionally or nationally accredited school, employers will know that it is reputable.
The pandemic is leading more students to ask: Will employers take my online degree seriously? We have the answer.
What do employers really think about online degrees? Does it matter if they didn't come from traditional classroom settings? Will employers take them seriously?
With the onset of coronavirus and traditional schools moving online, these questions are more relevant than ever. Online colleges are becoming more popular, but prospective students may still worry about their credibility.
We're here to set the record straight: An online degree holds just as much weight as a traditional degree. In fact, data shows that most employers don't even differentiate between the two types of degrees.
In 2018, 15% of all college students in the United States studied exclusively online, according to the National Center for Education Statistics. The most popular online programs? Business, healthcare, education, and computer/information science.
In 2018, 15% of all college students in the United States studied exclusively online.
A big benefit of online learning is greater access to education. Other perks include studying from home, a flexible schedule, and work-life balance.
However, factors such as accreditation, program length, and degree level may influence what employers think. Whether you're a prospective or current student, the following advice should help reassure you of the value of an online degree.
Is Your Online Degree Serious?
How do you know if a school is up to snuff compared to other colleges and universities? That's where accreditation comes in.
Accrediting agencies evaluate the quality of education at colleges and universities to ensure they meet specific standards. Evaluation metrics include things like educational standards, graduation rates, and professional outcomes for students. Reputable accreditation agencies are recognized by the U.S. Department of Education or the Council for Higher Education Accreditation.
Generally, regionally accredited colleges are more highly valued than nationally accredited colleges, which tend to be vocational schools or for-profit institutions. Only students at accredited schools can access federal financial aid.
During your research into online programs, look for a stamp of approval from a recognized accrediting agency — preferably a regional one. If an online degree comes from a regionally or nationally accredited school, employers will know that it is reputable.
Regional Accrediting Agencies
The institutional accrediting sector is divided into regional and national accrediting agencies. Prospective online students should look for a stamp of approval from one of the following regional accrediting agencies:
Middle States Commission on Higher Education
There are plenty of red flags that indicate when a school is selling you a worthless diploma that isn’t accredited by a recognized agency.
Which Professions Are Best Suited for Online Degrees?
The most accessible online degrees deliver coursework asynchronously and have no on-campus or in-person requirements. These factors provide maximum flexibility for distance learners. However, some areas of study don't adapt well to this format.
You should choose an online degree that fits your intended career path, and some educational trajectories just don't work as well online. For example, you can earn an associate degree in psychology without ever leaving your home, but you'll need to complete an in-person graduate program if you plan to practice at the clinical level.
You should choose an online degree that fits your intended career path, and some educational trajectories just don't work as well online.
On the flip side, online accounting programs are widely accessible for students regardless of their location or professional obligations. Because the career is largely theoretical, students can gain relevant experience without having to participate in labs, practicums, or in-person clinical practice. Accountants have many potential career options depending on their degree level, including auditing clerk, loan officer, and financial advisor.
Other degrees that adapt well to an online format include medical assisting, computer science, and healthcare administration.
What Level of Degree Have You Earned?
Many employers care more about your level of degree than whether you obtained it online or through traditional programs. Before you hit the job market, you should know precisely how far your degree level can take you. Deciding which to pursue — an associate, bachelor's, or master's degree — depends on your career goals.
Degree level is often directly tied to your potential for advancement or earnings. In many fields, a two-year associate degree limits you to entry-level and assistive roles with little opportunity for upward mobility. Even if you earned it from a traditional school, a two-year program limits your opportunities.
However, enrolling in an online degree-completion program increases your career prospects. If a bachelor's degree is required for your chosen field, you should find an accredited online college that offers a four-year program. Online master's programs are equally valuable.
A two-year program could limit your opportunities, whether it was online or not. The higher your degree, even if you earned it online, the better your career prospects.
What Else Makes You Stand Out?
No matter where your degree came from, your experiences and skills are what really matter to employers. They care about the projects you worked on in school, the times you applied your skills, and personal connections you made.
An online degree from a reputable institution proves the validity of your education. Now it’s time to present yourself as an ideal candidate.
Put thought and effort into each cover letter, prepare well for interviews, and find ways to highlight your unique skills and passions — both academic and personal. Your resume, interview skills, and personal presentation matter just as much as a diploma.
The Final Word
Do online degrees get the same level of respect as traditional degrees? Yes, but do your homework.
As long as you attend a regionally or nationally accredited institution, consider the factors that employers care about, and put effort into expanding your experience, you should have no problem finding the right career path with your online degree.
Evan Thompson is a Washington-based writer for TBS covering higher education. He has bylines in the Seattle Times, Tacoma News Tribune, Everett Herald, and others from his past life as a newspaper reporter.
Do employers value online degrees equally?
In the last decade, online education programs have become an increasingly popular option for students around the world. With easy access to technology and the convenience of being able to study from anywhere, online programs offer an attractive alternative for those who want to earn a degree without having to leave their jobs, their hobbies, or other responsibilities. However, one of the most common concerns that students have about online programs is whether employers value them in the same way as traditional degrees.
The short answer is yes, employers value online degrees in the same way as traditional degrees, as long as the program is accredited and meets the same academic standards. In fact, more and more companies are hiring graduates from online programs, and they view these degrees as a sign that the candidate has skills such as self-discipline, the ability to work independently, and time management, all skills that are highly valued in today's job market.
However, many students still wonder if an online degree is worth it in the eyes of potential employers. With the increase in popularity of online education, it's a question worth exploring. In this post, we will discuss the value of online degrees and whether or not employers view them as equivalents to traditional degrees.
The value of online degrees
Online degrees have become increasingly popular in recent years due to their flexibility and accessibility. However, many students are still unsure about their value. Here are some reasons why online degrees are valuable:
Flexibility: online degrees allow you to study at your own pace and on your own schedule, which is ideal for working professionals and those with busy schedules.
Access to quality education: they offer access to the same high-quality education as traditional degrees, with the added benefit of being able to study from anywhere in the world.
Skill development: distance learning degrees often require students to develop skills such as time management, self-discipline, and technology proficiency, which are highly valued by employers.
Affordability: they are often more affordable than traditional degrees, which can be a deciding factor for students who are looking to save money.
Do employers really value online degrees equally?
Despite the benefits of online degrees, some employers may still view them as inferior to traditional degrees. However, this perception is changing. According to a recent survey by the Society for Human Resource Management, 79% of employers have hired someone with an online degree in the past year. Additionally, a study by Northeastern University found that employers rated the quality of online degrees as equal to or better than traditional degrees.
What employers look for
Regardless of the type of degree, employers value certain qualities in candidates. Here are some of the top qualities that employers look for:
Relevant skills and experience: employers want to see that candidates have the skills and experience necessary to perform the job.
Strong work ethic: they value candidates who are hardworking and dedicated to their work.
Good communication skills: effective communication is essential in any workplace, so employers look for candidates who can communicate clearly and effectively.
Adaptability: employers want to see that candidates can adapt to changing situations and are open to learning new things.
The benefits of online degrees from The Global American University, Schiller
The Global American University, Schiller, offers a range of online degree programs that provide you with a hands-on, experiential learning experience. Our programs are based on real-world challenges, allowing you to apply what you have learned to real-life situations. Here are some of the benefits of our online degree programs:
Tailor-made: our online degree programs allow you to study at your own pace and on your own schedule, making it easy to balance work, internships, travel, or any other commitments.
High-quality education: our programs follow a one-course-per-month format, so you can study in a focused and immersive way, and they are taught by experienced faculty members who are experts in their fields, providing an education on par with traditional degrees.
Skill development: all our programs, both online and on-site, are designed to help you develop the skills you need to succeed in your career, including critical thinking, problem-solving, and communication skills.
Learn by living: we are committed to experiential learning, in which you work through challenges and real-life experiences and apply what you have learned to real situations. It's a method that is also widely followed in distance education.
Global employability: we have multiple international partners and a huge global network of connections that will help you develop the skills you need to achieve a bright professional future, wherever you want.
In short, online degrees are highly valuable and increasingly accepted by employers. However, it's important to remember that employers value skills and experience above all else.
The Global American University, Schiller, offers several online degree programs that give students the experience employers are demanding right now. Whether you're looking to advance your career or start a new one, we encourage you to explore our programs and see how we can help you achieve your goals.
Today, distance is no longer an obstacle, and together we can achieve anything we set our minds to!
5 Reasons Why You Should Earn Your Bachelor’s Degree Online
There are a lot of benefits to pursuing a bachelor’s degree as a working professional. The qualifications and knowledge that you earn can be instrumental when it comes to the positions you can apply for, overall career advancement, and higher salaries.
There are also challenges, however. How do you juggle school, work, and personal life? In-person curriculum and class schedules may not always make it feasible to get a degree while working a full-time job. One great solution to this problem is getting your bachelor’s degree online.
Not only will it provide you with the flexibility you need to do it all, but it will also provide you with the same exact career benefits that an in-person degree program will.
Whether you are a working professional seeking to boost your career or someone looking for a career change, getting an online bachelor’s degree can help you achieve your goals while also maintaining all of your other responsibilities.
Here are 5 reasons why you should earn your bachelor’s degree online
1. An online degree allows you to study from anywhere
With an online bachelor’s degree, you will have the freedom to study from anywhere you’d like. Maybe it is most convenient for you to study from home, or perhaps you enjoy a bit of study time at your local coffee shop or library. You can also chip away at your schoolwork while on vacation or while traveling. Many online programs are asynchronous, which means there isn't a designated time you need to sit in on a lecture. An asynchronous schedule adds to the flexibility of an online bachelor's program and the ability for you to work at your time and convenience.
Whichever type of online degree program you choose, whether it’s fully online or hybrid, you’ll be able to study wherever you’d like for either the entire time or part of the time.
Online student tip #1:
When it comes to locations of study, be sure to find a quiet study area that allows you to focus on your classes and assignments. Ensure that your study area is clean and organized. This helps create an optimum learning environment.
2. An online degree gives you more flexibility
According to a 2022 BestColleges survey, 65% of the online students surveyed held full-time or part-time jobs. In addition, 91% of those surveyed had children under 18 living in their household.
Perhaps your life is quite busy as well. Fitting an in-person academic schedule in with life, work and family would most certainly be challenging.
Thankfully, getting an online degree will give you more leeway in terms of the hours you dedicate to studying and taking classes. Most programs are self-paced, giving you the ability to determine your own hours and have more flexibility when it comes to your schedule.
Online student tip #2:
When coming up with a personalized schedule, be sure to use a planner. Use it not only to schedule hours to study and take classes but also to plan for times of leisure, breaks, or fun activities. Life is already quite busy; at some point, you have to care for yourself too!
3. An online degree gives you more university options
Choosing to study online allows you to have your pick from a wide variety of universities around the country. In addition, you won’t have to worry about relocation costs or having to up-end your life to pursue a bachelor’s degree at a university you are interested in. Instead, you’ll receive the same quality of education but without having to leave your city or home.
According to Best Colleges, more than “1 in 10 post-secondary institutions offer courses primarily online” and “2.8 million students (15%) attend primary online colleges.” This means that you have a variety of options at your disposal.
One of those options might be PLNU, located in sunny San Diego. It offers prospective students various accredited online bachelor’s degree programs that are flexible and begin every 8 weeks. These online bachelor’s degree programs can help take your career to the next level with an easy-to-use format.
4. An online degree enhances your resume and gives you more career opportunities
There are several advantages to earning an online bachelor's degree, but one of the main ones is a higher salary. Statistically, those with an advanced education tend to earn more than those without one.
According to the U.S. Bureau of Labor Statistics, those with a bachelor’s degree earn an average of $1,334 a week compared to those with a high school diploma who earn an average of $808 a week.
Earning a bachelor’s degree online can also help you compete for higher positions in management or leadership. What’s great about studying online is that you don’t have to pause your professional life to do so and can study and work simultaneously.
Online student tip #4:
Consider your future goals when choosing a program. What is your ultimate objective for your career? Use that as a guide when looking for the right online bachelor’s degree program for you.
5. An online degree teaches you discipline and self-management
Most online programs are self-paced, which means that you will get the opportunity to increase your discipline and self-management.
This is a great asset to have as you work to build your career.
Online student tip #5:
Take some time to discover more about your learning style so you can be aware of your strengths and weaknesses and adapt your study method accordingly. Are you a visual or audio learner? Do you prefer to learn by repetition or associating what you’re learning with physical activity of some kind? Whatever it is, take the time to explore that and enhance your learning experience.
What are common online learning myths?
If you are still on the fence as to whether pursuing an online bachelor’s degree is the right thing for you, here are some common online degree misconceptions that could help answer any questions you might have.
1. Online degree programs are not valued in the job market
One common misconception is that employers will value online degrees less than in-person ones. That is not necessarily true, especially if you choose an accredited degree from an accredited university.
In addition, the fact that you took the time to seek any type of education in the first place while working, will show your employers that you care about your career and about improving your skills.
2. Online degree programs are easier than in-person programs
Given that online degree programs can be completed at your own pace and in a shorter amount of time, people often get the impression that they are somehow easier. In fact, most online programs can be just as demanding as in-person ones if not more.
An added challenge is that students have to be self-disciplined and motivate themselves to finish their degree at a reasonable time.
3. Online degree programs take away the opportunity to network
Another common misconception about online degree programs is that students lose the opportunity to network and connect with other students and their professors. Actually, many online degree programs offer students an opportunity to virtually connect with one another and with professors. As you begin the program, be sure to reach out to your professors to stay connected and perhaps even organize virtual meet-ups among students.
Next steps - start your online degree program today
PLNU offers a vast array of online bachelor’s degree programs with flexible class schedules. Start dates are scheduled for every 8 weeks, and you are free to learn at your own pace.
PLNU’s online bachelor’s degree programs will give you the opportunity to pursue education and still get to do your favorite things in life.
Are Online Degrees Respected? Finding a Credible, Online Degree Program
Online education is here to stay. For years, colleges have been integrating online courses and degree options into their program rosters—and when the COVID-19 pandemic hit, online learning became the norm. Today, we are more equipped than ever to handle online education. We have the technology, the resources, and now the practice to successfully complete college courses online. We also acknowledge, perhaps more than ever, the benefits of doing so. Online learning allows college students to continue working their jobs and managing their home life, all while achieving a college degree. The question is, do employers recognize and respect the value of these online offerings? Are online degrees respected – and seen as legitimate credentials – on a resume?
In short, the answer is yes.
Employers are recognizing the credibility of online degrees as their popularity grows.
The Growing Credibility of Online Degrees: By The Numbers
New federal statistics show that, during the 2019-20 academic year, roughly 52 percent of postsecondary students in the United States took at least one online course. This number does not include courses that were moved online on an emergency basis, due to the pandemic. In other words, more than half of college students chose to enroll in an online course. About 23 percent, or 5.8 million college students, were enrolled in a fully-online degree program that same year. This number is up from 15% of fully online college students in the year 2018.
With these figures in mind, the nation—including the employers and recruiters within it—is recognizing the value of online degree programs. On top of this, more colleges and universities have embraced online education. Today, some of the most trusted institutions offer online degrees, as well as hybrid degree programs, to provide students with flexibility.
This isn’t surprising, as research has shown the benefits of online learning in the past. One study from the U.S. Department of Education, last updated in 2010, found that online higher education is more effective than traditional, face-to-face learning alone. More notably, hybrid learning (a blend of online and on-campus courses) was found to be the most advantageous format for college students.
That same year, a survey from CareerBuilder.com found that 83% of executives believe “an online degree is as credible as one earned through a traditional campus-based program.” Employers also reported that certain factors make an online degree more credible, including:
Accreditation of the college, university, or program
The reputation of the institution awarding the degree
The quality of its graduates
It is normal for prospective students to have hesitations about earning a degree online. However, as the above statistics show, the nation is shifting its perspective of online learning as a whole. With the benefits of online degree programs clear, the respect of online colleges has grown substantially. As long as you choose an online school that is reputable, accessible, and supportive of its students, you can count on your future success.
What Do Employers Think of Online Degrees?
Pursuing a degree online does not mean you have to sacrifice a quality education. In fact, an online degree program can help you prepare for your career and provide you with invaluable skillsets outside of your core major. According to U.S. News, many employers find that graduates of online degree programs have strong time management skills, decision-making skills, and commitment to their field.
Below are just some examples of what employers might think of your online degree:
1. You have great time management skills.
Many students pursue an online degree because they have other priorities, such as a full-time job. Taking classes and working full-time, therefore, requires balance and good time management. Many employers will find this an attractive quality in candidates. Online degree holders have the ability to balance school alongside work and other obligations, and find success in their college coursework.
2. It shows you are a practical decision maker.
As noted above, students often choose an online degree because of the flexibility it provides. However, as you apply for jobs, you may be asked, “Why did you pursue a degree online?” This is your chance to highlight your rationale and in turn your decision-making skills. Did you choose to pursue an online degree because you wanted to maintain your career? Was it a financial decision? Was it because of family obligations? As cited by U.S. News, “Answering that question can reveal a candidate’s decision-making abilities, particularly about working in different types of settings.”
3. Many view online education as a doorway to new opportunities.
Online education can have its perks when applying for jobs, but it can also come into play in your current role. If you are in a career field that you love, but are looking to advance your title or skillsets, an online degree can be a great solution. And many employers agree. Employers recognize that online degree programs can help their employees further their education, enhance their career skills, and bring more to the table in their job. In fact, it’s reported that 60% of online college students had access to employer reimbursement for their tuition.
Despite the benefits above, it is unlikely that employers will make their decision based on whether a degree was earned in-person or online. In fact, many do not look at the format of degree at all, but rather the degree itself. Does the degree you earned apply to your field? Did it provide you with the skills, the knowledge, and the credentials needed to practice in your line of work? Was the degree program accredited? Was the degree earned from a reputable and trusted school? These are questions your employers will ask when assessing your education. This brings us to the next section:
What Should You Look for in an Online Degree Program?
As you evaluate your online degree options, there are certain qualities to look for to ensure that the program or school is legitimate. The following factors will help to ensure that your online degree will be credible, respected, and valued after graduation day.
1. Accreditation
Accreditation is the process in which an outside entity evaluates a school or program and ensures it meets set standards of quality and rigor. The accrediting body will assess a college’s success rates, faculty, curricula, and more to determine whether it is a high-grade institution. Both schools as a whole and individual programs (whether online, on-campus, or hybrid) can be accredited.
There is also regional and national accreditation. Generally speaking, regional accreditation is considered to be top-notch and is therefore most widely recognized. This is because regional accrediting bodies have more rigorous standards when evaluating colleges and universities. Regional accreditation is also important for transfer students, as credits easily transfer between regionally-accredited schools.
2. Student Success
If you are unsure about an online school or program, speak with their admissions team about student success, graduation, and job placement rates. You can also research what employers think of the school’s graduates. Additionally, ask the school about their student support and career services. Even if you are pursuing a fully online degree program, your institution should be there for you throughout your educational journey. This means guiding you through the process of online learning, job searching, and applying for potential careers. If your school does not offer support services, consider this a red flag.
3. On-Campus Options
Pursuing a fully online program is a great choice for many students who need flexibility. However, knowing that your college or university has a brick-and-mortar campus, as well, can be a source of comfort. According to Edsmart.org, schools with physical campuses are viewed as more credible, with a more widely known reputation, than fully online schools. Additionally, the campus option is a nice-to-have if you decide to pursue hybrid online/on-campus courses down the road.
4. Non-Profit College
Finally, consider whether your college or university is a for-profit or not-for-profit institution. Historically, there has been a great stigma associated with for-profit online schools. While they have been improving in recent years, for-profit colleges and universities (particularly those online) have faced criticism about low graduation rates, low quality standards, questionable admissions processes, and high student debt. With that in mind, it can help your marketability to choose a reputable, non-profit college or university that offers your desired online degree. Non-profit colleges and universities are also more likely to be regionally accredited, have strong student support services, and high student success rates.
At the end of the day, however, it’s most important to look for an online degree program that is accredited and credible, that aligns with your career goals, and that offers you the support you need. The best online degree for you will be one that meets all your needs, through graduation day. When you find this program, you can rest assured that your online degree will be respected and valued by employers, as well as yourself.
Goodwin University is a leader in online education, with fully online as well as hybrid degree programs available to students. Whether you are transferring schools, going back to school, or simply looking for a more flexible degree option, explore our online programs here. We understand students have other obligations. We understand you need a degree that works with your schedule, not against it. We believe you should not have to sacrifice a quality education for flexibility. You can earn a credible, respectable, official college education from the comfort of your own home.
Debunking 4 Myths About Online Learning
UMass Global is a fully accredited, private, nonprofit university designed for working adults seeking to improve their careers through education. With many programs offered online, UMass Global is here to help you reach your educational goals.
4 Lies you've heard about taking classes online
Online learning, distance education, eLearning — there are many names for taking classes online, but it’s an indisputable fact that the number of students choosing this modality has rapidly grown over the years. The National Center for Education Statistics (NCES) reports that while overall college enrollment dropped by almost 90,000 students from 2016 to 2017, the number of students who took at least some of their courses online grew by more than 350,000.
Fortunately, high-quality college courses have become more accessible than ever. Online learning has played an instrumental role in bringing world-class educational resources to those who can’t realistically commute to campus, to those who may feel more comfortable in a virtual learning environment and even to those who reside in developing nations.
Despite the fact that eLearning offers unmatched flexibility, many prospective students are still skeptical. Misconceptions related to distance education may be holding students like you back. Read on to discover the truth about taking classes online.
4 Common misconceptions about online learning
Best Colleges’ 2019 Trends in Online Education Report found that the primary concern online students have about participating in distance education was the quality of instruction and academic support. This was followed by a number of other concerns, such as worries about employers’ perception of online degrees and the supposed lack of community in a virtual environment.
We examined the research and looked to University of Massachusetts Global’s online learning resources to get to the bottom of these misconceptions.
In truth, distance learning can be challenging for those who aren’t used to autonomy. The most successful online learners often share the following abilities:
They know how to prioritize time management
They have the ability to stay motivated
They know when to ask for help
They are able to articulate themselves well through writing
They’re confident in their abilities as a student
They have basic technology skills
As long as students are able to keep up with the assigned coursework, the learning outcomes between online and face-to-face learning shouldn't differ. In fact, a recent study from the U.S. Department of Education found that students who received face-to-face instruction had no advantage over online learners; the online students actually exhibited modestly stronger learning outcomes than their classroom counterparts.
There are other insights we've learned over time as well. Consider that online learning often allows students to consume information in more digestible portions. That can make it easier to learn the material and understand how different concepts intertwine. And when technology is integrated into learning models, students are more likely to remain interested in the content, stay focused on their assignments and retain the information.
2. Your online degree will be less valued by potential employers
It’s clear that you stand to learn just as much in an online program as you would if you were attending courses on campus. Still, there’s a natural follow-up question: Will future employers take my online degree seriously? Completing an online program doesn’t matter to hiring managers as much as you might think.
The 2019 Trends in Online Education Report revealed that 38 percent of surveyed employers believe online learning is equal to on-campus learning. Only 10 percent believe it to be inferior. What’s more important to the hiring managers and human resources professionals who view your resume is that you received your degree from an accredited institution. Accreditation signifies that a college meets or exceeds the expected standards for a particular program, regardless of whether it’s on-campus or online.
The Best Colleges survey also revealed that graduates of online programs fare well in the workforce. Among surveyed graduates, 85 percent maintain that their online education yielded a positive return on investment (ROI). After their experiences with distance education, 89 percent would recommend it to others.
3. You’ll be isolated and forced to learn by yourself
When asked to envision the online classroom experience, many people conjure images of students sitting in front of their laptops at home with little to no interaction with their classmates or instructors.
The reality is that part of the surge in students enrolling in online programs is due to the flexibility they offer. Students can often get their work done when it best fits into their own schedules — even at their own pace for some programs. While that can result in more independent work time than a traditional classroom environment, you can still be a part of an interactive learning community when you study online.
Not all online courses are the same, but most of them will include an interactive component. This might be conducted through video conferencing or through discussion board participation. Both of these options allow students to engage with one another, bounce ideas off classmates, offer counterpoints and ask questions.
It’s also pretty common for online courses to include group projects that involve the use of tools like Google Docs or Zoom. This is an important aspect of eLearning that employers love, because collaborative communication in a digital environment has become a vital part of most industries.
If you ever find that you’re struggling or have additional questions regarding one of your lessons or assignments, online instructors at most institutions will hold virtual office hours. They make themselves available for students to reach out with questions or for additional resources as needed.
4. You won’t receive help when you need it
One of the top concerns students have is that an online classroom environment can’t offer sufficient academic support. Not all institutions are equally equipped to support online students. The key is finding a college or university that designed its programs with students like you in mind.
Online students also have access to multimedia writing and design support, which can help them discover innovative ways to present projects and other data to their classmates — and ultimately to their colleagues in the workforce. This resource enables students to browse through a curated collection of on-demand tutorials and other materials related to graphic design, videography, podcasting and more.
Keep in mind that most online programs are designed with the understanding that not every student is a tech whiz. If you find you’re struggling with one or more of the tools used in your virtual classroom, there should be technology support services readily available to help walk you through everything.
Find success taking classes online
The online learning experience may seem a little overwhelming at first, but examining the facts makes it clear that you could receive the same high-quality learning outcomes you’d expect from a traditional environment. And if you’re hoping to juggle your coursework with a number of other responsibilities and commitments, the flexibility offered by distance education is unmatched.
If you’re still unsure whether you’re cut out for the eLearning experience, you may want to learn more about what it takes to be successful. Find out if you have what it takes by heading over to our article “6 Signs you’re ready to conquer the online classroom.”
3 essential factors employers say make business school graduates highly competitive
According to more than 1,000 corporate recruiters and staffing firms worldwide, business schools are on the right track with the skills they are developing among their students—with some exceptions.
For more than two decades, the Corporate Recruiters Survey from the Graduate Management Admission Council™ (GMAC™) has provided the world's graduate business schools and employers with data and insights to understand current trends in hiring, compensation, skill demand, and perceptions of MBA and business master's graduates.
The Corporate Recruiters Survey – 2023 Summary Report explores which skills employers think will characterise the future workplace—and how prepared they view graduate management education (GME) candidates to be. The report also examines how macroeconomic conditions are influencing hiring and salary decisions across industries and around the globe.
GMAC, together with survey partners European Foundation for Management Development (EFMD) and the MBA Career Services and Employer Alliance (MBA CSEA), conducted the survey from January to March of 2023 in association with the career services offices at participating graduate business schools worldwide.
Despite reported macroeconomic headwinds, employers remain confident in graduate business school’s capabilities to prepare future business leaders. Here are three key takeaways from employers to ensure graduate business schools are equipping their graduates with the skills for the workplace of tomorrow.
Employers say communication, data analysis, and strategy are currently among the most important skills for GME graduates—and most say their importance will continue to grow.
In the next five years, employers think their organisations will look increasingly global, hybrid, and dependent on different mediums of effective communication across cultures. To help GME graduates succeed, employers don't think business schools need to start teaching a completely different set of skills. Instead, most say currently valuable skills like communication, data analysis, and strategy will grow in importance for GME graduates in the next five years.
Given the current and future importance of communications and its wide range of associated skills, the survey digs deeper into what specifically recruiters are looking for. If a respondent indicated communication is an important skill for current GME graduates, they were asked to evaluate the current and future importance of more specific communication skills ranging from active listening to negotiating to writing.
Eighty-one per cent of these employers cited cross-cultural competence as becoming much or slightly more important in the next five years; 77% cited multilingualism; and 75% cited active listening.
The growing importance of communication skills like these indicates that employers consider the future workplace to be more intercultural and dependent on different mediums of effective communication.
To gain more insights into the specific technology skills valued by employers, the survey asked the global employers who indicated technology, software, and programming are currently important about more specific tech capabilities.
Eighty per cent of these employers cited Web3, blockchain, and virtual reality (VR) as becoming much or slightly more important; 75% cited cloud-based technology; and 74% selected artificial intelligence (AI) and machine learning.
There was a small difference in how tech-concerned employers view the future importance of these specific skills, meaning there is an opportunity for business schools to cultivate a wide range of technological talents among GME graduates.
Finance and tech employers have some concerns about GME graduates’ preparedness to thrive in the future intercultural workplace, but consulting employers feel more confident.
According to GMAC’s Prospective Students Survey, consulting, technology, and finance/accounting are the most desired industries to work in after graduation. After the Corporate Recruiter Survey respondents indicated which specific communication skills are currently important to GME graduates, they were asked about the future importance of the selected skill and GME graduates’ preparedness to use the skill in the workforce.
Less than half of communication-concerned finance and accounting employers believe GME graduates are currently prepared to leverage skills they say are growing in importance to the future intercultural and hybrid workplace, such as cross-cultural competence, multilingualism, and active listening.
Employers in the technology and consulting sectors similarly cited the growing importance of cross-cultural competence and multilingualism; fortunately, they were a bit more optimistic than their finance and accounting counterparts on GME graduates’ preparedness.
More than half of communication-concerned tech sector employers believe candidates are adequately or very well prepared with multilingual skills, though slightly fewer than half say their cross-cultural competence is sufficient.
Seventy-eight per cent of these consulting employers say GME graduates are very well or adequately prepared with their multilingual skills, and 66% say their cross-cultural competence is up to par.
Regionally, some Western European employers questioned GME graduates’ cross-cultural competence. US recruiters were the most likely to say GME graduates could be better prepared across a range of skills to meet specific communication or technological needs within their organisations.
Employers continue to value talent from in-person programmes over those with online degrees or micro-credentials only, though perceptions vary by region.
About half of employers globally view online and in-person degrees equally, though most tend to believe employees from in-person programmes have stronger leadership, communication, and technical skills.
This indicates that some employers value online and in-person degrees equally but think in-person degrees equip graduates with stronger skills. This inconsistency between the overall perception of online degrees and the skills they impart is observed at the regional level, too.
Employers from Africa, Latin America, the Middle East, and Western Europe generally do not statistically deviate from the global trend—though Western European employers are more likely to be ambivalent about whether a candidate’s communication or leadership skills are developed in online or in-person programs.
The standing of online degrees remains lowest in the US—only 27% of American recruiters say their organisation views in-person and online degrees equally.
But while US employers are less likely to say they value in-person and online degrees equally, they tend to be ambivalent about the source of talent’s technical skills.
In Asia, the opposite is true—employers say they value online and in-person degrees equally.
Still, they also believe in-person programmes equip their graduates with better leadership, communication, and technical skills. This is an opportunity for graduates of online degrees to talk about their credentials differently depending on the employer—employers in Asia are more likely to value the degree itself. In contrast, US employers would rather hear about specific skills candidates attained.
Likewise, employers worldwide believe that candidates with a GME degree are more likely to be successful in their organisation than employees with micro-credentials only. This means micro-credentials in and of themselves are less likely to impress employers compared to GME degrees. However, similar to online degrees, some employers may appreciate the skills that can be developed when pursuing micro-credentials.
In summary, the overall picture for candidate preparedness for the future intercultural and hybrid workplace is positive.
At a global level, most employers think GME graduates are prepared to deliver the most important communication and technology skills of the future. As they predict growth in communication and technology needs, business schools can deepen their students’ capacity to understand and communicate within the changing conditions of the future workplace—even if many of the skills they need to wield are tried and true.
Read the full report for more employer perspectives, actionable insights for programme and recruitment consideration, and regional and degree snapshots of hiring and salary trends.
| yes |
Online Learning | Are online degrees valued less by employers? | yes_statement | "online" "degrees" are "valued" less by "employers".. "employers" "value" "online" "degrees" less. | https://www.brookings.edu/articles/a-silver-lining-for-online-higher-education/ | A silver lining for online higher education? | Brookings | David Figlio, Dean - The School of Education and Social Policy at Northwestern University
The recent explosion of computing capacity and speed, coupled with the rapidly-rising cost of higher education, have created a “perfect storm” in which traditional institutions are more keen to offer classes online than ever before and where demand for online higher education is high. Meanwhile, the 2006 decision by the U.S. Department of Education (ED) to lift the “50 percent rule” of the Higher Education Act of 1992, thereby allowing institutions offering more than half of all classes through distance education to distribute Title IV student aid, has permitted online only postsecondary institutions to proliferate.[i] By 2013, according to calculations from the ED’s Integrated Postsecondary Education Data System, 11 percent of all U.S. undergraduate degree-seeking students studied in online-only programs, and 27 percent of students took at least some of their classes online.[ii]
Online delivery is clearly a major part of American higher education today, so it is important to know whether online education helps or hurts the students it serves, and whether society benefits from its ubiquity. Deming and collaborators showed that prices are lower in online higher education: The higher the share of students taking all courses online, the lower the tuition and fees charged to students in both public and private-sector institutions.[iii] (Higher shares of students taking some of their courses online do not appear to be associated with lower tuition and fees.)
But is the quality of the online coursework equivalent to that which students would have experienced in face-to-face instruction? It is certainly possible to leverage online or hybrid platforms to deliver remarkably rich and deep higher education; indeed, some of my Northwestern University colleagues and others around the country are working on exactly these types of curricular innovations. However, these innovative uses of technology are by and large not what we’re talking about when we think about the proliferation of low-cost online education circa 2016. High-quality online courses are expensive to deliver—at least as expensive, if not more, to develop and staff than traditional face-to-face instruction.[iv]
The labor market seems to recognize some of the shortcomings of online education. A recent field experiment created fictitious resumes and randomly assigned degrees from for-profit online institutions and nonselective public institutions to “applicants” for real-world position vacancies in business and health fields.[v] Deming, together with a different set of collaborators, found that employers were dramatically less likely to call back applicants with a business bachelor’s degree from a for-profit online institution than those with a degree from a nonselective public institution. For jobs in healthcare, employers were substantially less likely to call back applicants with credentials from a for-profit online institution than those from a public institution—but, importantly, only in cases where the job doesn’t require an external indicator of quality such as a professional license. In cases where this external credential is necessary, the gap in the callback rate between those with for-profit online and nonselective public credentials was dramatically smaller. Taken together, these results suggest that employers are still wary about the skills of those with online higher education credentials (at least those from for-profit institutions).
The employers contacted in that field experiment might be worried about differential selection of students into online education, or they might be concerned about pedagogical shortcomings of online education, or both. There is a growing literature on the second point, and the evidence from experimental studies of online versus face-to-face delivery of the types of courses that are widespread suggests that online delivery may be inferior to face-to-face delivery of the same classes, at least for vulnerable populations.
In my own joint research with Rush and Yin, students in a large introductory microeconomics course at a major public research institution were randomly assigned to face-to-face and online-only settings.[vi] In general, those receiving online instruction fared worse than those receiving face-to-face instruction, but the differences were modest. For Hispanic students, male students, and relatively poorly-prepared students, however, the online disadvantage was considerably greater. Another study at a selective public research university by Alpert, Couch, and Harmon—this time, assigning students at the point of expressing interest in enrolling in the course rather than following enrollment—also showed that students assigned to an online-only introductory microeconomics course performed worse than those assigned to face-to-face instruction.[vii] The institution in the Alpert study also randomly assigned some students to a hybrid environment in which face-to-face instruction was halved, and in the hybrid case there appeared to be little reduction in learning outcomes relative to those in face-to-face-only instruction.
A third study of an introductory statistics course offered at six public universities by Bowen and collaborators also found that students taking a hybrid online/face-to-face course fared no worse than those taking the same course in a face-to-face environment,[viii] but another study by Joyce and collaborators investigating hybrid courses versus face-to-face courses in introductory microeconomics at a selective public institution found evidence of modest performance reductions in the hybrid case.[ix] All of these experiments concern performance in the course in question; another recent study at a selective public university demonstrates that students also perform worse in follow-on courses when their prerequisite course was taken online versus face-to-face.[x] The pattern of reduced outcomes in online-only courses versus face-to-face courses is also present in community colleges,[xi] and, as found by Hart and collaborators, there exists dramatic variation in the (generally negative) effects of online course-taking across different California community colleges.[xii] The patterns are similar in the private sector too: On average, as found by Bettinger and collaborators, DeVry University students perform worse in online settings than in face-to-face instruction.[xiii]
The weight of the evidence, therefore, presents a modestly pessimistic picture of the likely outcomes of students enrolled in online-only instruction. Online delivery, while providing opportunities for increased flexibility and personalization of education, might also exacerbate some of the behavioral barriers that many students face in navigating college. The magnitudes are not trivial: they imply that students who take only online courses would have grade point averages about ten percentage points lower in the distribution than those who took only face-to-face courses, all else equal. But there are, of course, other arguments for online education, even if the quality of this education may be at times inferior.
For instance, it’s possible that online education teaches students skills that are valued in the labor market but are not as often conferred by face-to-face instruction. Right now, we don’t know the answer to that question.
In addition, online programs may increase access to higher education for students for whom—due to spatial or time constraints—traditional face-to-face instruction may be prohibitive. It’s possible that if a very well-regarded institution provides an online-only degree program, that program might substantially increase access while at the same time conferring reputational quality signals that are highly valued in the labor market.
A recent entry into the market for graduate education in computer science provides an opportunity to investigate this possibility in part. In spring 2014, Georgia Tech’s esteemed computer science department began enrolling students in a fully-online version of its top-ranked master’s degree program. Tuition for this program is just one-sixth of what out-of-state students typically pay for the face-to-face version of the program, and the degree conferred is publicly identical to the face-to-face degree.
A just-released study by Goodman, Melkers, and Pallais investigated whether this program increased access to computer science master’s degrees, and the answer appears to be a resounding yes.[xiv] Importantly, they could find no evidence that this program crowded out enrollment elsewhere, suggesting that this program was really satisfying largely unmet demand for graduate work in computer science. They found that the program principally served mid-career Americans, and suggested that this program alone will increase the annual number of American computer science master’s degrees by seven percent. While it’s impossible to know at present whether Georgia Tech’s online-only instruction is as high-quality as its face-to-face instruction—and this is something that, given the extant literature at the undergraduate level, may still be a real concern—this study makes clear that one of the promises of online education (expansion of access to satisfy unmet demand) may well be realized if enough highly-regarded institutions enter the online education marketplace.
Another possible benefit of online education is its potential to provide competition for incumbent educational institutions. There is reason to believe that this might be the case: for instance, in prior research at the K-12 level, Hart and I found that offering school vouchers to economically disadvantaged families improved the productivity of traditional public schools.[xv]
Another just-released study considered whether postsecondary institutions became more efficient following the 2006 ED regulatory change that facilitated the rapid expansion of online-only institutions.[xvi] Deming, Lovenheim, and Patterson compared the post-2006 productivity of local postsecondary institutions with low pre-2006 competition levels to those with higher pre-2006 competition levels. They found evidence the online competition led institutions to shift resources toward instructional expenditures, at least in the public sector and four-year schools, suggesting that online institutions modestly improved the productivity of brick-and-mortar institutions. Interestingly, the authors did not find evidence that incumbent institutions compete on price—indeed, they estimated that tuition increases with online competition. The authors suggested that competition on quality may be more salient than price competition, at least at the tuition levels in question, in this market.
Online higher education is expanding rapidly. So what is the evidence regarding this sector?
Students tend to learn less from online courses than they do from equivalent courses with at least some face-to-face content—especially those from vulnerable populations.
The labor market appears to value online degrees less than it does face-to-face degrees from nonselective public institutions.
A highly-regarded institution can substantially increase access with online offerings.
Therefore, while the overall picture regarding online education is mixed, these new papers present some cause for optimism, especially if we can figure out ways to successfully monitor and certify the quality of online education. However, as Deming and I point out in a recent essay, providing accountability for higher education institutions is easier said than done.[xvii]
| yes |
Online Learning | Are online degrees valued less by employers? | no_statement | "online" "degrees" are not "valued" less by "employers".. "employers" do not "value" "online" "degrees" less. | https://www.nber.org/digest/mar15/value-postsecondary-credentials-labor-market | The Value of Postsecondary Credentials in the Labor Market | NBER | The Value of Postsecondary Credentials in the Labor Market
There is little evidence that obtaining credentials from for-profit institutions improves the job prospects of workers who would otherwise not attend college.
Employers looking at applicants with bachelor's degrees in business are 22 percent less likely to call back graduates from for-profit online schools than those from non-selective public institutions, according to a new study comparing employer perceptions of public and for-profit institutions of higher education.
"These online, for-profit colleges have been responsible for 21 percent of the growth in all bachelor's degrees and 33 percent of the growth in bachelor's degrees in business over the last decade," write David J. Deming, Noam Yuchtman, Amira Abulafi, Claudia Goldin, and Lawrence F. Katz, authors of The Value of Postsecondary Credentials in the Labor Market: An Experimental Study (NBER Working Paper No. 20528). "Yet it is precisely the bachelor's degrees granted by the fastest-growing set of institutions that are associated with the worst callback outcomes for jobs requiring a bachelor's degree."
Between April and November 2014, the researchers used a large employment website to send 10,492 hypothetical résumés to employers posting business and healthcare-related openings in five major cities: Chicago, Los Angeles, Miami, New York, and the San Francisco Bay Area. For business positions requiring a bachelor's degree, the researchers sent four otherwise identical résumés for each position: two from local public universities varying by selectivity, one from a local for-profit institution and one from a national, online, for-profit school. The authors took an analogous approach in applying to business jobs not requiring a bachelor's degree and to healthcare sector positions. Of all résumés sent to job openings, 8.2 percent received a callback.
The rate of callbacks depended on the type of job being offered. For example, the preference for graduates of public universities over for-profits was clear for business job openings requiring a bachelor's degree. On average, 8.5 percent of applicants with a degree from a public institution received callbacks, in contrast to only about 6.3 percent of applicants with degrees from online for-profit schools. There was some variation among the for-profit schools, too, with employers preferring locally operated for-profit schools (7.8 percent callback rate) to the online for-profits.
For business job openings not requiring a bachelor's degree, there was no statistically significant difference by sector of postsecondary credential, and less than one percentage point difference for online for-profit bachelor's degree-holders relative to those with no postsecondary degree.
Employers offering jobs in the healthcare sector not requiring a certificate were 57 percent less likely to call back applicants with a certificate from a for-profit school than from a community college. For health jobs that did not require a credential, 8.9 percent of applicants with a public certificate got callbacks, compared with 4.2 percent of those with a for-profit certificate and 5.9 percent for those with no postsecondary degree. For jobs requiring a certificate, public certificates modestly outperformed for-profit ones, 5.8 percent to 4.9 percent.
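As a side note for readers who want to reproduce the relative comparisons, the gaps can be approximated directly from the raw callback rates quoted above. The short sketch below is illustrative arithmetic only; the digest's headline figures (22 percent and 57 percent less likely) presumably reflect the study's own adjusted estimates, so these simple ratios land close to, but not exactly on, those numbers.

```python
# Illustrative arithmetic only: relative callback gaps computed from the raw
# rates quoted in this digest, not from the study's adjusted estimates.

def relative_gap(public_rate: float, for_profit_rate: float) -> float:
    """How much lower the for-profit callback rate is, as a share of the public rate."""
    return (public_rate - for_profit_rate) / public_rate

# Business openings requiring a bachelor's degree: 8.5% vs 6.3% callbacks.
business_ba = relative_gap(0.085, 0.063)

# Health openings not requiring a certificate: 8.9% vs 4.2% callbacks.
health_no_cert = relative_gap(0.089, 0.042)

print(f"Business BA, for-profit online: ~{business_ba:.0%} fewer callbacks")      # ~26%
print(f"Health, no certificate required: ~{health_no_cert:.0%} fewer callbacks")  # ~53%
```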
The study found that having a B.A. degree from a selective public university did not generate more callbacks, on average, than a degree from a non-selective public school. In fact, for low-paid business vacancies, the callback rate was modestly lower for the group with degrees from selective institutions. But for high-paid jobs, it was significantly higher. The authors conclude this indicates that "employers value both college quality and the likelihood of a successful match when contacting job applicants."
"Because yearly tuition at a for-profit college typically greatly exceeds that at a public university and for-profit degrees seem to be less valued by employers, the for-profit degree appears to be the less attractive investment," according to the authors. They note, however, that public colleges are often overcrowded and that for-profits may be able to more rapidly move into expanding fields not well-served by public institutions. In that case, the most appropriate comparison would be between a for-profit credential and no post-secondary credential. The study findings, however, provide little evidence that obtaining a for-profit credential will improve the job prospects of workers who would otherwise not attend college at all.
| yes
Agribusiness | Are palm oils bad for the environment? | yes_statement | "palm" "oils" have a negative impact on the "environment".. the "environment" is harmed by the use of "palm" "oils". | https://www.wwf.org.uk/updates/8-things-know-about-palm-oil | 8 things to know about palm oil | WWF | 1. What is palm oil?
It’s an edible vegetable oil that comes from the fruit of oil palm trees, whose scientific name is Elaeis guineensis. Two types of oil can be produced: crude palm oil, which comes from squeezing the fleshy fruit, and palm kernel oil, which comes from crushing the kernel, or the stone in the middle of the fruit. Oil palm trees are native to Africa but were brought to South-East Asia just over 100 years ago as an ornamental tree crop. Now, Indonesia and Malaysia make up over 85% of global supply, but there are 42 other countries that also produce palm oil.
2. What products is it in?
Palm oil is in nearly everything – it’s in close to 50% of the packaged products we find in supermarkets, everything from pizza, doughnuts and chocolate, to deodorant, shampoo, toothpaste and lipstick. It’s also used in animal feed and as a biofuel in many parts of the world (not in the UK though!).
3. Why is palm oil everywhere?
Palm oil is an extremely versatile oil that has many different properties and functions that makes it so useful and so widely used. It is semi-solid at room temperature so can keep spreads spreadable; it is resistant to oxidation so can give products a longer shelf-life; it’s stable at high temperatures so helps to give fried products a crispy and crunchy texture; and it’s also odourless and colourless so doesn’t alter the look or smell of food products. In Asian and African countries, palm oil is used widely as a cooking oil, just like we might use sunflower or olive oil here in the UK.
As well as being versatile, compared to other vegetable oils the oil palm is a very efficient crop, able to produce high quantities of oil over small areas of land, almost all year round. This makes it an attractive crop for growers and smallholders, who can rely on the steady income that palm oil provides.
4. What is the problem with palm oil?
Palm oil has been and continues to be a major driver of deforestation of some of the world’s most biodiverse forests, destroying the habitat of already endangered species like the Orangutan, pygmy elephant and Sumatran rhino. This forest loss, coupled with the conversion of carbon-rich peat soils, is throwing out millions of tonnes of greenhouse gases into the atmosphere and contributing to climate change. There also remains some exploitation of workers and child labour. These are serious issues that the whole palm oil sector needs to step up to address because it doesn’t have to be this way.
5. What solutions are there?
Palm oil can be produced more sustainably and there is a role for companies, governments, and consumers to play. The Roundtable on Sustainable Palm Oil or RSPO was formed in 2004 in response to increasing concerns about the impacts palm oil was having on the environment and on society. The RSPO has production standards for growers that set best practices for producing and sourcing palm oil, and it has the buy-in of most of the global industry. The RSPO encourages companies to:
Set robust policies to remove deforestation, conversion of other natural ecosystems, such as peatlands, and human rights abuses from their supply chains
Buy and use RSPO certified palm oil across their operations globally
Be transparent in their use and sourcing of palm oil ensuring they know who they are buying from and where it’s been produced
It is important that the palm oil industry continues to invest in and grow support for smallholder programmes and sustainable landscape initiatives. WWF is also working with governments in both palm oil-using and palm oil-producing countries to make sure that national laws are in place to ensure that any palm oil traded is free of deforestation, conversion and exploitation.
6. Why don’t we just switch to an alternative vegetable oil?
Palm oil is an incredibly efficient crop, producing more oil per land area than any other equivalent vegetable oil crop. Globally, palm oil supplies 40% of the world’s vegetable oil demand on just under 6% of the land used to produce all vegetable oils. To get the same amount of alternative oils like soybean, coconut, or sunflower oil you would need anything between 4 and 10 times more land, which would just shift the problem to other parts of the world and threaten other habitats, species and communities. Furthermore, there are millions of smallholder farmers who depend on producing palm oil for their livelihoods. Boycotting palm oil is not the answer. Instead, we need to demand more action to tackle the issues and go further and faster.
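To make the land-use comparison concrete, here is a minimal back-of-the-envelope sketch. The per-hectare yields are assumptions chosen only to be consistent with the "4 and 10 times more land" range stated above; they are not WWF figures.

```python
# Back-of-the-envelope land-use comparison. The yields below are assumptions
# consistent with the "4 and 10 times more land" range quoted above, not WWF data.

ASSUMED_YIELD_T_PER_HA = {
    "oil palm": 3.5,    # assumed tonnes of oil per hectare per year
    "rapeseed": 0.8,
    "sunflower": 0.7,
    "soybean": 0.4,
}

def hectares_needed(oil_tonnes: float, crop: str) -> float:
    """Hectares required to produce a given quantity of oil from one crop."""
    return oil_tonnes / ASSUMED_YIELD_T_PER_HA[crop]

demand_tonnes = 1_000_000  # an arbitrary illustrative quantity of oil
palm_area = hectares_needed(demand_tonnes, "oil palm")

for crop in ("rapeseed", "sunflower", "soybean"):
    ratio = hectares_needed(demand_tonnes, crop) / palm_area
    print(f"{crop}: ~{ratio:.1f}x the land area oil palm would need")
```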
7. Can I trust RSPO certified products?
The RSPO is the global standard for the sustainable production of palm oil. When palm oil is produced in adherence to RSPO standards, growers help to protect the environment and the local communities who depend on the crop for their livelihoods, so that palm oil can continue to play a key role in food security, economic development, and food supply chains. We should continue to use RSPO certified sustainable palm oil in products, as replacing it would result in more deforestation and natural habitat conversion. RSPO certified products that use palm oil from ‘Segregated’ or ‘Identity Preserved’ supply chains offer the greatest assurance of sustainable palm oil.
Along with other organisations, WWF plays an active role in influencing and shaping the RSPO standard to make sure it puts in place more safeguards for people and the planet. In November 2018, the RSPO standard was strengthened and it now represents an essential tool that can help companies achieve their commitments to palm oil that is free of deforestation, conversion of other natural habitats like peatlands, and the exploitation of people.
8. What is being done in the UK?
In 2012, the UK Government recognised that we were part of the palm oil problem and could also be part of the solution. They set a commitment for 100% of the palm oil used in the UK to be from sustainable sources that don’t harm nature or people. In 2019, 70% of the total palm oil imports to the UK were sustainable. This is great progress but there is more to be done to get to 100%.
An area that represents a substantial gap in the uptake of certified sustainable palm oil is the use of palm-derived ingredients in animal feed – for chickens, pigs and cows, for example. Much of this palm oil material is unlikely to be certified. This area requires much stronger transparency and ambition from the UK industry, and is going to be critical over the coming years if we are to truly tackle the UK’s palm oil footprint.
What else can I do?
We need systemic change to fix our destructive food system and we can’t rely on voluntary business commitments. Join us in calling for the UK Government to deliver on its promise to make sure UK products aren’t contributing to the destruction of forests like the Amazon and in Borneo.
| yes
Agribusiness | Are palm oils bad for the environment? | yes_statement | "palm" "oils" have a negative impact on the "environment".. the "environment" is harmed by the use of "palm" "oils". | https://www.worldwildlife.org/industries/palm-oil | What is Palm Oil? Facts About the Palm Oil Industry | Palm Oil
Overview
Grown only in the tropics, the oil palm tree produces high-quality oil used primarily for cooking in developing countries. It is also used in food products, detergents, cosmetics and, to a small extent, biofuel. Palm oil is a small ingredient in the U.S. diet, but more than half of all packaged products Americans consume contain palm oil—it’s found in lipstick, soaps, detergents and even ice cream.
Palm oil is a very productive crop. It offers a far greater yield at a lower cost of production than other vegetable oils. Global production of and demand for palm oil is increasing rapidly. Plantations are spreading across Asia, Africa and Latin America. But such expansion comes at the expense of tropical forests—which form critical habitats for many endangered species and a lifeline for some human communities.
WWF envisions a global marketplace based on socially acceptable and environment-friendly production and sourcing of palm oil. We aim to encourage increased demand for, and use of, goods produced using such practices.
A new WWF report on global forest cover and forest loss finds that over 160,000 square miles, an area roughly the size of California, were lost in deforestation hot spots around the world between 2004 and 2017. Deforestation puts human health and the health of our planet at risk.
Impacts
Large areas of tropical forests and other ecosystems with high conservation values have been cleared to make room for vast monoculture oil palm plantations. This clearing has destroyed critical habitat for many endangered species—including rhinos, elephants and tigers. Burning forests to make room for the crop is also a major source of greenhouse gas emissions. Intensive cultivation methods result in soil pollution and erosion and water contamination.
Large-scale forest conversion
Many vast monocrop oil palm plantations have displaced tropical forests across Asia, Latin America and West Africa. Around 90% of the world's oil palm trees are grown on a few islands in Malaysia and Indonesia – islands with the most biodiverse tropical forests found on Earth. In these places, there is a direct relationship between the growth of oil palm estates and deforestation.
Loss of critical habitat for endangered species
Large-scale conversion of tropical forests to oil palm plantations has a devastating impact on a huge number of plant and animal species. Oil palm production also leads to an increase in human-wildlife conflict as populations of large animals are squeezed into increasingly isolated fragments of natural habitat. The habitats destroyed frequently contain rare and endangered species or serve as wildlife corridors between areas of genetic diversity. Even national parks have been severely impacted. Forty-three percent of Tesso Nilo National Park in Sumatra—which was established to provide habitat for the endangered Sumatran Tiger—has now been overrun with illegal palm oil plantings.
Burning is a common method for clearing vegetation in natural forests as well as within oil palm plantations. The burning of forests releases smoke and carbon dioxide into the atmosphere, polluting the air and contributing to climate change. Fires in peat areas are particularly difficult to put out. The smoke and haze from these blazes have health consequences throughout Southeast Asia.
A palm oil mill generates 2.5 metric tons of effluent for every metric ton of palm oil it produces. Direct release of this effluent can cause freshwater pollution, which affects downstream biodiversity and people. While oil palm plantations are not large users of pesticides and fertilizers overall, the indiscriminate application of these materials can pollute surface and groundwater sources.
Erosion occurs when forests are being cleared to establish plantations, and can also be caused by planting trees in inappropriate arrangements. The main cause of erosion is the planting of oil palms on steep slopes. Erosion causes increased flooding and silt deposits in rivers and ports. Eroded areas require more fertilizer and other inputs, including repair of roads and other infrastructure.
The practice of draining and converting tropical peat forests in Indonesia is particularly damaging, as these "carbon sinks" store more carbon per unit area than any other ecosystem in the world. Additionally, forest fires used to clear vegetation in the establishment of oil palm plantations are a source of carbon dioxide that contributes to climate change. Due to its high deforestation rate, Indonesia is the third-largest global emitter of greenhouse gasses.
What WWF Is Doing
With better management practices, the palm oil industry could provide benefits without threatening some of our most breathtaking natural treasures. Oil palm plantations can stop operating at the expense of rainforests by applying stringent production criteria to all stages of palm oil manufacture.
WWF works on a number of fronts to achieve this, including:
Defining, implementing and promoting better practices for sustainable palm oil production through our Roundtable on Sustainable Palm Oil (RSPO), which is a large, international group of palm oil producers, palm oil buyers, and environmental and social groups
Encouraging companies to use certified sustainable palm oil in the products they make and sell
Eliminating incentives for palm oil production that lead to the destruction of forests
| yes
Agribusiness | Are palm oils bad for the environment? | yes_statement | "palm" "oils" have a negative impact on the "environment".. the "environment" is harmed by the use of "palm" "oils". | https://www.iucn.org/resources/issues-brief/palm-oil-and-biodiversity | Palm oil and biodiversity - resource | IUCN | What is the issue?
Palm oil is derived from the oil palm tree (Elaeis guineensis Jacq.), which is native to West Africa and grows best in tropical climates with abundant water. Three-quarters of total palm oil produced is used for food, particularly cooking oil and processed oils and fats. It is also used in cosmetics, cleaning products and biofuel.
Between 1980 and 2014, global palm oil production increased by a factor of 15, from 4.5 million tonnes to 70 million tonnes. This was driven by the high yield and relatively low production costs of palm oil. Industrial-scale oil palm plantations now occupy an area of 18.7 million hectares worldwide (as of October 2017), with smallholder oil palm plantations also occupying a significant area. Palm oil demand is expected to grow at 1.7% per year until 2050.
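For readers who want to see what the quoted growth rate implies, a minimal compound-growth sketch follows. It simply rolls the 2014 production figure forward at 1.7% per year; this is illustrative arithmetic, not an IUCN projection, and it assumes production tracks demand.

```python
# Illustrative compound growth using the figures quoted above: ~70 million
# tonnes produced in 2014, with demand growing ~1.7% per year until 2050.
# Assumes production simply tracks demand; this is not an IUCN projection.

def project(base_million_tonnes: float, annual_growth: float, years: int) -> float:
    """Simple compound growth: base * (1 + g) ** years."""
    return base_million_tonnes * (1 + annual_growth) ** years

projected_2050 = project(base_million_tonnes=70.0, annual_growth=0.017, years=2050 - 2014)
print(f"Implied 2050 production: ~{projected_2050:.0f} million tonnes")  # roughly 128
```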
Most (85%) of global palm oil supply comes from Indonesia and Malaysia, followed by Thailand, Colombia and Nigeria. The bulk of palm oil produced in these countries is exported to the EU, China, India, the US, Japan and Pakistan.
Oil palm produces about 35% of all vegetable oil on less than 10% of the land allocated to oil crops.
Oil palm expansion is a major driver of deforestation and degradation of natural habitats in parts of tropical Asia and Central and South America, behind cattle ranching and local and subsistence agriculture. On the island of Borneo, at least 50% of all deforestation between 2005 and 2015 was related to oil palm development.
The tropical areas suitable for oil palm plantations are particularly rich in biodiversity. Oil palm development, therefore, has significant negative impacts on global biodiversity, as it often replaces tropical forests and other species-rich habitats. Globally palm oil production is affecting at least 193 threatened species, according to The IUCN Red List of Threatened SpeciesTM. It has been estimated that oil palm expansion could affect 54% of all threatened mammals and 64% of all threatened birds globally. It also reduces the diversity and abundance of most native species. For example, it has played a major role in the decline in species such as orangutans and tigers.
Some 10,000 of the estimated 75,000–100,000 Critically Endangered Bornean orangutans are currently found in areas allocated to oil palm. Every year around 750 to 1,250 of the species are killed during human-orangutan conflicts, which are often linked to expanding agriculture. A small number of species can benefit from the presence of oil palm plantations, including species of wild pig, rodents and some snakes.
Why is it important?
Currently about half the people in the world rely on palm oil as part of their diets and it is the dominant oil used in food in Africa and Asia. As the global population grows, palm oil’s role in meeting global food demand will increase.
Oil palm plantations provide jobs and drive national economic development. The industry is an important source of employment in Indonesia and Malaysia. It also contributes to the development of remote areas via provision of infrastructure including roads, hospitals and schools.
However, the way plantations are currently established and managed is damaging to the environment. The expansion of oil palm plantations into natural areas is responsible for greenhouse gas emissions from deforestation and peat drainage, and contributes to regional smoke haze and water pollution. Further expansion of the area occupied by oil palms would most likely occur in Africa and South America, where potential plantation sites are particularly rich in biodiversity.
The oil palm industry also often has negative impacts on local communities. Some communities suffer economically from oil palm development because their loss of access to forests is not sufficiently compensated by economic gains from oil palm cultivation. Human-wildlife conflict often increases with the displacement of species such as orangutans and tigers when forests are cleared for oil palm, resulting in human and animal casualties. Because of high labour requirements, palm oil expansion can also lead to labour shortages for local food production, and labour in-migration from lower income countries or regions.
What can be done?
Palm oil needs to be produced more sustainably. A simple shift from palm oil to other oil crops is not a solution as it may lead to further biodiversity loss. Oil palm produces up to nine times more oil per unit area than other major oil crops, and can help meet global demand for vegetable oils that is estimated to grow from an annual 165 million tonnes now to 310 million tonnes in 2050.
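The same yield logic can be applied to the projected growth in demand mentioned above. In the sketch below, the two per-hectare yields are assumptions picked only to respect the "up to nine times" ratio; the demand figures come from the paragraph above.

```python
# Rough illustration of the substitution arithmetic. Yields are assumptions
# consistent with the "up to nine times more oil per unit area" figure above;
# the demand growth (165 -> 310 million tonnes by 2050) is quoted in the text.

extra_demand_mt = 310 - 165        # additional annual demand, in million tonnes

palm_yield_t_per_ha = 3.6          # assumed
alternative_yield_t_per_ha = 0.4   # assumed, roughly nine times lower

def land_needed_mha(demand_mt: float, yield_t_per_ha: float) -> float:
    """Million hectares needed to supply `demand_mt` million tonnes of oil."""
    return demand_mt / yield_t_per_ha  # (million t) / (t per ha) = million ha

print(f"Extra land if met with oil palm:         ~{land_needed_mha(extra_demand_mt, palm_yield_t_per_ha):.0f} million ha")
print(f"Extra land if met with a low-yield crop: ~{land_needed_mha(extra_demand_mt, alternative_yield_t_per_ha):.0f} million ha")
```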
Banning palm oil could result in diminished efforts to produce palm oil sustainably, and an increase in land used for producing other oils (mostly soy, sunflower and rapeseed) which is likely to shift biodiversity impacts to regions where those oils are produced.
To mitigate biodiversity loss, effective policies and programs are needed to stop the clearing of native tropical forests for new oil palm plantations. This includes policies which limit demand for palm oil for non-food uses (such as the new European Union policies limiting the use of palm oil for biofuel) or which protect forests and other ecosystems in producer countries. Importing country policies need to apply to all vegetable oils, not just palm oil, and must minimise the environmental cost of producing these vegetable oils. Policies in producing countries need to ensure that the production of palm oil abides by national laws and international conventions aimed at avoiding negative environmental impacts, such as the UN Convention on Biological Diversity.
In existing oil palm plantations, producers should also manage their land more responsibly to reduce impacts on biodiversity. Currently, producers mainly do this by setting aside forest and other areas identified as important for biodiversity and carbon, using two main frameworks: the High Carbon Stock and High Conservation Value approaches. However, there is little evidence that these approaches are effective at reducing impacts on biodiversity. Better management of these set-asides is needed to ensure sustainability, and to reduce impacts on biodiversity.
| yes
Agribusiness | Are palm oils bad for the environment? | yes_statement | "palm" "oils" have a negative impact on the "environment".. the "environment" is harmed by the use of "palm" "oils". | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6357563/ | The palm oil industry and noncommunicable diseases - PMC
This is an open access article distributed under the terms of the Creative Commons Attribution IGO License (http://creativecommons.org/licenses/by/3.0/igo/legalcode), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In any reproduction of this article there should not be any suggestion that WHO or this article endorse any specific organization or products. The use of the WHO logo is not permitted. This notice should be preserved along with the article's original URL.
Abstract
Large-scale industries do not operate in isolation, but have tangible impacts on human and planetary health. An often overlooked actor in the fight against noncommunicable diseases is the palm oil industry. The dominance of palm oil in the food processing industry makes it the world’s most widely produced vegetable oil. We applied the commercial determinants of health framework to analyse the palm oil industry. We highlight the industry’s mutually profitable relationship with the processed food industry and its impact on human and planetary health, including detrimental cultivation practices that are linked to respiratory illnesses, deforestation, loss of biodiversity and pollution. This analysis illustrates many parallels to the contested nature of practices adopted by the alcohol and tobacco industries. The article concludes with suggested actions for researchers, policy-makers and the global health community to address and mitigate the negative impacts of the palm oil industry on human and planetary health.
Introduction
Public health discourse increasingly focuses on the role of alcohol, tobacco and sugar in the growing burden of noncommunicable diseases. Increasingly this dialogue highlights how, in the pursuit of increased profits, the industries involved in these products aim to shape public and political opinion as well as influence research outcomes to influence policies that endanger public health.1,2 The palm oil industry is missing from this dialogue.
Palm oil is one of the world’s most commonly used vegetable oils, present in around half of frequently used food and consumer products, from snacks to cosmetics.3,4 Worldwide production of the oil has increased from 15 million tonnes in 1995 to 66 million tonnes in 2017. The rapid expansion in use is attributed to yields nearly four times those of other vegetable oil crops, with similar production costs; favourable characteristics for the food industry (its relatively high smoke point and its semisolid state at room temperature); and strategies aimed at ensuring government policies are supportive of the expansion of palm oil cultivation, production and use.5 While these factors associated with palm oil offer clear advantages for the processed food industry, the oil contains a much higher percentage of saturated fats compared to other vegetable oils.6 Although its negative health impacts are contested,7 a meta-analysis of increased palm oil consumption in 23 countries found a significant relationship with higher mortality from ischaemic heart disease.8 Another systematic review found that palm oil consumption increased blood levels of atherogenic low-density lipoprotein cholesterol.6 As early as 2003, the World Health Organization (WHO) and the Food and Agriculture Organization (FAO) described the evidence linking saturated fat consumption with increased risk of cardiovascular disease as convincing.9
The indirect health impacts of oil-palm cultivation are less contested; clearing land for plantations by slash-and-burn practices has led to recurring episodes of harmful haze in South-East Asia.10 The most recent occurrence, in 2015, led to an estimated 100 000 premature deaths in the region from pollutants and documented increases in respiratory, eye and skin diseases.11 The impact of the industry on planetary health, that is, “the health of human civilisation and the state of the natural systems on which it depends”,12 through the cultivation practices of oil-palm trees has also been well-documented. This entails large-scale deforestation, including loss of up to 50% of trees in some tropical forest areas; endangerment of at-risk species; increased greenhouse gas emissions (due to deforestation and drainage of peat bogs); water and soil pollution; and the rise of certain invasive species.13,14
Estimates suggest that more than two-thirds of the palm oil produced goes into food products, making the processed food industry’s relationship with the palm oil industry critical.15 With the United States Food and Drug Administration’s 2015 ban on trans-fatty acids (TFA) because of their adverse health impacts,16 and a similar recommendation by WHO in 2018,17 an increase in the use of palm oil as a replacement for TFA in ultra-processed foods can be anticipated. This paper aims to describe the relationship between the palm oil and processed food industries and how these interconnect with public and planetary health. Box 1 lists the key terminology used in the palm oil industry.
Box 1
Key terminology in the palm oil industry
Slash and burn: method of farming where forests are cut and any residue is burnt.
Smoke point: temperature at which oil produces a continuous, clearly visible smoke. Important indicator of the stability of oil, a higher smoke point allows more versatility in cooking.
Trans fatty acids: type of unsaturated fat associated with raising low-density lipoprotein cholesterol that is known to increase the risk for heart disease and stroke.
Ultra-processed foods: processed substances extracted or refined from whole foods (such as fruits, crops or grains), e.g. oils, hydrogenated oils and fats, flours and starches, variants of sugar, and cheap parts or remnants of animal foods, usually with little nutritional value compared to the original whole food.17
Approach
The commercial determinants of health are defined as “strategies and approaches used by the private sector to promote products and choices that are detrimental to health.”19 We adapted a 2016 framework on the commercial determinants of health (Fig. 1) and applied it to the palm oil industry to review the three domains: (i) drivers (internationalization of trade and capital, expanding outreach of corporations and demands of economic growth); (ii) channels (marketing, supply chains, lobbying and corporate citizenship); and (iii) outcomes (on the environment, consumers and health). The environment component was adapted from the initial framework to expand the scope beyond the social environment.
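For readers who want to organize material along these three domains, a minimal sketch of the adapted framework as a simple data structure is shown below. This is illustrative only: the component lists are taken from the text above, and the tag_finding helper is hypothetical rather than part of the original analysis.

```python
# Illustrative only: the adapted commercial determinants of health framework
# (drivers, channels, outcomes) expressed as a simple data structure.
FRAMEWORK = {
    "drivers": [
        "internationalization of trade and capital",
        "expanding outreach of corporations",
        "demands of economic growth",
    ],
    "channels": ["marketing", "supply chains", "lobbying", "corporate citizenship"],
    "outcomes": ["environment", "consumers", "health"],  # environment broadened beyond the social environment
}

def tag_finding(domain: str, component: str, note: str) -> dict:
    """Attach a finding to a framework component, checking the labels first (hypothetical helper)."""
    if component not in FRAMEWORK.get(domain, []):
        raise ValueError(f"{component!r} is not a component of {domain!r}")
    return {"domain": domain, "component": component, "note": note}

# Example: classify a lobbying-related observation.
print(tag_finding("channels", "lobbying", "industry-backed policy institutes"))
```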
Drivers
Internationalization of trade and capital
Oil-palm plantations cover over 27 million hectares worldwide, an area approximately the size of New Zealand. The industry is estimated to be worth 60 billion United States dollars (US$) and employs 6 million people,7 with an additional 11 million people indirectly dependent on it, particularly in rural areas where jobs can be scarce. In 2014, Indonesia and Malaysia accounted for over 53.3 million tonnes (85%) of the 62.4 million tonnes of global palm oil production, and both countries have rapidly expanded their farming and exports. Indonesia, for example, increased production from 19.2 million tonnes in 2008 to 32.0 million tonnes in 2016. The largest importers of palm oil are India, China, the European Union countries, Malaysia and Pakistan.20
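As a quick arithmetic check on the figures above (illustrative only):

```python
# Quick arithmetic check of the production figures cited above.
global_output_mt = 62.4          # global palm oil production, 2014 (million tonnes)
indonesia_malaysia_mt = 53.3     # combined Indonesian and Malaysian output
share = indonesia_malaysia_mt / global_output_mt
print(f"Indonesia + Malaysia share: {share:.1%}")                         # ~85.4%

indonesia_growth = (32.0 - 19.2) / 19.2                                   # 2008 to 2016
print(f"Indonesian production growth 2008-2016: {indonesia_growth:.0%}")  # ~67%
```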
The palm oil and processed food industries have mutually benefitted from increased sales and consumption of products through rapid internationalization and trade. This trend is likely to continue as low- and middle-income countries increasingly move from eating fresh, minimally processed foods to ultra-processed products.21 Sales by manufacturers of ultra-processed foods containing palm oil have been expanding.22
Expanding outreach of corporations
Although many companies use palm oil, processing and refining are concentrated in a limited number of corporations. Companies source their supply from their own concessions and from a large number of third-party suppliers and smallholders, both independent and tied through partnership agreements.23 Increasingly, large corporations are expanding their palm-oil refining capacity, further concentrating the industry.24 Indonesia and Malaysia have used government policies, including subsidies and land incentives, to assist industry expansion and facilitate greater investment.23
More than half of the plantations in Indonesia are industrial estates of > 6000 hectares owned by private companies; around 40% are smallholdings of < 25 hectares and 7% are state-owned.13 When attempts are made to regulate oil-palm cultivation, industry leaders have highlighted the threat to smallholders’ livelihoods, making palm oil production a controversial political issue.25
Demands of growth
The palm oil industry is projected to reach a production value of US$ 88 billion by 2022.20 The increasing availability of palm oil, alongside increasing numbers of countries banning TFA in processed foods,26,27 means that palm oil will likely remain the food industry’s preferred vegetable oil in ultra-processed foods. With China and India continuing to import palm oil for consumption, the growth in its use is anticipated to continue.
Channels
Marketing
Marketing of palm oil does not occur in the traditional sense. In response to a backlash over accusations of poor environmental and labour practices, the industry has sought to portray its products as sustainable, while highlighting its contribution to poverty alleviation. For example, in advance of the European Union’s 2020 ban on palm oil as a biofuel, the industry launched advertisements featuring smallholder farmers whose livelihoods would be lost.25 The palm oil and processed food industries also benefit mutually: the latter targets advertisements for ultra-processed foods towards children (similar to efforts by the tobacco and alcohol industries targeting children and adolescents),28,29 while the palm oil refining industry benefits from the corresponding increase in sales of foods containing palm oil.30–33
Supply chain
The global palm oil supply chain involves many businesses, systems and structures, making it difficult to draw a clear line between the different components and to identify the impact of each actor.23 For example, a recent brief by the nongovernmental organization (NGO) Ceres unpacks the key elements of the supply chain and the American industries and companies linked to them (Fig. 2).34 Unilever PLC, which claims to be the largest user of physically certified palm oil in the consumer goods industry,35 recently published details of its entire palm oil supply chain; this included 300 direct suppliers and 1400 mills used in its food, personal care and biofuel products.26,27 The scale of the supply chain is massive and, even by the company’s own admission, social and environmental issues persist.26 The supply chain demonstrates a strong association between the palm oil and processed food industries. Global food processing corporations are venturing further into palm oil refining, blurring the lines across the supply chain and making it difficult to hold individual actors accountable for any adverse outcomes.
Lobbying
Apart from establishing a strong lobbying presence in the European Union,1 the palm oil industry has fostered partnerships with policy and research institutes providing policy recommendations against regulation.36 For example, the industry-backed World Growth Institute criticised the World Bank's framework for palm-oil engagement – which seeks prioritisation of smallholders over large corporations and cultivation of plantations on degraded land instead of forested land – as 'anti-poor'.37 The palm oil industry has also sought to influence global health policy-making. For example, during the drafting of the 2003 WHO/FAO report on Diet, Nutrition and Prevention of Chronic Diseases, the Malaysian Palm Oil Promotion Council questioned the palm oil-related health concerns raised by the report and suggested that any efforts to curb consumption would threaten several million peoples’ livelihoods.33 These tactics, establishing lobbying structures in political and economic hubs, fighting regulations, attempting to undermine reliable sources of information and using poverty alleviation arguments, are similar to those pursued by the tobacco and alcohol industries.38,39
Corporate citizenship
Several major companies and countries have joined to create industry associations to showcase their sustainability efforts. These are membership organizations composed of oil-palm growers, palm oil producers, consumer goods manufacturers, retailers, investors and NGOs which certify sustainability and fair labour standards; they include the Roundtable on Sustainable Palm Oil and country-specific groups in Indonesia and Malaysia. In 2017, the Roundtable certified approximately 13.4 million tonnes (approximately 20%) of global production as sustainable. The Roundtable also has partnerships with the United Nations Economic and Social Council, United Nations Environment and the United Nations Children’s Fund, aimed at improving its members’ business practices. Twelve of the 16 Roundtable board members are representatives of palm oil processors, manufacturers, retailers, banks, investors or international food processing companies. The sustainability certification effort has been linked to only limited reductions in deforestation, with a recent study finding little impact on forest loss and fire detection.40 Other studies have found that the Roundtable’s board members were still associated with companies involved in mass deforestation.41 Investigations by NGOs have found child labour and human rights violations at Roundtable members’ plantations.42
Despite some positive initiatives by the palm oil and processed food industries to cultivate, produce and source palm oil through sustainable, ethical practices, challenges remain. Agencies entering partnerships with industry-led initiatives are at risk of becoming complicit in detrimental practices. Indeed, NGOs such as Palm Oil Investigations withdrew support for the Roundtable after evidence of harmful business practices emerged.43
Outcomes
Given the importance of assessing the outcomes of the palm oil industry, we conducted a rapid review of the literature to better understand the impact on the environment, consumers and health. We made a keyword search initially via the PubMed® online database to identify peer-reviewed articles and subsequently via the Google search engine to identify other sources of information (Box 2). The review was conducted in June and July 2018 and updated in October 2018. Of 435 articles identified and scanned, we included 40 peer-reviewed articles and eight articles from the grey literature (Fig. 3; Table 1).
Box 2
Search strategy for the rapid review of the literature on the impact of palm oil on the environment, consumers and health
We made an online search of the PubMed® database using the keyword “palm oil” in combination with relevant terms (AND (“environment” OR “pollution” OR “climate change” OR “consumer” OR “health” OR “disease”)). The review was conducted in June and July 2018 and updated in October 2018. The criteria for inclusion were articles published after 2000, in English, of relevance to human health (through studies on humans or animal studies that drew conclusions on potential implications for human health), consumers or the environment. Articles were excluded if they were linked to animal husbandry practices, speculative in nature (e.g. profitability analyses), primarily aimed at industrial processes (e.g. monetizing palm oil mill effluenta) or drew conclusions of limited relevance to the topic (e.g. zoo-based conservation education).
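As an illustration only, a search of this kind could also be run programmatically. The sketch below is not the authors’ actual procedure; it assumes the Biopython Entrez client and uses a placeholder e-mail address, with the date and language limits expressed as standard PubMed filters.

```python
# Illustrative reconstruction of the Box 2 PubMed search (not the authors' actual code).
# Assumes the Biopython Entrez client; the e-mail address is a placeholder.
from Bio import Entrez

Entrez.email = "reviewer@example.org"

term = (
    '"palm oil" AND (environment OR pollution OR "climate change" '
    'OR consumer OR health OR disease) '
    'AND english[lang] AND ("2000"[PDAT] : "2018"[PDAT])'
)

handle = Entrez.esearch(db="pubmed", term=term, retmax=500)
record = Entrez.read(handle)
handle.close()

print(f"Records found: {record['Count']}")
print("First PubMed IDs:", record["IdList"][:10])
```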
While five articles initially appeared to be of relevance to palm oil and consumers, on further review they were excluded. We therefore complemented the “consumer” keyword search with a review of the non-peer-reviewed literature, identified through the Google search engine using the same keywords. We limited the search to sources from governments, international agencies, NGOs and trusted media sources. Some of the results for “consumer” also yielded additional references relevant to environment and health, due to the intersection between human and planetary health, consumer practices and palm oil cultivation. Much of the grey literature related to consumers and the environment focused on advocacy campaigns and calls for palm oil boycotts by NGOs and was therefore excluded as being beyond this paper’s scope.
NGO: nongovernmental organization.
a Highly polluting wastewater by-product of the palm oil production process.
b Denotes grey literature – all other articles are from peer-reviewed sources.
Environment
Forest, peatland and biodiversity losses, increased greenhouse gas emissions, habitat fragmentation and pollution are environmental concerns continually linked to the palm oil industry.5,10,12,46,52,53,63,69,75,77 In response, countries including Indonesia and Malaysia are increasing industry regulation, seeking to prevent slash-and-burn practices and restoring peatlands.11 Although the results are limited, companies are attempting to adopt more sustainable palm oil cultivation and production practices.13 Nevertheless, certified-sustainable plantations encompass only a fifth of all oil-palm cultivation, certification has not yet yielded the desired benefits and consumer demand for sustainable palm oil remains limited.65
Consumers
In recent years, NGOs have campaigned to increase consumer awareness of palm oil production practices, although success appears limited.65,80 From the processed food industry and health perspective, much work remains to be done. Palm oil derivatives in food, household and cosmetic products can be listed under any one or more of some 200 alternative names.79 Some countries, such as Australia and New Zealand, only require peanut, sesame and soy oils to be explicitly labelled, while palm oil can fall under the generic category of vegetable oil.79 The World Wildlife Fund lists more than 25 common alternative labels for palm oil found in food products (Box 1).18 With its inclusion in many everyday products, unclear food labelling and sometimes conflicting information on health impacts, it can be difficult to identify palm oil in foods. Consumers may be unaware of what they are eating or of its safety.
Health
Reports of the health impacts of palm oil consumption in foods are mixed.44,49,51,55,59,61,66,74,76 Some studies link consumption of palm oil to increased ischaemic heart disease mortality, raised low-density lipoprotein cholesterol, increased risk of cardiovascular disease and other adverse effects.6,8,9 Other studies show no negative effects7 or even favourable health outcomes from palm oil consumption.7,45,47,48,50,57,60,67,78 Four of the nine studies in our literature search showing overwhelmingly positive health associations were authored by the Malaysian Palm Oil Board, again drawing parallels with the tobacco and alcohol industries38,39 and calling into question the credibility of claims in favour of increased palm oil consumption. The contested nature of the evidence suggests the need for independent, comprehensive studies of the health impact of palm oil consumption. Countries such as Fiji, India and Thailand have initiated policy dialogues and analyses aimed at better understanding the role of palm oil in diets and best approaches to reducing saturated fats in the food-chain, but these discussions are far from conclusive.54,58,70,72,73
More unequivocally, land-clearing practices for oil palm cultivation have major public health consequences. Since the 1990s, air pollution from slash-and-burn practices has affected the health of populations in South-East Asia, especially the most vulnerable groups of the population, such as infants and children.11,56 Haze episodes, even across country borders, have been linked to premature deaths and increased respiratory illness as well as cardiovascular diseases.62,71 Of major concern is the effect of exposure to particulate matter on fetal, infant and child mortality, as well as children’s cognitive, educational and economic attainment.81,82 The direct and indirect impact of the palm oil cultivation industry on children, including child labour practices, is especially concerning. In Indonesia, around half of the 4 million people employed in the industry are estimated to be women. Even when they are not directly employed, children dependent on palm oil workers are adversely affected by inadequate maternity protection, low breastfeeding rates, lack of child-care opportunities, poor maternal health and nutrition, and difficulty in accessing education.64
Discussion
This paper illustrates how the palm oil industry, in close connection with the processed food industry, impacts human and planetary health. The impact also cuts across other sectors, such as education and child protection, and has implications for gender-related policies and practices. A limitation of our rapid review is that not all the information from these industries is publicly available and, with limited peer-reviewed materials available on the palm oil industry, we included media reports, environmental activist web sites and other grey literature. This article is not meant to be exhaustive and therefore does not remove the need for an extensive systematic review of the human and planetary health outcomes of the palm oil industry, spanning other sectors such as labour, gender and use as biofuel.
The palm oil industry is an overlooked actor in discussions on noncommunicable diseases. The current widespread use of palm oil draws attention to the ultra-processed unhealthy food system and the need to deepen and expand existing research on the industry. However, we need to carefully consider practical policy options and their implications. For example, encouraging use of oils with lower saturated fat content in ultra-processed foods could have a greater detrimental impact on the environment than palm oil, through further deforestation and loss of biodiversity (given the need for more natural resources to cultivate such crops). Policy-makers may therefore need to consider ways to reduce the demand for oils more specifically and for unhealthy ultra-processed foods more broadly. Such actions would benefit not only the noncommunicable disease agenda, but also human and planetary health as part of the sustainable development goals (SDGs).
Suggestions for action
Addressing the palm oil industry’s impact goes beyond a single industry, product or sector. Taking a multifaceted approach, we suggest three sets of actions for researchers, policy-makers and the global health community (NGOs and international organizations; Box 3).
Box 3
Suggested actions to address the palm oil industry’s impact
Address impact on health
Researchers
Investigate the health impact of ultra-processed foods, including specific ingredients such as palm oil;
study the long-term consequences of daily consumption of unhealthy, ultra-processed foods and their ingredients, including the effects on children; and
research the effect of combinations of ingredients in ultra-processed foods.
Work across SDGs
Policy-makers
Design policies that do not sacrifice longer-term health, environmental and social concerns for immediate economic gains and profits.
Mitigate industry influences
Global health community
Identify allies across sectors such as environment, child protection, labour and gender that can join in evidence generation and advocacy around the detrimental impacts of palm oil on human and planetary health; and
Understand impact on health
We need to better understand and address the content, health impact and supply chains of palm oil products. The evidence on health remains mixed. Furthermore, the so-called cocktail effect remains unknown; individual ingredients of ultra-processed foods may be harmless alone but, consumed in combination and daily, could be damaging.83 This also includes understanding the associated supply chains and the accountability measures needed to address potentially detrimental actions by the palm oil and related industries.
Mitigate industry influences
We need to mitigate the influence of the palm oil and related industries on public health policies and programmes. The relationship between the palm oil and processed food industries, and the tactics they employ, resembles practices adopted by the tobacco and alcohol industries. However, the palm oil industry receives comparatively little scrutiny. Palm oil use will likely continue, given its relatively low production costs, the high profit margins of ultra-processed foods, its abundance in processed foods and its prevalence across several industries (with no viable alternative at present). As recent examples show, the public health community, whether multilateral agencies84 or research institutes85, is not immune to industry influence. Political ties to industries merit further exploration.86
Work across the SDGs
Palm oil use in ultra-processed foods follows a long, complex chain. Even as the direct health impact remains unclear, cultivation and production and related practices contribute to environmental pollution, respiratory illnesses and loss of biodiversity. Furthermore, with documented forced and child labour and human rights abuses, as well as gender-related issues, such as inadequate maternity protections in palm oil plantations, understanding and addressing the influence of the palm oil industry cuts across different sectors and different SDGs. Therefore, narrow, health-specific measures cannot be implemented in isolation.
Conclusions
As the most prevalent vegetable oil in food manufacturing, palm oil is an integral component of the food supply chain. While the direct health effects of palm oil remain contested, the indirect health impacts of cultivating this product are many. Commercial determinants play a vital role in a complex system that leads to the production and consumption of foods detrimental to human health. The discourse on noncommunicable diseases and human health can no longer be separated from the dialogue on planetary health. | Palm oil is one of the world’s most commonly used vegetable oils, present in around half of frequently used food and consumer products, from snacks to cosmetics.3,4 Worldwide production of the oil has increased from 15 million tonnes in 1995 to 66 million tonnes in 2017. The rapid expansion in use is attributed to yields nearly four times other vegetable oil crops, with similar production costs; favourable characteristics for the food industry (its relatively high smoke point and being semisolid state at room temperature); and strategies aimed at ensuring government policies are supportive to the expansion of palm oil cultivation, production and use.5 While these factors associated with palm oil offer clear advantages for the processed food industry, the oil contains a much higher percentage of saturated fats compared to other vegetable oils.6 Although its negative health impacts are contested,7 a meta-analysis of increased palm oil consumption in 23 countries found a significant relationship with higher mortality from ischaemic heart disease.8 Another systematic review found that palm oil consumption increased blood levels of atherogenic low-density lipoprotein cholesterol.6 As early as 2003, the World Health Organization (WHO) and the Food and Agriculture Organization (FAO) described the evidence linking saturated fat consumption with increased risk of cardiovascular disease as convincing.9
The indirect health impacts of oil-palm cultivation are less contested; clearing land for plantations by slash-and-burn practices has led to recurring episodes of harmful haze in South-East Asia.10 The most recent occurrence, in 2015, led to an estimated 100 000 premature deaths in the region from pollutants and documented increases in respiratory, eye and skin diseases.11 The impact of the industry on planetary health, that is, “the health of human civilisation and the state of the natural systems on which it depends”,12 through the cultivation practices of oil-palm trees has also been well-documented. | yes |
Agribusiness | Are palm oils bad for the environment? | yes_statement | "palm" "oils" have a negative impact on the "environment".. the "environment" is harmed by the use of "palm" "oils". | https://kids.frontiersin.org/articles/10.3389/frym.2020.00086 | Can Palm Oil Be Produced Without Affecting Biodiversity? · Frontiers ... | Can Palm Oil Be Produced Without Affecting Biodiversity?
Authors and reviewers
Authors
For the past 3 years, I have been studying the chemical composition of palm oil and how it relates to sustainability and where it was grown. I have developed ways of determining which country palm oil comes from, based on the compounds responsible for its smell. I love everything about nature and love going for long strolls with my dog to see which birds I can spot in their natural habitats. *[email protected]
Prof. Murphy is a Fellow of the Royal Society of Biology and Professor of Biotechnology at the University of South Wales, UK. His research is mainly focused on improvement of oil crops using genomic and biotechnological methods. His group is also working on improved methods to verify the origin and purity of palm oils as part of the requirement for ensuring that such oils are obtained from certified sustainable sources. He is Biotechnology Advisor to the United Nations Food and Agriculture Organization and also advises the government-run Malaysian Palm Oil Board.
Young Reviewers
Centro Educacional Sesi 242
Laurel
I love reading Harry Potter books. My favorite characters are Ginny and Hermione. I also like animals. My favorite subjects in school are art, music, science, and math.
Abstract
Have you ever used or eaten palm oil? You might not think so, but odds are, you have. Palm oil is an ingredient in around half of supermarket products. It is used in ice cream, to help it melt nicely on your tongue, in the soap that you use to clean your dishes, and in most cakes, biscuits, and chocolate. Sounds useful, so what is the problem? Palm oil production is believed to account for up to half of the deforestation in tropical rainforests, leading to loss of biodiversity and many other negative impacts. Furthermore, palm oil production requires fertilizers and pesticides, which can run off into waterways and affect downstream biodiversity. However, most scientists agree that if we boycott palm oil and use other vegetable oils, the environmental impacts may be even worse. This article will discuss the pros and cons of palm oil production and how scientists, industries, and environmental organizations are trying to make palm oil more environmentally friendly.
What Is Palm Oil and Where Does It Come From?
Palm oil is a vegetable oil extracted from the fruits of the oil palm plant, Elaeis guineensis (Figure 1). Oil palm is grown in the tropics, including Indonesia, Malaysia, and Thailand, and accounts for almost 40% of all vegetable oils produced around the world. Oil palm fruits are structured like plums. There is a hard, central nut (also known as a kernel) surrounded by a soft, fleshy layer called the mesocarp. The fruits are heated and crushed to obtain two types of useful oil: palm oil from the mesocarp and palm kernel oil from the kernel.
Figure 1 - An oil palm growing in a plantation.
Near the base of the trunk, you can see a fruit bunch that contains thousands of individual palm fruits, from which palm oil is extracted.
The chemical make-up of the two palm oils differs from other vegetable oils like olive oil and sunflower oil. In temperate countries, palm oils are solid at room temperature (Figure 2), making palm oil an ideal ingredient in pastries, cakes, biscuits, and ice cream. Palm kernel oil is mainly used as the active ingredient in cleaning products like soaps and detergents, as well as in cosmetics.
Figure 2 - (A) Palm fruit structure.
The mesocarp is the part that palm oil is extracted from and the endosperm is the part that palm kernel oil is extracted from. (B) Palm oil at room temperature in the UK. It is orange due to its high vitamin content. (C) Palm kernel oil at room temperature in the UK. It has a lower vitamin content than palm oil, hence its yellow color.
What Is So Bad About Palm Oil?
Unfortunately, like many other big crops, growing a lot of oil palms causes some problems! Historically, oil palm was often grown in areas with lots of different species. Malaysian rainforests have more than 2,000 species of trees, Asian and pygmy elephants, and Malayan tapirs. Indonesian rainforests contain endangered animals like Sumatran tigers and rhinos. Large areas of rainforests have been converted into oil palm plantations. Planting of oil palms accounts for 0.5% of deforestation globally. In areas where oil palm is grown, these crops can be responsible for up to 50% of the deforestation [1].
Laws have been set to limit the amount of forested areas that can be removed. For example, Malaysia has laws, such as the “Protection of Wildlife Act 1972” and the “Land Conservation Act 1960” to protect species and reduce impacts on the environment. Also, growers who are members of an organization called the Roundtable on Sustainable Palm Oil (RSPO) or Malaysian Sustainable Palm Oil (MSPO) are not allowed to clear forests or areas that contain high amounts of biodiversity or fragile ecosystems.
In some cases, illegal logging (chopping down trees) and a method of clearing land called slash-and-burn still take place. Slash-and-burn is the process where forests are logged and then set on fire. Clearing land with fire costs around US $5 per hectare (1 hectare = 2 football pitches). Clearing land legally, using machines and chemicals, costs around US $200 per hectare. As you can imagine, slash-and-burn clearing is really bad for both the environment and the local people, because it results in huge emissions of toxic ash and smoke. When large forest areas are removed by slash-and-burn, something called haze happens. Haze is when the smoke fills the sky and blocks out sunlight. Haze can last for weeks and affect human health.
The most well-known issue of the oil palm industry is its effects on orangutan populations. Orangutans often lose their homes during land clearance and are sometimes killed by farmers who see them as pests. Some reports say that 25 orangutans are killed every day due to palm oil production. But, in reality, over half of all orangutan deaths are caused by local people who hunt orangutans for food [2].
Can We Just Use Different Oils Instead of Palm?
So, if growing oil palms causes such big problems, why can’t we just use other vegetable oils instead? The main issue here is that oil palm is a very efficient crop compared to alternatives like sunflower and olive. Nearly 10 times more land would be needed if oil palm were replaced with olive, soy, rapeseed (also known as canola), or corn crops [3]. Also, production of other vegetable oils would still impact biodiversity because those crops would need land to be cleared, too.
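To see roughly why so much more land would be needed, here is a small illustrative calculation. The per-hectare oil yields are approximate, commonly cited values and are assumptions for this sketch, not figures from the article.

```python
# Illustrative land comparison. The oil yields below are rough, commonly cited
# estimates (tonnes of oil per hectare per year) and are assumptions, not
# values taken from this article.
approx_yield_t_per_ha = {
    "oil palm": 3.8,
    "rapeseed (canola)": 0.7,
    "sunflower": 0.6,
    "soybean": 0.4,
}

target_oil_t = 1_000_000  # land needed to produce one million tonnes of oil
for crop, y in approx_yield_t_per_ha.items():
    print(f"{crop:18s} ~{target_oil_t / y / 1e6:.1f} million hectares")
```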
Other oil crops are also more expensive to grow than palm. Olive oil production costs up to six times more than palm oil. If we were to try to put olive oil in food products like pastries or biscuits, the prices of those foods would also increase! The chemical make-up of olive oil is also very different from palm oil. Olive oil is full of chemicals that give it a strong taste, smell, and color. Companies who use palm oil choose it because it can be processed to remove its color, taste, and smell. This means palm oil cannot be tasted in the final food product, but it still provides the fat needed as an ingredient in many foods. Olive oil also goes bad a lot faster than palm oil. This can make the food it is added to taste terrible!
Oil palm production does not need much energy input. Oil palm crops need less fertilizer and pesticide than other vegetable oil crops. This means fewer dangerous chemicals leak out into the environment. Oil palm crops also have a very long lifespan. A farmer can profit from selling the fruits for over 25 years. Oil palm fruits can be harvested regularly year-round. Therefore, these plantations employ people permanently and not just seasonally, as is the case with other vegetable oil crops. This improves the lives of palm oil farm employees.
Are There Ways to Make Palm Oil More Environmentally Friendly?
Up to 85% of natural forest species are lost when rainforests are converted to oil palm plantations [4]. Scientists are trying to find new ways to improve palm oil production so that it has less impact on the environment. Growers are also taking action to improve their practices. For example, they are following strict rules to reduce the use of fertilizers and pesticides and selecting better seeds to produce more oil from the same area of land.
Wildlife corridors can also help to make oil palm farming more environmentally friendly. These corridors are areas of conserved land or forests within or between plantations. Wildlife corridors help to support biodiversity and allow animals to move around. For example, Borneo elephants normally travel through oil palm plantations and eat the trunks of old oil palms—up to 150 kg per day! This causes the growers to dislike the elephants. So, the growers teamed up with WWF-Malaysia to make a wildlife corridor of 1,067 hectares, linking two forests. They hope this will provide a route for the elephants to travel through the plantations, with different types of food so the oil palms will be saved [5]. Wildlife corridors are not just important for large mammals. Corridors along rivers within oil palm plantations are very important for invertebrates, such as moths and dung beetles [6]. Often, the animals will spend most of their time in the corridors rather than in the oil palm plantations. Wildlife corridors would not stop all the negative impacts that farming has on local animals, but they will help animals move around better and provide different food sources, reducing conflict.
Other scientists found that some areas of wild forest should be kept within plantation areas. These areas of forest should be connected to each other by habitat corridors to maintain biodiversity. The good news is that this recommendation is now being used by big international companies like Unilever and organizations like RSPO.
Growing oil palm on unused farm land could also help to reduce deforestation [7]. Scientists have shown that turning unused farm land into oil palm plantations reduces carbon emissions by 99.7%, compared to clearing rainforests for these farms. The amount of biodiversity on unused farm land is also much lower than that of rainforests, so biodiversity levels may actually increase if oil palms are planted on this unused land [8].
The RSPO is trying to improve the palm oil industry by helping oil palm plantations and biodiversity to co-exist. To become members of RSPO, palm oil production companies must follow seven rules:
Behave ethically and transparently
Operate legally
Optimize productivity, efficiency, and positive impacts
Respect community and human rights, and provide benefits to communities (such as playgrounds, childcare, and schools)
Support smallholder plantations (plantations usually owned by a single family)
Respect workers’ rights and provide a good work environment
Protect, conserve, and improve ecosystems and the environment.
If they stick to these rules, companies can use the RSPO logo (Figure 3), which proves that their palm oil has been produced sustainably. Other sustainable palm oil organizations operate similarly, including MSPO and Indonesian Sustainable Palm Oil (ISPO). They too have their own recognizable logos, which are used on products around the globe (Figure 3).
Figure 3 - In the center, you can see the RSPO-certified sustainable palm oil logo.
This is the logo you will see on products if the palm oil in them is sustainable. Other similar organizations, like ISPO and MSPO, also have their own logos, which you can see on either side of the RSPO logo.
So, Can Palm Oil Be Produced Without Affecting Biodiversity?
In short, the answer is no. Production of palm oil will continue to impact biodiversity in the tropics. Scientists are working very hard to find new ways of reducing the environmental impact of this crop. However, it is unlikely that palm oil (or any other agricultural product) can be produced without impacting biodiversity in some way. However, scientists agree that we have the power to improve palm oil production and make it more sustainable. This will hopefully minimize the impacts of palm oil production on biodiversity.
How Can You Help?
If you want to help make sure palm oil is being produced sustainably, there are several things you could consider. You can become a wildlife hero simply by eating locally produced and wholesome (non-processed) foods. Not only is this healthier for you, but it is also healthier for the environment, because local products do not need to travel around the world before reaching your plate or lunchbox!
Also, if you find out that your favorite chocolate bar does not contain sustainable palm oil, then write a letter or an email to the company and explain to them how disappointed you are and what they could do to help. You could download Act for Wildlife’s Sustainable Palm Oil shopping list (https://www.chesterzoo.org/what-you-can-do/our-campaigns/sustainable-palm-oil/sustainable-palm-oil-shopping-list/) to find replacement foods. Maybe you even feel passionate enough to start a conversation with your friends, to find strategies to boycott products using non-sustainable palm oil. For example, you can ask your local supermarket to replace products using non-sustainable palm oil with other more environmentally friendly options. You and your friends can even help stores by making a list of those products. Last, you could help by learning even more about which companies are using sustainable palm oil or which products contain sustainable palm oil. Here are some resources:
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
We will be provided with an authorization token (please note: passwords are not shared with us) and will sync your accounts for you. This means that you will not need to remember your user name and password in the future and you will be able to login with the account you choose to sync, with the click of a button. | Companies who use palm oil choose it because it can be processed to remove its color, taste, and smell. This means palm oil cannot be tasted in the final food product, but it still provides the fat needed as an ingredient in many foods. Olive oil also goes bad a lot faster than palm oil. This can make the food it is added to taste terrible!
Oil palm production does not need much energy input. Oil palm crops need less fertilizers and pesticides than other vegetable oil crops. This means fewer dangerous chemicals leak out into the environment. Oil palm crops also have a very long lifespan. A farmer can profit from selling the fruits for over 25 years. Oil palm fruits can be harvested regularly year-round. Therefore, these plantations employ people permanently and not just seasonally as they are with other vegetable oil crops. This improves the lives of palm oil farm employees.
Are There Ways to Make Palm Oil More Environmentally Friendly?
Up to 85% of natural forest species are lost when rainforests are converted to oil palm plantations [4]. Scientists are trying to find new ways to improve palm oil production so that it has less impact on the environment. Growers are also taking action to improve their practices. For example, they are following strict rules to reduce the use of fertilizers and pesticides and selecting better seeds to produce more oil from the same area of land.
Wildlife corridors can also help to make oil palm farming more environmentally friendly. These corridors are areas of conserved land or forests within or between plantations. Wildlife corridors help to support biodiversity and allow animals to move around. For example, Borneo elephants normally travel through oil palm plantations and eat the trunks of old oil palms—up to 150 kg per day! This causes the growers to dislike the elephants. So, the growers teamed up with WWF-Malaysia to make a wildlife corridor of 1,067 hectares, linking two forests. | yes |
Agribusiness | Are palm oils bad for the environment? | yes_statement | "palm" "oils" have a negative impact on the "environment".. the "environment" is harmed by the use of "palm" "oils". | https://www.healthline.com/nutrition/palm-oil-deforestation | Palm Oil's Environmental Impact: Can It Be Grown Sustainably? | Palm Oil’s Environmental Impact: Can It Be Grown Sustainably?
Palm oil is a type of vegetable oil made from the fruit of the Elaeis Guineensis tree, a palm tree native to parts of Africa.
There’s a good chance that you’ve eaten palm oil or used products made with it. It’s used for cooking and as an ingredient in foods like crackers, butter substitutes, and frozen foods, as well as products like soap, shampoo, makeup, and even biofuel (1).
However, the methods used to produce palm oil are highly unsustainable and wreak havoc on the environment of Southeast Asia.
Nevertheless, the palm oil industry claims that this crop plays a significant role in the food system and provides jobs in the countries where it’s grown.
As a dietitian concerned with the future of our global food system, I want to take an in-depth look at palm oil’s environmental impact, as it’s clear that our current use of palm oil isn’t sustainable long term.
This article reviews some pressing sustainability issues with palm oil and explores a few ways that you can advocate for better production practices.
Many of us don’t realize just how common palm oil is. In 2021, the world produced more than 167 billion pounds (75.7 million metric tons) of it (2).
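As a quick check of the converted figure above (assuming the standard pound-to-kilogram factor; illustrative arithmetic only):

```python
# Unit check on the production figure above (1 lb = 0.45359237 kg).
pounds = 167e9                       # ~167 billion pounds (2021)
tonnes = pounds * 0.45359237 / 1000  # kilograms -> metric tons
print(f"{tonnes / 1e6:.1f} million metric tons")  # ~75.7
```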
Palm is already the most used vegetable oil in the world, and demand for it is only expected to grow (3).
This oil rose in popularity during the Industrial Revolution of the 18th and 19th centuries and again over the past few decades as manufacturers began looking for versatile ingredients to replace trans fats in processed foods.
Palm oil not only acts as a preservative but also remains stable under high temperatures and has a mild flavor and smooth texture. Plus, growing and harvesting it is cost-effective.
As the food industry realized palm oil’s perks, its use increased greatly during the 1970s and 1980s. This oil is now used in as many as half of all consumer goods (4).
SUMMARY
Palm oil’s use has grown exponentially over the past few decades. It’s hidden in many more products and foods than we tend to realize due to its versatile uses and effectiveness as a high volume crop.
Just a few countries — mainly Indonesia and Malaysia — produce nearly 85% of the planet’s palm oil (2).
Parts of Southeastern Asia, Africa, and Latin America where palm oil is grown are most affected by its production. Even so, because its impacts on the environment are so significant, the final toll of palm oil production may be much further reaching (5).
Here are some of the most notable environmental concerns involving palm oil:
Deforestation. In some parts of Asia, palm oil is estimated to cause nearly half of all deforestation. Clear-cutting forests for agriculture releases greenhouse gases, leads to the destruction of habitats, and threatens biodiversity (5, 6, 7, 8).
Pollution. The large-scale production of an agricultural commodity like palm oil inevitably leads to runoff and pollution of nearby soil and waterways. Deforestation to make way for palm oil crops is also a major source of air pollution (4, 9, 10).
Loss of biodiversity. As a result of deforestation and habitat loss, many bird, elephant, orangutan, and tiger populations are becoming increasingly threatened or endangered in countries that produce palm oil (8, 11, 12, 13).
Unmitigated growth and production. Palm oil demand is projected to keep rising over the next 10 years. Production could grow by 100% or more in some areas, only worsening its environmental toll (5, 7).
Paradoxically, palm oil production is threatened by global warming, too. Not only do some palm varieties grow poorly in warmer temperatures, but flooding from rising sea levels also threatens palm-oil-producing countries like Indonesia (14).
SUMMARY
The palm oil industry is responsible for huge amounts of deforestation, greenhouse gas emissions, and pollution. As the industry continues to grow, these issues may only intensify.
Palm oil production is lightly regulated — and sometimes not regulated at all. This situation gives rise to tensions between corporate interests and consumers or environmental groups demanding changes to how palm oil is made.
Regulating palm oil may lead to higher prices for consumer goods, lower wages, and a loss of work for people who grow palm oil. Yet, excessive carbon emissions, such as those released by deforestation, are a threat to society as we know it (9, 15, 16, 17).
These are just a few issues to consider when it comes to regulating palm oil.
Researchers have proposed reducing the industry’s emissions by using only land that has already been deforested for palm plantations, protecting the most carbon-rich lands like peat forests, and better managing carbon-sensitive areas (18, 19, 20, 21).
A few key players
In the private sector, organizations like the European Palm Oil Alliance (EPOA) are making commitments against deforestation, land exploitation, and peat forest development. Grocery stores like Iceland Foods have reformulated store-brand items to remove palm oil (7).
In some instances, governments have stepped in.
The 2015 Amsterdam Declaration aimed to phase out all palm oil that isn’t certified sustainable by 2020. The partnership now includes nine countries, including France and the United Kingdom, and has expanded its commitment to eliminating agricultural deforestation (22).
Despite these efforts, enforcement is challenging due to corporate influence and a lack of resources.
For example, efforts like the Indonesian Palm Oil Pledge (IPOP) were less successful. Advertised as a commitment to stop deforestation and the development of peat forests, the IPOP was signed by Indonesia’s largest palm oil exporters in 2014 (23).
The initiative fell apart just a few years later due to a lack of organization and external pressure from the industry. Some activists criticized the effort as little more than a political advertising stunt that only increased red tape around sustainability efforts.
SUMMARY
Currently, no one regulatory body oversees global palm oil production. Some nations have committed to using only sustainable palm oil, while private groups are advocating for a halt to deforestation and the development of carbon-rich lands.
Still, simply replacing palm oil with other vegetable oils may not be a viable option (5).
That’s because other vegetable oil crops would likely use even more resources — and thus contribute more to climate change — than palm oil does, as palm crops grow efficiently and have a significantly higher output than other oil-producing plants.
What if it’s grown responsibly?
If palm oil were produced ethically and sustainably, it could offer numerous benefits. Aside from being an effective cooking oil, it works well as a soap and fuel. Plus, people have been cooking with palm oil in Africa for thousands of years (1, 24).
Palm oil also has nutritional benefits because it contains healthy fats, numerous antioxidants, and vitamins A and E. Unrefined palm oil, also called red palm oil, may contain the most amount of nutrients since it’s cold pressed rather than heated during processing (25, 26, 27, 28).
Nevertheless, research on palm oil’s nutrients is conflicting. It may be healthiest when used in place of other less healthy fats like trans fats (29, 30, 31, 32).
SUMMARY
Palm oil is rich in healthy fats, some vitamins, and antioxidants. Though it can be part of a healthy diet, some people choose to limit it or use only sustainably grown palm oil due to the industry’s environmental and human rights abuses.
The Indonesian Sustainable Palm Oil (ISPO) Certification. This effort by the Indonesian government certifies sustainable growers in the country.
Still, environmental advocates have questioned the credibility of such programs due to the influence of the palm oil industry (33).
3. Request transparency from the palm oil industry
Don’t be afraid to reach out directly to palm oil producers, distributors, and companies that use palm oil in their products. Ask key players in the industry about their practices and encourage them to move toward sustainable palm oil.
By signing online petitions, sending emails, or joining protests, you can encourage companies that rely on palm oil to adopt sustainability principles.
4. Keep up the pressure
Policies to promote sustainable palm oil
Government policy can be wielded to stop deforestation and promote sustainable palm oil. Specific policies that would lessen palm oil’s environmental impact include:
Stricter trade criteria. Countries could decide to only import palm oil and palm oil products grown in a sustainable manner.
Land use regulation. Governments could mandate that palm plantations only be developed on land that has already been deforested for several years.
Sustainability promises and certifications are a step in the right direction, but the palm oil industry needs a systematic overhaul to remain viable into the future.
Standing up to a major industry like the palm oil lobby might feel like a daunting task, but you won’t be alone. When ordinary citizens band together for a cause they’re passionate about, they can achieve extraordinary things.
Joining protests. You may be able to find a community group that helps raise awareness of palm oil’s effects. Other ways to advocate include avoiding palm oil or lobbying elected officials on its issues.
Spreading the word. Many people are still unaware of the harmful effects palm oil has on communities and the environment. You can be an advocate for change by helping educate others on palm oil.
SUMMARY
You can advocate for sustainable palm oil by limiting how much you use it, buying products that are certified sustainable, requesting transparency from the palm oil industry, and putting pressure on its main players to find sustainable alternatives.
Palm oil is abundant in the food system and common household products.
However, its environmental impact is profound. Although certain concrete steps, such as halting deforestation and growing palm only on previously forested lands, could reduce palm oil’s environmental impacts, the palm oil industry has so far resisted these changes.
Thus, if you’re worried about the impact palm oil is having on the world around you, you can take action by limiting your palm oil usage and purchasing products that are certified as sustainable.
Just one thing
Try this today: Scan the foods in your pantry, the soaps on your shelves, and the cosmetics in your bag to locate hidden sources of palm oil in your home. Don’t forget to look for ingredients like palmate, glyceryl, stearate, and sodium lauryl sulfate.
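As an illustrative aid for this pantry check, the short sketch below flags ingredient names that often signal palm-oil derivatives. The keyword list is a small, hypothetical sample rather than an authoritative reference.

```python
# Illustrative pantry-label helper; the hint list is a small, hypothetical sample.
PALM_HINTS = {
    "palm", "palmate", "palmitate", "glyceryl", "stearate",
    "sodium lauryl sulfate", "sodium laureth sulfate",
}

def flag_palm_ingredients(ingredients: list[str]) -> list[str]:
    """Return ingredients whose names contain a palm-related hint."""
    return [
        item for item in ingredients
        if any(hint in item.lower() for hint in PALM_HINTS)
    ]

label = ["Sugar", "Sodium Lauryl Sulfate", "Glyceryl Stearate", "Cocoa Butter"]
print(flag_palm_ingredients(label))  # ['Sodium Lauryl Sulfate', 'Glyceryl Stearate']
```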
Last medically reviewed on June 4, 2021
How we reviewed this article:
Our experts continually monitor the health and wellness space, and we update our articles when new information becomes available.
Current Version
Jun 4, 2021
Written By
Cecilia Snyder, MS, RD
Edited By
Gabriel Dunsmith
Medically Reviewed By
Kimberley Rose-Francis RDN, CDCES, CNSC, LD
Copy Edited By
Christina Guzik, BA, MBA
Evidence Based
This article is based on scientific evidence, written by experts and fact checked by experts.
Our team of licensed nutritionists and dietitians strive to be objective, unbiased, honest and to present both sides of the argument.
This article contains scientific references. The numbers in the parentheses (1, 2, 3) are clickable links to peer-reviewed scientific papers. | SUMMARY
Palm oil’s use has grown exponentially over the past few decades. It’s hidden in many more products and foods than we tend to realize due to its versatile uses and effectiveness as a high volume crop.
Just a few countries — mainly Indonesia and Malaysia — produce nearly 85% of the planet’s palm oil (2).
Parts of Southeastern Asia, Africa, and Latin America where palm oil is grown are most affected by its production. Even so, because its impacts on the environment are so significant, the final toll of palm oil production may be much further reaching (5).
Here are some of the most notable environmental concerns involving palm oil:
Deforestation. In some parts of Asia, palm oil is estimated to cause nearly half of all deforestation. Clear-cutting forests for agriculture releases greenhouse gases, leads to the destruction of habitats, and threatens biodiversity (5, 6, 7, 8).
Pollution. The large-scale production of an agricultural commodity like palm oil inevitably leads to runoff and pollution of nearby soil and waterways. Deforestation to make way for palm oil crops is also a major source of air pollution (4, 9, 10).
Loss of biodiversity. As a result of deforestation and habitat loss, many bird, elephant, orangutan, and tiger populations are becoming increasingly threatened or endangered in countries that produce palm oil (8, 11, 12, 13).
Unmitigated growth and production. Palm oil demand is projected to keep rising over the next 10 years. Production could grow by 100% or more in some areas, only worsening its environmental toll (5, 7).
Paradoxically, palm oil production is threatened by global warming, too. Not only do some palm varieties grow poorly in warmer temperatures, but flooding from rising sea levels also threatens palm-oil-producing countries like Indonesia (14).
SUMMARY
| yes |
Agribusiness | Are palm oils bad for the environment? | yes_statement | "palm" "oils" have a negative impact on the "environment".. the "environment" is harmed by the use of "palm" "oils". | https://cabiagbio.biomedcentral.com/articles/10.1186/s43170-021-00058-3 | Oil palm in the 2020s and beyond: challenges and solutions | CABI ... | Abstract
Background
Oil palm, Elaeis guineensis, is by far the most important global oil crop, supplying about 40% of all traded vegetable oil. Palm oils are key dietary components consumed daily by over three billion people, mostly in Asia, and also have a wide range of important non-food uses including in cleansing and sanitizing products.
Main body
Oil palm is a perennial crop with a > 25-year life cycle and an exceptionally low land footprint compared to annual oilseed crops. Oil palm crops globally produce an annual 81 million tonnes (Mt) of oil from about 19 million hectares (Mha). In contrast, the second and third largest vegetable oil crops, soybean and rapeseed, yield a combined 84 Mt oil but occupy over 163 Mha of increasingly scarce arable land. The oil palm crop system faces many challenges in the 2020s. These include increasing incidence of new and existing pests/diseases and a general lack of climatic resilience, especially relating to elevated temperatures and increasingly erratic rainfall patterns, plus downstream issues relating to supply chains and consumer sentiment. This review surveys the oil palm sector in the 2020s and beyond, its major challenges and options for future progress.
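The land-footprint contrast can be made explicit with a little arithmetic on the figures just quoted (illustrative only):

```python
# Oil yield per hectare implied by the figures quoted above.
palm_oil_mt, palm_area_mha = 81.0, 19.0
soy_rape_oil_mt, soy_rape_area_mha = 84.0, 163.0

palm_yield = palm_oil_mt / palm_area_mha                # ~4.3 t oil/ha
soy_rape_yield = soy_rape_oil_mt / soy_rape_area_mha    # ~0.5 t oil/ha
print(f"Oil palm: {palm_yield:.1f} t/ha; soybean + rapeseed: {soy_rape_yield:.2f} t/ha")
print(f"Roughly {palm_yield / soy_rape_yield:.0f}x more oil per hectare")
```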
Conclusions
Oil palm crop production faces many future challenges, including emerging threats from climate change and pests and diseases. The inevitability of climate change requires more effective international collaboration for its reduction. New breeding and management approaches are providing the promise of improvements, such as much higher yielding varieties, improved oil profiles, enhanced disease resistance, and greater climatic resilience.
Introduction
The palms, or Arecaceae, are a family of stem-less, tree-like monocot plants that are highly significant to humans and wider biodiversity, especially in the tropics (Cosiaux et al. 2018). The African oil palm, Elaeis guineensis, is native to West Africa and, in terms of agriculture, it is perhaps the world’s most important palm species. Oil palm fruits are available year-round and have served as semi-wild food resources in traditional societies for > 7000 years. In its regions of origin the oil palm plant has great significance to local people and to wider biodiversity (Cosiaux et al. 2018; Reddy et al. 2019; Okolo et al. 2019). Cultivation of oil palm as a crop was originally an informal process mainly confined to the West/Central African coastal belt between Guinea/Liberia and Northern Angola (Corley and Tinker 2015). Globally, the best production levels are achieved in high rainfall areas in equatorial regions between 7° N and 7° S. During the nineteenth century, oil palm seeds were transported to the Dutch East Indies (modern Indonesia), and to the Malay States (modern Malaysia), as part of colonial ventures to grow newly introduced cash crops in the region. During the twentieth century, more systematic oil palm cultivation on plantations gradually became established in the Malay States. In terms of large-scale commercial production, however, oil palm is a relatively recent crop that only emerged into global prominence later in the twentieth century, with an almost linear rise from 1990 to the early 2000s, followed by a plateau after 2007 (Malaysian Palm Oil Production by Year 2020). This was largely due to government initiatives in the 1970s and 80s aimed at improving the agriculture and economy of the newly independent nation of Malaysia (Corley and Tinker 2015; Murphy 2014). The later rise of the oil palm industry in Indonesia occurred during the twenty-first century, when there was a > 5-fold increase in oil production from 8.3 Mt in 2000 to 43.5 Mt in 2020.
Today, oil palm is crucial to the economies of many countries, especially Indonesia and Malaysia, from which large quantities of its products are exported in the form of oil, meal and other derivatives (Murphy 2019). More widely, oil palm is now cultivated in plantations across the humid tropics of Asia, Africa and the Americas, from where its products are exported to global markets. However, despite its increasing cultivation on three widely separated continents, the vast majority of oil palm is still grown in the two adjacent South East (SE) Asian countries of Indonesia and Malaysia (Table 1), which generate about 85% of the entire global production (Murphy 2014, 2015, 2019; Statista 2020; Goggin and Murphy 2018). The major importing regions, collectively responsible for about 60% of total palm oil imports, are the Indian subcontinent (India, Pakistan, Bangladesh) with about 17 Mt, the EU-27 with 6.5 Mt, and China with 5 Mt (Statista 2020).
There are two contrasting types of oil found in the two principal tissues of palm fruits, namely ‘palm oil’ and ‘palm kernel oil’ (Murphy 2019). Palm oil, extracted from the fleshy mesocarp tissue, is a deep orange-red, semi-solid fluid, whilst palm kernel oil is a white-yellow oil that is extracted mainly from the endosperm tissue of the kernel (seed). These two oils have very different fatty acid compositions (Table 2), which means they are used for different downstream applications in a range of industrial sectors (Goggin and Murphy 2018). In general, the relatively high saturated fat content of palm oil makes it particularly suitable for edible use as a solid vegetable fat (melting point ca. 35 °C). In contrast, palm kernel oil has a lower melting point (ca. 24 °C) and is mostly used for non-edible applications (Statista 2020). A major use of palm kernel oil is as the key functional ingredient in many soaps, detergents and cosmetics. E. guineensis plants bear prolific numbers of oil-rich fruit bunches year-round, each containing 1000–3000 individual fruits (Corley and Tinker 2015). Mesocarp-derived palm oil makes up about 89% of the total fruit oil, with the remaining 11% being derived from the seed or kernel. Because palm oil and palm kernel oil are extracted from fruits by different mechanical processes and have very different downstream uses, they enter separate supply chains immediately after extraction in mills.
In terms of annual production, the global oil palm industry is worth about US$ 60 billion, employing 6 million people directly plus an additional 11 million indirectly (Kadandale et al. 2019). Over 81.1 Mt of palm oils were produced globally in 2019–20, of which 72.3 Mt was mesocarp oil (hereafter referred to as ‘palm oil’) while 8.8 Mt was palm kernel oil (Statista 2020). It is estimated that palm oil or palm kernel oil are present as ingredients in at least half of the products found in a typical supermarket. At least three billion people rely directly on palm oil as a regular part of their diet, and it is a staple cooking oil commonly used in African and Asian food preparation. As global populations increase, the demand for palm oil is likely to continue to rise. Estimates from various industry sources predict that between 93 and 156 Mt palm oil might be required by 2050 (Frost and Sullivan 2017; Harris et al. 2013; Pirker et al. 2016). However, these estimates do not consider the effect of climate change on production, which is likely to reduce the ability of the sector to meet these demands (Paterson 2020a, b, 2021a, b), as discussed later.
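As a rough illustration of what these demand projections imply for land use, the sketch below converts the 2050 estimates quoted above into the plantation area that would be required at different average oil yields. The yield scenarios are illustrative assumptions and not figures taken from the cited studies.

```python
# Back-of-the-envelope sketch: plantation area implied by 2050 palm oil demand.
# The demand range (93-156 Mt) is quoted in the text above; the yield scenarios
# are illustrative assumptions, not values from the cited sources.

demand_2050_mt = (93, 156)  # projected demand range, million tonnes of oil

yield_scenarios_t_per_ha = {
    "current average (~4 t/ha)": 4.0,
    "improved varieties (assumed 6 t/ha)": 6.0,
}

for label, yld in yield_scenarios_t_per_ha.items():
    low_mha = demand_2050_mt[0] / yld   # Mt divided by (t/ha) gives million hectares
    high_mha = demand_2050_mt[1] / yld
    print(f"{label}: {low_mha:.0f}-{high_mha:.0f} Mha of plantation needed")
```

Even under the optimistic assumed yield, the projected demand implies a planted area comparable to or larger than today's ~19 Mha, which is why yield improvement and climate resilience dominate the discussion that follows.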
In addition to its edible applications, the oil palm crop provides a wide range of non-food products that also include animal feeds. These feeds are derived from the seeds or kernels, which contain a protein-rich meal residue following oil extraction. Palm kernel meal is an often overlooked product of the crop, but is a useful livestock feedstuff that is exported globally. In 2019, about 7.6 Mt palm kernel meal was exported, almost exclusively (98%) from Indonesia and Malaysia (Indexmundi 2021). In order of importance, the major importing countries and regions (75% of total 2019 imports) are the EU, New Zealand and Japan, where the meal is used in a variety of feed formulations, especially for ruminants such as cattle.
The image of oil palm has been adversely affected by detrimental environmental consequences of its cultivation, especially with respect to deforestation and haze creation (Paterson and Lima 2018). There is also great public concern about the plight of iconic species, and particularly the orangutan, in SE Asia. The industry therefore faces a set of interlocking challenges, ranging from pest and disease pressure and declining climatic suitability to supply chain scrutiny and shifting consumer sentiment in export markets. In all cases these issues will require attention by the industry during the rest of this decade and beyond.
Structure of the oil palm industry
Modern commercial oil palm cultivation began in Malaysia in 1917 (Basiron 2007) and over 88% of palm oil is still produced by Malaysia plus the neighbouring countries, Indonesia and Thailand (Statista 2020). From 2001 to 2016, the expansion of oil palm plantations was particularly marked in this region, with a 2.5-fold increase in Malaysia and a 4.2-fold increase in Indonesia (Xu et al. 2020). Over the past decade, oil palm crops have also been grown increasingly outside SE Asia (Murphy 2019), as suitable land in Asia becomes scarce and the changing climate becomes less conducive to cultivation (Paterson 2021a, b). For example, only an estimated 300,000 ha of land remains available for palm expansion in Malaysia (Villela et al. 2014), and there are increasing government prohibitions, for environmental reasons, on further encroachment onto forest and peatland in Indonesia (Jackson 2019). Continuing increases in global demand over the past five decades have meant that the cultivation of oil palm has been widely regarded by many tropical countries as a method to boost their economies (Arrieta et al. 2007; Ohimain and Izah 2014; Paterson et al. 2015).
In SE Asia, the primary regions for oil palm production in Indonesia are Sumatra and Kalimantan (Paterson et al. 2015; Suryantini and Wilandari 2018), while in Malaysia the peninsula was the historical centre, although considerable expansion has occurred more recently in Sabah and Sarawak. Due to their climatic suitability, oil palm cultivation has also spread to other SE Asian countries, especially Thailand and Papua New Guinea, with Myanmar and the Philippines in the initial stages of development; the crop is important to the economies of each of these countries (Corley and Tinker 2015; Suryantini and Wilandari 2018; Pornsuriya et al. 2013; Somnuek et al. 2016; Woods 2015). Due to its profitability, there are also significant emerging oil palm industries in much of tropical Africa, with Nigeria, Ghana, Ivory Coast, Cameroon, Sierra Leone, Benin, Angola, and DRC as the main producers (in that order) (Paterson 2021a). However, in most cases African oil palm crops are mainly used for local consumption, with Cameroon and Ivory Coast as the only major palm oil exporters (Corley and Tinker 2015). Nigeria is the fifth highest producer globally, with an annual 1.0 Mt, although this is dwarfed by Indonesia with 42.5 Mt and Malaysia with 18.5 Mt (Statista 2020).
In the Americas, the first oil palm plantations were established in Honduras and Costa Rica, and currently the largest industries are in Colombia and Ecuador, although Brazil is also expanding its production (Corley and Tinker 2015; Murphy 2019; Nahum et al. 2020). South and Central America are considered favourable areas for oil palm development because of their climatic potential for palm oil production. There is well over 1.5 Mha of planted oil palm in Latin America, with Brazil having the largest future potential, although currently the leading producer is Colombia with an annual 1.5 Mt. Although the environmental consequences of increasing oil palm cultivation require careful consideration (Murphy 2015; Paterson 2020a, b, 2021a), these countries could potentially increase their market share in a sustainable manner, for example by only converting land currently used for pasture or illegal coca cultivation. This will be important as land in Malaysia and Indonesia becomes less available (Paterson and Lima 2018). However, there are also important climate change constraints for a truly sustainable future industry in the Americas, Africa, and SE Asia (Paterson 2021a, b, c, d; Indexmundi 2021; Paterson et al. 2015, 2017).
A recent study shows the global distribution of smallholder and industrial plantations at high resolution (Descals et al. 2020). Smallholders account for 30 to 40% of the global area under oil palm cultivation (Hambloch 2018; Euler et al. 2016). In SE Asia there are more than three million smallholders, nearly all of whom cultivate individual family-owned and managed plots of less than 50 ha, and often as little as 1–2 ha. In Indonesia, which is the largest oil palm producing country, smallholder plots account for 40% of the total crop area, although they only produce 30% of total national output (Euler et al. 2017). However, although the larger commercial plantations tend to be more efficient in terms of oil yield and overall economics, smallholder units serve important social roles in providing income and employment to rural populations (Murphy 2015; Euler et al. 2016). Smallholder units are also more likely to supply palm oil for local consumption rather than for export. This is particularly true for parts of Indonesia and Africa, where the crops can be regarded as key elements in local food security and economic wellbeing (Krishna et al. 2017; Kubitza et al. 2018). Interestingly, there is also evidence that smallholdings can have lower environmental impacts (Lee et al. 2014) and higher biodiversity levels than commercial plantations (Razak et al. 2020).
In contrast, commercial plantations tend to be part of large ventures that are often owned by multinational companies that can extend over tens of thousands of hectares, with the largest totalling about one million ha. In terms of global trade, palm oils from commercial plantations are by far the most important contributors. In some cases, the larger plantation companies also own or control many key downstream elements in palm oil supply chains. These include mills, refineries, shipping operations and the distribution networks to processors and retailers in export destinations.
In summary, oil palm cultivation is still highly concentrated in SE Asia, but the focus of future expansion is likely to be elsewhere in the humid tropics, especially in West Africa and the northern regions of South America. The industry is therefore a hybrid of large-scale, globally focussed commercial farming and small-scale production of a cash crop, often for local consumption. As discussed below, the industry must manage the effects of environmental factors, such as climate change and increased disease incidence on cropping systems, as well as changing consumer sentiments in export destinations.
The environmental context
Oil palm is widely considered as a problematic crop. This has been mainly due to the environmental and ecological impacts of some of the land conversions to oil palm plantations over the past two decades, especially in Indonesia. In many cases these have displaced pristine tropical habitats and affected iconic wildlife species, such as orangutan (May-Tobin et al. 2012; Gaveau et al. 2014). The EU is the second largest global importer of palm-based oils and this consumer-led demand has been one of the drivers of the expansion of recent oil palm cultivation. Since 2000, increased global demand for biofuels and other non-food products (mainly from Europe), and for food (mainly from India and China), were the major factors behind the conversion of land in SE Asia to oil palm cultivation. In Indonesia the area of oil palm cultivation more than trebled from 2.5 Mha to over 8 Mha between 2000 and 2014 (Indonesia: Palm oil expansion unaffected by Forest Moratorium 2013). In some cases this has led to significant habitat loss and reductions in biodiversity as complex ecosystems are replaced with simpler species-poor plantation systems, as well as concerns about increased GHG emissions as land is converted to oil palm (Dislich et al. 2017; Meijaard et al. 2020; Carlson et al. 2012; Barcelos et al. 2015).
Several studies have examined the potential impact of land use and climate change on biodiversity in Borneo, where a great deal of oil palm planting has occurred during the past decade (Scriven et al. 2015; Gaveau et al. 2016). Recommendations from these and other studies include the need to establish nature reserves in upland areas, where climate change will be less severe, and to improve connections between reserves and plantations via wildlife corridors (Scriven et al. 2019). One of the most controversial aspects of new palm cultivation in SE Asia is the use of tropical peatland, especially in Borneo. There are several ongoing studies of the impact of peatland conversion in terms of GHG emissions, and other environmental studies have been carried out in association with the major certification scheme run by the Roundtable on Sustainable Palm Oil (RSPO). Examples include the following: Gunarso et al. 2013; Chase et al. 2012; Dalal and Shanmugam 2015; Tonks et al. 2017; Cook 2018.
As a result of such studies, the RSPO insists that, to gain certification, new plantings since November 2005 must not have replaced primary forest or any area required to maintain or enhance one or more High Conservation Values (HCV). An HCV assessment, including stakeholder consultation, must be conducted prior to any conversions, and the dates of land preparation and commencement must also be recorded. The HCV assessment process requires appropriate training and expertise, and must include consultation with local communities, particularly for identifying social HCVs. Development should actively seek to utilize previously cleared and/or degraded land. Plantation development should not put indirect pressure on forests through the use of all available agricultural land in an area. In order to improve the participation of smallholders, local certification schemes, such as the Malaysian Sustainable Palm Oil (MSPO) and Indonesian Sustainable Palm Oil (ISPO) initiatives, have been set up, although internationally traded palm oils are almost exclusively certified by the RSPO.
Several studies suggest that limited oil palm expansion might be possible on already degraded land, without the need to convert mature tropical forests (Jackson 2019; Wicke et al. 2011), and that smallholdings may have lower environmental impacts than commercial plantations (Lee et al. 2014). Despite these caveats, there is considerable pressure for governments to impose much stricter controls, and even outright bans, on the conversion of tropical peatlands and non-degraded forest to oil palm. Although there have been encouraging statements along these lines from politicians in the two major producing countries, these remain largely aspirational at present and more effective action is required.
Pests and diseases
Oil palm crops are affected by several economically important pests and fungal pathogens, of which the most serious diseases will now be considered (Corley and Tinker 2015).
Fungal diseases
Basal stem rot (BSR), caused by the fungus Ganoderma boninense (Fig. 1), is a serious disease of oil palm that can reduce yields by 50–80%. Its incidence has increased over recent decades as the pathogen spreads more rapidly from infection foci following repeated cycles of crop planting on infested sites. In Malaysia, BSR is now often reported in young plants and seedlings, whereas previously only mature oil palms were infected (Paterson 2019a, 2020c). By the time an oil palm stand is halfway through its ca. 25-year economic lifespan, BSR can kill 80% of plants. Furthermore, expansion of industrial oil palm cultivation began early in Sumatra, where G. boninense has had the longest opportunity to adapt to the crop environment. This region contains the highest levels of disease, implying an association between the duration of oil palm cultivation and higher disease pressure. BSR is found increasingly in inland peninsular Malaysia and Sabah, in some cases at high levels in places where it had not previously been detected. BSR has also been reported at high levels in oil palm grown on inland lateritic soils and peat soils, irrespective of cropping history, whereas such soils were previously disease-free. This information indicates a trend towards increasing BSR under projected climate change. However, the climate for growing oil palm is currently optimal and has been so for many decades; the increases in disease reported to date therefore reflect increased virulence of the fungus rather than increased susceptibility of oil palm due to a less suitable climate.
BSR may increase further by natural selection of more virulent strains and oil palm cannot always adapt rapidly enough to respond to changes in pathogen virulence. The BSR pathogen has the ability to infect oil palm plants at a rate of as much as 80% incidence over half of its economic life span (Corley and Tinker 2015). Ganoderma is a variable genus with poorly defined species concepts and will adapt to climate change more readily than oil palm via natural selection of more virulent strains (Mercière et al. 2017). In Indonesia, BSR is less severe in Kalimantan than in Sumatra, probably due to younger crop rotations (Suryantini and Wilandari 2018; Paterson 2019a, b, 2020d). In Thailand, national BSR incidence is relatively low with a reported rate of 1.53%, although it is more widespread in the south (Basal stem rot of oil palm 2020). In Southern Thailand, BSR incidence may be influenced by proximity to peninsular Malaysia where disease rates are also high (Pornsuriya et al. 2013). In Papua New Guinea, BSR incidence is not as high as in other areas of SE Asia, although rates of 50% have been recorded in some regions. An average of 25% infection is a plausible scenario for this country as the initial incidence is lower than in Malaysia and Indonesia. BSR incidence is probably low in Myanmar as the plantations are more recent and distances between them are large. Myanmar has a distinctly different climate to the rest of SE Asia and is less capable of growing oil palm per se (Paterson 2020b).
Paterson (2020c, d) considered BSR in Malaysia and Indonesia, respectively, including their constituent regions. Disease incidence was much higher in peninsular Malaysia than in Sarawak, and especially Sabah. Sabah may therefore be a more sustainable region from the perspective of BSR incidence. In Indonesia, Sumatra and Java had particularly high incidences compared to other areas such as Sulawesi and Papua. These scenarios indicate which regions may be suitable in terms of future sustainability of the industry. However, the environmental effects, especially from deforestation, should be of prime importance in planning new plantations.
Fusarium oxysporum f.sp. elaeidis (Foe) causes acute and chronic wilt of oil palm, particularly in Africa (Paterson 2021e). A major outbreak devastated oil palm in West and Central Africa, where the disease has a particularly high incidence (Rusli 2017). Foe in Malaysia and Indonesia is controlled by quarantine procedures, although native strains can infect oil palm in vitro. Avoiding introduction from endemic areas is essential to prevent Foe in regions where it does not normally exist. However, importation of breeding materials from Africa is required to expand genetic diversity in Malaysia and Indonesia, implying a risk from infested seed and pollen. Quarantine procedures in Malaysia and Indonesia are undertaken, although the risk of spread remains, especially because climate change may increase disease (Rusli et al. 2015).
In the Ivory Coast, 20% of palms planted from 1964 to 1967 displayed vascular wilt symptoms, with some crosses reaching 70% (Cochard et al. 2005). However, from 1976 to 1983 vascular wilt rates of < 2% were observed and, by the 1990s, it was difficult to find symptoms in plantations. These reductions were attributed to decades of selection and breeding for resistance. Rusli et al. (2015) found that Foe infection of oil palm was frequent in Ghana, with incidences of 10.4% and 8.3%, and also detected Foe in ca. 11% of symptomless palms in plantations. Rusli (2017) demonstrated that Malaysian oil palms were susceptible to infection by Foe strains from Africa.
Phytophthora palmivora is a fungus-like oomycete and a notorious pathogen of oil palm, causing severe damage in Latin American countries, such as Colombia (Corley and Tinker 2015). The disease has recently devastated > 30,000 ha in South West Colombia and > 10,000 ha in the Central Zone, and the rapid increase in the disease may be related to climate change. Acute and chronic forms are found, and it is possible that several different diseases have been described under one name. The acute forms are present in Colombia and Ecuador, with the chronic forms found in Brazil (Corley and Tinker 2015). P. palmivora disease of oil palm is unreported in Malaysia and/or Indonesia, although a similar spear rot of oil palm has been reported in Africa and Thailand, which may involve P. palmivora. Many other hosts for the oomycete exist in Malaysia and Indonesia (e.g. durian) and, in view of a recent extreme outbreak in Colombia, P. palmivora presents a potentially severe threat to Malaysian and Indonesian plantations (Paterson 2020a). More assessments of infectivity are essential given that outbreaks of P. palmivora could cause severe problems for major SE Asian oil palm industries.
Other fungi Several lesser fungal diseases also cause problems for oil palm (Corley and Tinker 2015). Bunch failure describes oil palm fruit bunches that fail to develop from anthesis to harvest, and the disease can be caused by the basidiomycete Marasmius palmivorus. Another basidiomycete, G. philippii, is closely related to G. boninense but is in fact a trunk rot pathogen of Acacia trees that is also listed as an oil palm pathogen (Corley and Tinker 2015). This species may become more frequently isolated from oil palm due to climate change. Phellinus noxius is a basidiomycete that is partially responsible for upper stem rot of oil palm, occurring together with G. boninense in some cases. Haematonectria haematococca has been implicated in spear rot of oil palm in vitro. Dry basal rot of oil palm is caused by the ascomycete Ceratocystis paradoxa (anamorph = Thielaviopsis paradoxa), which has also been implicated in oil palm fatal yellowing in, for example, Colombia. Cercospora elaeidis is widespread throughout Africa and causes Cercospora leaf spot. It is infrequent in Asia and is primarily a disease of nursery seedlings that is frequently carried forward to plantations, where it can survive for a long time (Corley and Tinker 2015). Glomerella cingulata is responsible for anthracnose disease in oil palm, although it is not currently severe. All of these are diseases of oil palm, and it will be important to assess how they are affected by climate change in future studies.
Pests
In general, pest species of oil palm do not have as much impact on the crop as diseases, with the possible exception of the rhinoceros beetle, Oryctes rhinoceros, which emerged as the major pest of oil palm in SE Asia in the 1980s. Although chemical insecticides can be effective, they are expensive, they can affect beneficial insects, and the target organisms may develop resistance. This has led to the development of biocontrol strategies, the most effective of which is the deployment of two pathogens of the beetle, namely the entomopathogenic fungus Metarhizium anisopliae and the Oryctes virus (Ramle et al. 2005). Both pathogens are specific to rhinoceros beetles and as such will not affect other insects. The Oryctes virus appears to be endemic in the beetle population, and deliberate augmentation can increase infection levels to > 75%. Metarhizium fungal spores can be applied to areas of infestation as a spray that is highly effective at controlling, but not totally eradicating, the beetles. The combined use of these and other natural pathogens of the rhinoceros beetle has the potential to reduce its harmful impact on the crop, while also minimizing risks of resistance development.
With the projected increase in oil palm replanting over the coming years, it will be important to consider the wider release of such biocontrol agents into areas where rhinoceros beetle incidence is particularly high. These and other forms of integrated pest management are being investigated as primary options in plantations across SE Asia (Ramle et al. 2005; Kalidas 2013). The rapid expansion of high intensity commercial plantations in new regions such as West Africa and South/Central America, plus climatic changes, are likely to result in the emergence of new pests and pathogens. Therefore, it will be important for the public sector and industry to work together in developing improved methods of surveillance and early detection of such threats (Kalidas 2013; Caudwell and Orrell 1997).
Impacts of climate change
The negative impacts and significance of climate change are well documented in the scientific literature and are now broadly accepted by most of the general public. Climate change threatens the sustainability of crop production via factors such as temperature, rainfall and disease patterns (Rosenzweig et al. 2008). However, the likely effects on tropical crops remain less well known, especially in SE Asia, Africa and Latin America (Ghini et al. 2011; Feeley et al. 2017; Sarkar et al. 2020), although recent research has started to address the situation for oil palm (Paterson 2019a, b, 2020b, c, 2021a, b; Paterson and Lima 2018; Paterson et al. 2015, 2017; Sarkar et al. 2020; Shabani et al. 2012), as discussed below. Climate change effects on natural systems require prediction in order to mitigate consequential changes in diversity and ecosystem function (Feeley et al. 2017). Mapping of plant disease distributions can inform biosecurity planning by specifying areas that qualify for eradication or containment. The CLIMEX model has been developed to project current and future species distributions; such knowledge of climate change effects on species distributions is essential for mitigating negative impacts (Lenoir and Svenning 2015).
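Although the CLIMEX algorithm itself is considerably more elaborate, the minimal sketch below illustrates the general idea behind such climate-suitability modelling: simple temperature and moisture responses combined into a single 0–1 index. All thresholds and the combination rule are illustrative assumptions; they do not reproduce CLIMEX or the parameters used in the cited studies.

```python
# A greatly simplified, illustrative suitability index in the spirit of
# CLIMEX-style modelling. All thresholds below are assumptions for
# illustration only, not published parameter values.

def toy_suitability(mean_temp_c: float, annual_rain_mm: float) -> float:
    """Return a 0-1 climate suitability score from mean temperature and rainfall."""
    # Temperature response: assumed optimum around 24-28 C, zero below 18 or above 36.
    if 24 <= mean_temp_c <= 28:
        t_score = 1.0
    elif 18 < mean_temp_c < 24:
        t_score = (mean_temp_c - 18) / 6
    elif 28 < mean_temp_c < 36:
        t_score = (36 - mean_temp_c) / 8
    else:
        t_score = 0.0

    # Moisture response: assumed adequate above ~2000 mm/year, declining below.
    r_score = min(annual_rain_mm / 2000, 1.0)

    # Combine multiplicatively, as stress-type indices commonly are.
    return t_score * r_score

# Example: a warm, wet lowland site vs a hotter, drier future scenario.
print(toy_suitability(26.5, 2400))  # ~1.0, highly suitable
print(toy_suitability(30.0, 1500))  # reduced suitability under warming and drying
```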
Effects of oil palm cultivation on climate change
Koh and Wilcove suggested that oil palm expansion occurs at the expense of forests acting as carbon sinks (Koh and Wilcove 2008). Dislich et al. (2017) determined that 11 of 14 ecosystem functions decreased in level of function following the introduction of oil palm plantations. Fitzherbert et al. (2008) determined that oil palm plantations support many fewer species than forests and some other tree crops. Habitat fragmentation and increased pollution can further increase GHG emissions. The detrimental aspects of increasing numbers of oil palm plantations have been discussed in terms of deforestation and haze production from burning peat soil to clear ground for new plantations (Tonks et al. 2017; Cook 2018; Veloo et al. 2015). These processes release GHGs that contribute to climate change (Dislich et al. 2017).
The conversion of tropical rainforests into oil palm plantations is the primary environmental impact of the industry (Paterson and Lima 2018). Forested areas are used for the expansion of plantations, where the emissions from conversion exceed the potential carbon fixing of oil palm (Paterson et al. 2015, 2017). Oil palm production involving deforestation releases 6–17% of global anthropogenic CO2 emissions (Wich et al. 2012). The highest carbon-emitting countries from forest cover loss are Brazil, Indonesia and Malaysia, with values of 340, 105 and 41 teragrams (Tg) C/year, respectively. Indonesia and Malaysia account for high C emissions from deforestation as they are the first and second highest producers of oil palm. Substantial palm oil production is also undertaken in Colombia and Nigeria (Paterson et al. 2017). Emissions from oil palm cultivation in Indonesia accounted for ca. 2–9% of all tropical land use emissions from 2000 to 2010 (Carlson et al. 2018), and deforestation accounted for about 30% of global warming-related pollution emissions in 2009, with Indonesia as the world’s seventh-largest producer of such emissions. Plantation expansion in Kalimantan, Indonesia contributed 18–22% of the country’s CO2 emissions in 2020.
Large reductions in greenhouse gas and climate regulation functions occur due to the conversion of forest to oil palm plantations (Dislich et al. 2017). More GHGs and volatile organic compounds (VOCs), which are precursors of tropospheric ozone, are produced by oil palm plantations than by the forests they replace. The GHGs emitted from land-clearing fires and from land and plantation establishment are significantly greater than the carbon sequestered by oil palm. VOC, GHG and aerosol particle emissions during fire periods result in direct and indirect changes in solar irradiation, while undisturbed forests provide lower air and soil temperatures and higher air humidity microclimates compared to plantations (Dislich et al. 2017).
Indonesia has greatly increased its area of oil palm plantations, drastically reducing primary forest. Sumatra has the highest primary rainforest cover loss in the country. Forest cover in Riau and Jambi declined from 93 to 38% between 1977 and 2009, changing microclimatic conditions through the loss of natural forest regulation. Warming of the land surface and increases in air temperature result from oil palm expansion, as observed in Sumatra (Paterson and Lima 2018). Oil palm foliage cover is lower, more open, and simpler than tropical rainforest foliage cover. Warming occurs from reduced evaporative cooling, and the warming induced by land cover change (LCC) exceeded the global warming effect.
The predominant GHG emitted from oil palm plantations is CO2, whereas nitrous oxide and methane are released at lower concentrations, although they have a greater effect per molecule. Large releases of CO2 occur from land-clearing fires, particularly on peat. Fires also indirectly increase emissions by increasing peat decomposition. Drainage of peat soil to establish plantations releases large quantities of CO2 through oxidation and decomposition: dissolved organic matter is removed from peat soils during drainage, which then decomposes and releases additional CO2.
The very high fruit production of oil palm gives it a greater rate of CO2 assimilation and biomass production than forests, and this is often used, erroneously, as an argument in favour of oil palm. This higher rate of C uptake does not compensate for the carbon released when forests are cleared, because standing forests hold more biomass than oil palm plantations unless very long timescales of hundreds of years are considered, well beyond the maximum time frame of ca. 80 years used by Paterson et al. (2015, 2017) when assessing the likely effect of climate change on suitable climate for oil palm growth.
Fires also add black carbon (soot), which increases global warming, and oil palm plantations release more N2O into the atmosphere than forests, mainly from fertilizer use. Peatland deforestation for oil palm cultivation in West Kalimantan, Indonesia, has greatly increased GHG emissions (Paterson and Lima 2018). Overall, the biological and managerial tools to surmount many of these challenges exist but need much better support (Murphy 2014), as discussed below. In addition, large-scale conversion of tropical forest to oil palm plantations has detrimental effects on biodiversity.
Effects of climate change on oil palm cultivation
In terms of general effects, climate change is likely to affect the sustainable production of palm oil as climatic suitability decreases, with concomitant increases in economic and social problems in producing regions. Poleward movements in the climate-related ranges of particular plants are by far the most frequently reported responses, including limited reports of poleward change in suitable climates for oil palm growth (Paterson et al. 2017; Fei et al. 2017). How species may react under climate change has been reported, including the detrimental effect of future climates on the suitability for oil palm growth in a global setting (Paterson et al. 2017). Furthermore, oil palm production contributes to climate change, as discussed above, and this will in turn detrimentally affect the ability to grow oil palm and alter its distribution (Paterson and Lima 2018). Oil palm is currently grown in optimal climatic conditions and has been for many decades (Corley and Tinker 2015).
Climate suitability data for oil palm have been used to create schemes for estimating its mortality, by postulating that large proportions of unsuitable and marginal climate in particular were likely to cause high levels of mortality, whereas reductions in highly suitable and/or suitable climate per se would not have a significant effect on oil palm mortality. Simulation modelling to determine suitable climate scenarios for growing oil palm (Paterson et al. 2017; Paterson 2019a) was employed to estimate how changes in climate suitability for oil palm growth would alter the estimated mortality rate from unsuitable climatic conditions. Predicted percentage oil palm mortality was determined in (a) SE Asia (Paterson 2020b) and (b) Latin America, and extrapolated to Malaysia and Indonesia (Paterson 2020c) (Fig. 2a, b). These percentages represent large numbers of oil palms in Malaysia, Indonesia, Thailand and Papua New Guinea because of the large numbers grown in these countries. Information on oil palm mortality is also provided for some African countries in Paterson (2021e).
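The sketch below is an illustrative rendering of the postulate described above, in which mortality is driven mainly by the fractions of unsuitable and marginal climate. The weights are assumptions for illustration only and are not the published procedure or parameter values of the Paterson studies.

```python
# Illustrative sketch of the stated postulate: mortality is driven mainly by
# the fractions of unsuitable and marginal climate, while shifts between
# 'suitable' and 'highly suitable' matter little. The weights below are
# assumptions for illustration, NOT the published modelling parameters.

def estimated_mortality(frac_unsuitable: float, frac_marginal: float) -> float:
    """Return an estimated mortality fraction for a region."""
    W_UNSUITABLE = 0.9  # assumed: most palms in unsuitable climate are lost
    W_MARGINAL = 0.3    # assumed: a minority of palms in marginal climate are lost
    return W_UNSUITABLE * frac_unsuitable + W_MARGINAL * frac_marginal

# Example: a region projected to become 20% unsuitable and 30% marginal by 2100.
print(f"{estimated_mortality(0.20, 0.30):.0%} estimated mortality")  # prints 27%
```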
The detrimental effects of future climate change on oil palm cultivation globally, and on oil palm mortality in Kalimantan, Indonesia and some other SE Asian countries, were determined (Paterson 2020b, c; Paterson et al. 2017), which provided information relevant to Malaysia (see Table 3). High oil palm mortalities were predicted for Thailand and Myanmar, and low mortalities for Kalimantan and the Philippines, while Papua New Guinea was intermediate (Harris et al. 2013; Gunarso et al. 2013). Modelling of oil palm mortality for three South American countries, Malaysia and Indonesia was performed (Paterson 2020c) using similar methods to Paterson (2020b). The Latin American countries, particularly Brazil, were assessed to have high future mortalities, whereas the figures for Malaysia and Indonesia were much lower. These potential effects on mortality will have detrimental consequences for future abilities to meet the demand for palm oil. High levels of future mortality from unsuitable climate were determined for Peninsular Malaysia but not for Sabah or Sarawak (Paterson 2020c). In Indonesia, regions such as Sulawesi and Papua had low levels of mortality, in contrast to Sumatra and Java where high mortality was determined (Paterson 2020d). A study of predicted oil palm mortalities in South America found that by 2050, low mortalities are predicted in (a) the East Coast from Brazil to Suriname, (b) more centrally in Paraguay and (c) Colombia, Peru and Ecuador in the west. High mortalities were determined for Guyana, Bolivia, Western Brazil and Venezuela (Paterson 2021a). By 2100, much higher mortalities were determined for all countries except Paraguay, which appeared virtually immune to the effects of future climate. Very high mortality of oil palm was determined for Ghana and Nigeria in Africa, especially by 2100 (Paterson 2021e), whereas Cameroon had low levels.
Table 3 Predicted oil palm mortalities (%) with climate change in various South American and SE Asian countries, plus the Kalimantan province of Indonesia.
African oil palms are likely to be more badly affected by climate change from increased GHGs, although there appears to be a low extinction risk in the immediate future. Furthermore, losses of oil palm habitats such as tropical rain forests are exacerbating the pressures on oil palm populations: their ecosystem functions and services will be highly sensitive to climate change. Blach-Overgaard et al. (2015) predicted climate suitability losses across almost all regions where palms occur in Africa, and CLIMEX modelling indicated that Africa will have less suitable climatic conditions for oil palm cultivation (Paterson et al. 2017). However, sharp longitudinal trends towards potential refuges from west to east Africa were found, which could allow oil palm to survive naturally, or through the creation of new plantations towards the east of the continent, with, of course, environmental concerns being paramount (Paterson 2021a). Using similar methods, a phased increase in suitable climate was predicted, which implied more unsuitable climate for growing oil palm towards the centre of the South American continent (Paterson 2021b). Increasing longitudinal trends in suitable climate for growing oil palm in SE Asia, from west to east, were observed from the current time to 2050 and 2100 (Paterson 2021c). Paterson (2021d) developed an improved model for determining suitable climate for growing oil palm in Africa, which confirmed the west to east trend and could be employed in other regions such as South America and SE Asia.
A significant negative relationship was found between annual average temperature, sea level rise and oil palm production in Malaysia, with temperature rises of 1 to 4 °C potentially causing oil palm production to decrease by 10 to 41% (Sarkar et al. 2020). Future changes to suitable climates for growing oil palm worldwide were considered using modelling based on temperature, soil moisture and wet stress data (Paterson et al. 2017). The general predictions were for a reduced level of suitable climatic regions by 2050 and further reductions by 2100. The projections indicate serious consequences for the oil palm industry generally. In Africa, the climate is predicted to become less suitable for growing oil palm at the same rate as, or faster than, in Malaysia and Indonesia, with the exception of Uganda, where increases in climatic suitability were predicted. Paraguay appears to gain suitability in climate for growing oil palm in South America, whereas Venezuela will have a particularly low level of suitable climate. French Guiana, Suriname and Guyana appear to maintain suitable climates, and large losses were determined in west Brazil by 2100. The western countries of Colombia, Peru and Ecuador will suffer severe losses of suitable climate. Furthermore, there was a three-phase trend in suitable climate rather than a single direct longitudinal change (Paterson 2021b). Vietnam, the Philippines, Papua New Guinea (PNG) and island Malaysia had increased suitable climate by 2050 in SE Asia. Large decreases in suitable climate by 2050 were observed for Thailand, Laos and Cambodia, which are towards the west of SE Asia (Paterson 2021c).
Climate has an important role in defining the range limits of oil palm distribution by exerting eco-physiological constraints (Paterson et al. 2017). However, factors such as soil properties and biotic interactions may prevent plants from colonizing sites that are otherwise suitable. Studies such as Paterson et al. (2017) are unusual in that a wider range of climatic conditions is considered than temperature alone (Paterson 2021a). Changes in climate will have broad-scale impacts on the distribution of oil palm. Alterations in cold, heat and dry stresses were largely responsible for the changes in climatic suitability for oil palm cultivation, while wet stress was unimportant, hence extending the range of parameters beyond temperature alone (Paterson et al. 2017). Apart from temperature (Feeley et al. 2017) and diseases, a wide range of factors still awaits consideration, although studies of the effects on crop production have been reported (Lobell et al. 2006).
One of the most important future threats is the emergence of new pests and diseases and/or the movement of existing diseases from one part of the world to another. The transfer of existing biotic threats could occur due to climatic factors, but another mechanism is movement via trade, travel or other human agency, whereby potential pathogens might elude current biosecurity measures. For example, P. palmivora has had a serious impact on plantations in South America, and if this pathogen were to reach the central growing regions of SE Asia, its impact could be devastating (Paterson 2020a; Mohamed Azni et al. 2019). Although biosecurity measures are already in place, these tend to be focussed on known threats and may not be able to cover all of the many potential entry routes for a new pathogen. A similar situation exists for Fusarium wilt disease of oil palm (Paterson 2021e), with African countries suffering most from this particular disease.
An example of an unexpected new form of pathogen of oil palm is the orange-spotting coconut cadang-cadang viroid variant (OSCCC-Vd) (Coconut cadang-cadang viroid (cadang cadang disease) 2020). Viroids were only discovered in the 1970s and are the smallest and simplest known type of infectious pathogen, consisting of just one small, naked, circular single-strand of RNA that does not encode any proteins. The origin of viroids is unknown with some suggestions that they might date from an ancient non-cellular ‘RNA world’, although a more parsimonious hypothesis is that they have arisen de novo on multiple occasions as plant-specific pathogens (Catalán et al. 2019). OSCCC-Vd normally infects coconut plantations and is endemic in the Philippines, but early in 2011 a putative variant was found in oil palm plantations in Sabah, triggering a ban on the movement of oil palm materials to other parts of Malaysia. Although the threat of OSCCC-Vd eventually receded in the 2010s, this episode exposed problems in the surveillance mechanisms and phytosanitary procedures in the face of the sudden appearance of a hitherto unknown pathogen.
The effects of climate change on oil palm diseases caused by fungi and by the oomycete P. palmivora have been discussed above and indicate a trend for increased incidence of BSR, Fusarium wilt and P. palmivora with climate change. Modelling of the effect of changes in climate on the infection levels of BSR in Sumatra, Indonesia, including quantitative BSR data, indicated that BSR would become even more serious after 2050 (Paterson 2019b). Weather is a major factor in crop pathogenesis and, when crops suffer cold, heat or desiccation stress, they may be more susceptible. Mountainous areas were considered in this assessment, which affected some results considerably. For example, hilly regions in North Sumatra did not provide a suitable climate for oil palm.
A similar ‘Agriculture 4.0’ methodology of big data and simulation modelling was used to produce a scheme of how BSR might advance under future climates in Malaysia (Paterson 2019b). The assessments of BSR were merely qualitative and indicated, nevertheless, that the levels of infection would also increase a great deal after 2050. Paterson (2020b) considered future climate effects on BSR in Kalimantan and alternative countries in SE Asia. Kalimantan and the Philippines were assessed as sustainable, but Thailand and Myanmar were unsustainable, while Papua New Guinea was intermediate in sustainability (Fig. 3). P. palmivora is prevalent in South America and Paterson (2020c) extended the principles described above to the disease. Colombia and Ecuador were highly susceptible, while Brazil was less so. However, a severe threat to Malaysia and Indonesia was assessed, which would require increased future vigilance to control the disease. Paterson (2021e) indicated an equivalent situation for Fusarium wilt of oil palm focusing on African countries extrapolated to Malaysia and Indonesia.
Fig. 3
Basal stem rot in three SE Asian countries. The incidence of disease was determined from the changes in suitable climate for growing oil palm as described in Paterson (2020b)
Amelioration of climate change effects on oil palm and vice versa
Procedures for ameliorating the effects of climate change on oil palm, and the effects of oil palm on climate change, have been discussed, based partially on CLIMEX models (Paterson and Lima 2018). The situation for the oil palm industry cannot be business as usual in light of the effects of climate change on oil palm and vice versa. A series of procedures has been devised to address how the industry might mitigate these problems (Paterson and Lima 2018). Many of these measures will help to maintain the biodiversity normally associated with forests because they will stop the plantation being a monoculture. The soil microfauna will also likely increase as a result of these measures.
Reducing the effects of oil palm on climate change
Plantation management measures can prevent or reduce losses of some ecosystem functions, which will reduce climate change. These include (a) avoiding illegal land clearing by fire, (b) avoiding draining of peat, and (c) using cover crops, mulch, and compost (Dislich et al. 2017). Reducing GHGs by limiting oil palm expansion to areas with moderate or low carbon stocks is most effective. This involves ceasing development of plantations on peatland and enforcing the moratorium on new concessions in primary forests. In addition, rehabilitation and restoration of converted peatlands are an option. Limiting the problems of flooding may prevent increased CH4 emissions on mineral soils. Reducing unnecessary expansion of plantations and ensuring existing ones are managed optimally are crucial. Mechanisms such as (a) reduced emissions from deforestation and forest degradation, plus conservation, sustainable management of forests, and enhancement of forest carbon stocks (REDD+), (b) national greenhouse gas accounting, and (c) accurate emission factors for C dynamics are essential (Comeau et al. 2016). Considerable funding has been obtained for the REDD+ scheme. REDD+ proposals include growing oil palm on reclaimed soil and replacing the use of fertilizer with other methods. A few plantations are replacing grassland or scrub, where the average C content of the plantation will exceed that of the previous vegetation and so become a greater C sink.
Controlling disease may assist in decreasing the unwanted expansion of plantations, as yields in current plantations will increase when disease is reduced, as described for Ganoderma rots of oil palm (see below). The current awareness of environmental issues makes optimizing current plantations by reducing disease imperative in any case. Reducing nitrogen fertilizer is an effective way to decrease nitrogen-based emissions (Dislich et al. 2017), as oil palm plantations release large quantities of nitrous oxide (N2O) into the atmosphere linked to nitrogen (N) fertilizer use. An option for oil palm planting, without threatening tropical rain forests, is the rehabilitation of anthropogenic grassland that was created by human clearance of natural forest many centuries ago. For example, there are extensive areas of anthropogenic grassland in Indonesia where much of the spread of oil palm plantations will take place (Paterson and Lima 2018).
Reducing effects of climate change on oil palm
Evidence is growing of the existing and likely future impacts of anthropogenic climatic changes on the oil palm industry. Immediate priorities should include further research to understand climatic effects on oil palm in the many regions of the tropics where the crop is now grown, and to begin the implementation of mitigation strategies to minimize adverse effects. Most climatic threats identified to date involve periods of elevated temperature and reduced rainfall, both of which cause stresses that impact on overall crop performance, and in particular oil yield. Increasingly well documented impacts of climatic cycles such as El Niño and La Niña have underlined the crucial role of climate for oil palm performance and oil yield (Rahutomo 2016; USDA 2016).
Strategies are required to minimize the adverse effects of climate change on oil palm cultivation: it cannot be business as usual for the industry. These practices may also decrease climate change through reduced deforestation if the yields of existing oil palm are optimized to cope with climate change. More dispersed cultivation outside the main producing countries could ameliorate threats from climate change, as a wider range of climates would be encountered, some of which may be more suitable for oil palm. The expansion into West Africa and South/Central America already underway was intended to create a more secure production system in the longer term. However, Latin America and Africa may be even more affected than SE Asia by climate change in terms of suitable climate for growing oil palm, implying that such expansion will be unlikely. Even within these continents there are trends that will be useful for plantation managers (Paterson 2021a, b).
Cultivation at higher altitudes and/or lower and higher latitudes may be possible beyond the lowland tropics as climate change progresses. An increase in highly suitable climate for growing oil palm by 2030 in Indonesia and Malaysia, largely in mountainous regions of Sumatra, Sarawak, Borneo, and Sulawesi, was reported (Paterson et al. 2015). There may be novel areas for oil palm development even under climate change, although in general the climate suitability per se will be reduced. A caveat is the potential loss of biodiversity and ecological function if novel areas are converted from, for example, forest. The use of cover crops to reduce climate effects on oil palm may be possible and increases biodiversity. The sustainability of oil palm production will depend in part on using cover crops, especially under suboptimal conditions. Leguminous cover crops are grown to (a) coexist with oil palm following jungle clearing and planting/replanting, (b) provide complete cover to an otherwise bare soil, and (c) protect from erosion. They also perform multiple functions such as reducing soil water evaporation, reducing runoff losses, improving and maintaining soil fertility, and recycling of nutrients (Samedani et al. 2015). They promise reduced environmental pollution and improved crop yields. Legumes may reduce C and N losses from oil palm systems and increase soil C sequestration. Some examples for oil palm are as follows: pigeon pea (Cajanus cajan), calopo (Calopogonium mucunoides Desv.), butterfly pea (Clitoria ternatea), white tephrosia (Tephrosia candida), and Brazilian stylo (Stylosanthes guianensis var. guianensis), some of which are already in use in SE Asia (Paterson and Lima 2018). The biodiversity of the plantation will be increased per se as each plant species is introduced, and by the increase in nitrogen-fixing bacteria associated with the legumes.
Developing new oil palm varieties resistant to climate change is another possibility (Rival 2017), although this may not be easily achieved. Breeding oil palm for climate change requires multidisciplinary and collaborative research at a high level (see next section). Selecting for complete resistance, rather than tolerance, to diseases leads to high selection pressures for new variants of the pest/pathogen that can overcome the resistance in the crop. Oil palm cultivars resistant to climate change, or to environmental stress, may overcome the less favourable growth conditions imposed by climate change (Tang 2019). However, it is impossible to know accurately what climatic changes resistant cultivars will need to withstand; for example, a cultivar resistant to desiccation stress may be sensitive to high temperature. High fertilizer use causes increased emissions of GHGs from manufacturing, transportation, and application, and improvements will be required in oil palm nutrient uptake efficiency by breeding for suitable root systems.
Methods that ameliorate the effect of (a) climate change on oil palm and (b) oil palm cultivation on climate change include the following: Optimizing the rhizosphere by adding arbuscular mycorrhizal fungi (AMF) will also assist in reducing climate change with generalized benefits to oil palm growth, by reducing the need for fertilizer for example. Arbuscular mycorrhizal (AM) symbioses have beneficial effects on water transport to assist in overcoming drought conditions, of relevance particularly to ameliorating climate effects. Reducing fertilizer production and use will cause decreased emissions that lead to climate change, and the use of AM could ameliorate the effects on oil palm. AM and AMF addition will increase biodiversity within plantations (Paterson and Lima 2018).
“Slash-and-char” as an alternative to “slash-and-burn” of forests cleared for oil palm may be beneficial and feasible. Slash-and-char, normally employed for forest residues, effectively produces charcoal that sequesters CO2. This could be used more extensively to improve agriculture in the humid tropics, enhancing local livelihoods and food security, while sequestering various forms of carbon (C) to mitigate climate change. Biochar soil management systems can deliver tradable C emissions reductions as the C sequestered is accountable and verifiable. The fraction of the maximum sustainable technical potential that is realized will depend on socioeconomic factors, including the extent of government incentives and the emphasis placed on energy production relative to climate change mitigation (Paterson and Lima 2018). Reduced tillage is another possibility for affecting climate change, where combining reduced tillage with AMF provides optimal conditions for oil palm. Combining low tillage with AMF assists nutrient uptake, water relations, and protection against pathogens and toxic stress, hence potentially ameliorating the effect of climate change on oil palm growth. In addition, low tillage will decrease the emission of GHGs from oil palm plantations (Paterson and Lima 2018).
An important tool used by policymakers to assess the impacts of a particular cropping system is life cycle assessment (LCA) (Schmidt 2015; Yee et al. 2009). This method seeks to estimate the impact of all aspects of the production process from planting seed, growing, harvesting and processing the crop (including fuel and labour costs); application of inputs such as water, fertilizer, herbicides, and pesticides; shipping of the oil overseas and downstream conversion into products such as foods and oleochemicals; transport to wholesalers, retailers, and consumers; and finally, disposal of products at the end of their lifetimes. Unfortunately, very few published studies cover the entire system ‘from cradle to grave’.
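As a minimal sketch of what such a cradle-to-grave aggregation involves, the example below sums per-stage emissions for one tonne of delivered oil. The stage names and all numerical values are placeholders for illustration, not results from any published palm oil LCA.

```python
# Minimal sketch of cradle-to-grave LCA-style aggregation: sum the emissions of
# each life-cycle stage per tonne of refined oil delivered. Stage names and all
# numbers are placeholders, not values from the cited LCA studies.

stage_emissions_kg_co2e_per_t_oil = {
    "land preparation and planting": 300.0,
    "cultivation inputs (fertilizer, pesticides, fuel)": 600.0,
    "milling and refining": 250.0,
    "shipping to export market": 150.0,
    "downstream processing and distribution": 200.0,
    "end-of-life disposal": 100.0,
}

total = sum(stage_emissions_kg_co2e_per_t_oil.values())
print(f"Total footprint: {total:.0f} kg CO2e per tonne of oil")
for stage, kg in stage_emissions_kg_co2e_per_t_oil.items():
    print(f"  {stage}: {kg:.0f} kg CO2e ({kg / total:.0%})")
```

In a full assessment each stage value would itself be built up from measured inputs and emission factors, and land use change (the dominant term for plantations on converted forest or peat) would be added explicitly, which is precisely why so few studies manage to cover the entire system.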
The most effective manner of addressing climate change is to adhere to the policies devised at the 2019 COP25 climate meeting by reducing GHGs and future temperature rises. Conservation scientists, managers and environmental policymakers need to adapt their guidelines and policies to mitigate the impact of climate change (Brooke 2008). The new recommendations from the COP26 meeting in Glasgow, Scotland, in 2021 should be implemented as a matter of urgency as the most effective procedures for controlling climate change and, consequently, the effects of climate change on oil palm. Importantly, palm oil producers should also collaborate more effectively to help shape future policies on climate change and oil palm.
Breeding and biotechnology to improve oil palm as a crop
Recently, there have been several significant advances in breeding and biotechnology use for oil palm improvement. This is despite the challenges posed by the long-lived perennial nature of oil palms, which are large plants typically grown commercially for > 25 years. Hence, such biological strategies are much more complex and lengthier to implement compared to the smaller, faster growing annual crops. Breeding efforts have tended to focus on major economic traits such as oil yield and composition, pest and disease resistance, and plant architecture. Until relatively recently, oil palm breeding was also disadvantaged by the restricted genetic pool of commercial varieties, most of which were derived from small numbers of plants imported from Africa to SE Asia in the nineteenth and twentieth centuries. The available gene pool has now been greatly expanded, largely thanks to a series of germplasm collection expeditions to Africa and South America by pioneering breeders such as Rajanaidu et al. (2014). This has now allowed for genome-wide association studies (GWAS) of key traits such as oil yield and fatty acid composition (Ithnin et al. 2020) in the case of American oil palm and an Elaeis oleifera × Elaeis guineensis hybrid (Osorio-Guarín et al. 2019). Recent breeding-related reviews include genomics, genomic selection (Nyouma et al. 2019), transgenics (Costa et al. 2012), genome editing (Yarra et al. 2020), and marker-assisted selection (Ting et al. 2018; Babu et al. 2019). Following the publication of the oil palm genome sequence in 2013 (Singh et al. 2013), several detailed linkage maps have now become available for the use of breeders (Ong et al. 2019; Herrero et al. 2020).
Genomics-based strategies such as marker-assisted selection are already generating several useful advances for a variety of important traits that include oil yield, fatty acid composition and crop morphology (Xia et al. 2019; A quantum leap with genome select 2020). One of the most exciting recent developments was the announcement in mid-2020 of new breeding lines that are capable of more than doubling the current average oil yield (A quantum leap with genome select 2020). These plants are part of a genomics-based programme called ‘Genome Select’ carried out by the plantation company Sime Darby, with a claimed 9.9 t/ha average yield over 5 years in field trials under optimum conditions. Given that current average palm oil yields are less than 4 t/ha, and that soybean and rapeseed only yield 0.3 and 1.2 t/ha respectively, this could be a game changer for the industry if two important conditions are met. Firstly, the experimental lines will need to be assessed for their oil yield performance under commercial plantation conditions in a range of geographic regions and, if necessary, crossed with locally adapted varieties. Secondly, the new higher-yielding varieties need to be part of an ambitious replanting programme that will potentially replace a significant proportion of the estimated 2.5 billion oil palms that are currently under cultivation worldwide.
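The significance of these yield figures is easiest to appreciate when they are converted into land requirements. The sketch below uses only the per-hectare yields quoted in this paragraph; the 10 Mt output target is an arbitrary round number chosen purely for illustration.

```python
# Land needed to produce a fixed amount of vegetable oil at the yields
# quoted above. The 10 Mt target is an arbitrary illustrative figure.
yields_t_per_ha = {
    "oil palm (Genome Select trials)": 9.9,
    "oil palm (current average)":      4.0,   # "less than 4 t/ha" in the text
    "rapeseed":                        1.2,
    "soybean":                         0.3,
}

target_mt = 10.0  # million tonnes of oil
for crop, y in yields_t_per_ha.items():
    land_mha = target_mt / y              # Mt / (t/ha) = million hectares
    print(f"{crop:<33} {land_mha:5.1f} Mha to produce {target_mt:.0f} Mt of oil")
```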
In terms of molecular genetic approaches to the mechanism and control of BSR, G. boninense genome and transcriptome data are now available, with two G. boninense genome assemblies deposited at NCBI; Wong et al. (2019) provide a table listing publicly available genome and transcriptome data associated with G. boninense and the G. boninense-oil palm pathosystem. High-throughput next-generation sequencing and improved bioinformatics analyses have greatly facilitated the identification of candidate pathogenesis and housekeeping genes in G. boninense. However, G. boninense remains poorly studied with respect to system-level gene function and biotechnological manipulation, with no gene co-expression network models available. Most studies have focused on host transcriptome data, whilst similar studies on the pathogen remain scarce. Ho et al. (2016) utilised mass RNA sequencing and de novo assembly of RNA-seq data and were able to detect a high number of Ganoderma transcripts involved in lignin metabolism, such as manganese peroxidase and laccases. It is encouraging that, very recently, in silico mapping within an oil palm breeding program has revealed several QTL associated with genetic resistance to G. boninense (Daval et al. 2021).
Publication of the transcriptome of G. boninense at the monokaryon, mating junction and dikaryon stages (Ho et al. 2016; Daval et al. 2021; Govender et al. 2020) will be useful for investigating the mating process of this fungus. However, annotation and functional studies of the genes differentially expressed at these different stages have not yet been carried out. RNAi as a tool for functional genomics to study developmental or virulence genes is lacking, although the genome has been sequenced, and there is the promise of new approaches to molecular breeding using genome editing technologies such as CRISPR-Cas9. The role of genes involved in the ergosterol biosynthetic pathway of G. boninense is currently being investigated using RNAi-mediated gene silencing (Govender et al. 2020). The identification and verification of candidate genes are crucial for the application of these targets in RNAi-based crop protection, such as host-induced gene silencing (HIGS) or spray-induced gene silencing (SIGS). In addition, a study on the potential application of RNA silencing targeting DCL genes of G. boninense to confer protection against basal stem rot is in progress (Govender et al. 2020). The availability of G. boninense genome data in a public database (NCBI) enables potential candidate genes to be identified for testing and allows efficient silencing constructs to be designed that avoid off-target transcripts, whilst the availability of the oil palm genome data helps to ensure that the silencing constructs do not target and negatively affect the host. Because G. boninense attacks oil palm by degrading lignin (Fig. 1), there is the possibility of modifying lignin to make oil palm plants more resistant. Alternatively, making the plant more resistant to initial fungal colonization by first inhibiting carbohydrate metabolism is arguably a more logical approach that overrides the emphasis on lignin per se (Govender et al. 2020).
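The construct-design step described here can be illustrated with a deliberately naive sketch: a candidate silencing trigger taken from a pathogen gene is scanned against host transcript sequences, and any window that matches it too closely is flagged as a potential off-target. The sequences and the mismatch threshold below are invented for illustration only; real designs would screen candidates against the full published G. boninense and oil palm genome and transcriptome data using dedicated RNAi design tools rather than this simple string comparison.

```python
# Naive off-target screen for an RNAi silencing construct: reject a
# candidate trigger if any window of a host transcript matches it with
# fewer than `min_mismatches` differences. Toy sequences, not real
# G. boninense or Elaeis guineensis data.

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def off_target_hits(candidate, host_transcripts, min_mismatches=4):
    """Return names of host transcripts containing a window too similar to the candidate."""
    k = len(candidate)
    hits = []
    for name, seq in host_transcripts.items():
        for i in range(len(seq) - k + 1):
            if hamming(candidate, seq[i:i + k]) < min_mismatches:
                hits.append(name)
                break
    return hits

candidate_sirna = "ATGGCTTACGTTGACCGTAAC"  # invented 21-mer from a pathogen gene
host = {
    "EgTranscript_A": "CCGTTAGGATGGCTTACGTTGACCGTAACTTGAC",  # contains a near-identical window
    "EgTranscript_B": "TTTTGCACCAGTCCGATCGGATCCTAGCAAGTGA",
}

print(off_target_hits(candidate_sirna, host))  # ['EgTranscript_A'] -> redesign the trigger
```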
Global supply chains and consumer sentiment
Palm oils are globally traded commodities with lengthy and complex supply chains, which can impede the implementation of sustainability criteria such as no-deforestation (Lyons-White and Knight 2018). This complexity is further increased by non-economic factors including sustainability, traceability, disease monitoring and pest management. More recently, however, a more constructive dialogue has emerged as several NGOs and community groups have joined with bodies such as RSPO and some major industry players in exploring initiatives such as certification schemes that seek to guarantee that palm oils come from sustainable and environmentally friendly sources (RSPO 2009).
Palm oil supply chain and traceability
Due to increasing awareness of the wider impacts of oil palm crops, sourcing of palm oil from verified, certified sustainable/responsible sources is of growing interest. Supply chain traceability ensures that information about products can flow easily and enables consumers to have maximum information about product origins. Certification schemes have mostly been established to improve sustainability within the industry, but for these to operate openly and transparently, supply chain traceability is an essential requirement. An overview of a typical palm oil supply chain is displayed in Fig. 4. The most widely used sustainability certification scheme, which aims to improve traceability, is RSPO (RSPO Supply Chains 2017). RSPO recognises four supply chain models, listed below; a graphical overview of each model is also displayed in Fig. 5:
1. Identity Preserved (IP): sustainable palm oil is derived from a single source and kept separate from all other sources throughout the entire supply chain
2. Segregated (SG): sustainable palm oil is derived from multiple sources and mixed; it is then kept separate from conventional palm oil throughout the supply chain
3. Mass Balance (MB): sustainable palm oil is mixed with palm oil from non-certified sources in a controlled and regulated manner
4. RSPO credits (Book and Claim): the supply chain is not monitored for the presence of sustainable palm oil, but manufacturers and retailers can buy credits from RSPO-certified growers, crushers and independent smallholders
Fig. 4 A conventional palm oil supply chain with no certified traceability. The palm oil is produced, transported, refined, incorporated into products and then used by the customer
Fig. 5 The four different RSPO supply chain models including Identity Preserved, Mass Balance, Segregated and Book and Claim (source: www.rspo.org). The premise of how each supply chain works is described in-text. All palm oils produced under RSPO certification are able to carry RSPO branding, though in the case of Mass Balance and Book and Claim there is no guarantee that 100% sustainable palm oils are being used
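For readers who prefer to see the certification options side by side, the four models can also be summarised as a small lookup table. The Python sketch below simply restates the descriptions given in the list above; it is an editorial paraphrase, not an official RSPO specification.

```python
# The four RSPO supply chain models from the list above, encoded as a
# small lookup table (editorial paraphrase, not an official RSPO schema).
RSPO_SUPPLY_CHAIN_MODELS = {
    "IP": ("Identity Preserved",
           "single certified source, kept separate throughout the supply chain"),
    "SG": ("Segregated",
           "multiple certified sources mixed, kept separate from conventional oil"),
    "MB": ("Mass Balance",
           "certified oil mixed with non-certified oil under controlled accounting"),
    "BC": ("Book and Claim / RSPO credits",
           "supply chain not monitored; credits bought from certified producers"),
}

def describe(code):
    name, rule = RSPO_SUPPLY_CHAIN_MODELS[code]
    return f"{code} ({name}): {rule}"

if __name__ == "__main__":
    for code in RSPO_SUPPLY_CHAIN_MODELS:
        print(describe(code))
```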
Labelling, health and nutrition
Labelling has been shown to influence consumer purchasing habits and to have positive impacts on food production. RSPO believes that using its certification trademark on products will be central to raising awareness and driving demand. Palm oil is used for cooking and is also added to many ready-to-eat foods. Its taste is considered savoury and earthy, with some people describing its flavour as being similar to carrot or pumpkin. It has been a staple in West African and tropical cuisines for millennia (Corley and Tinker 2015). In recent years, the public debate on the health and sustainability of palm oil and its use by food industries has strongly influenced consumer choices. There has been a perception that palm oil, with its relatively high saturated fat content, has adverse nutritional qualities, despite its long history as an important indigenous foodstuff in the tropics. This perception has been strongly challenged by recent meta-analyses and prospective observational studies, mainly conducted in North America and Europe, that failed to demonstrate a correlation between total saturated fat intake and an elevated risk of cardiovascular disease (Chowdhury et al. 2014).
Production of sustainable palm oil is recommended so that consumers only buy from companies using palm oil certified under RSPO, or similar certification schemes that have transparent commitments to improved ecosystem services and human wellbeing (Ayompe et al. 2021). Certification schemes improve consumer confidence and provide a high level of guarantee that areas of high conservation value are preserved, that local communities are supported, and that palm oil plantation managers are implementing best practices, including for sustainability and the fair use of labour (Carlson et al. 2018; Schoneveld et al. 2019; Furumo et al. 2020; Santika et al. 2021). Whilst some groups have criticized certification schemes for not moving far or fast enough, researchers and NGOs such as WWF are working with schemes like RSPO to facilitate greater progress and to include more progressive criteria for best practice in order to be certified. An example of such developments was the announcement in mid-2021 of a multi-stakeholder initiative called Project Lampung (Bootman 2021). This was launched with the aim of linking smallholder farmers in the Lampung province of Indonesia with the NGO Solidaridad plus multinationals (including BASF, Cargill and Estée Lauder) in order to enable their palm oil to reach global markets via RSPO certification (RSPO 2021).
Future prospects
As with many other sectors of commercial agriculture, the global oil palm industry is currently facing significant future challenges as it comes under increased scrutiny in an increasingly interconnected world. Many of these issues, such as the future of palm-based biodiesel, the stagnation in crop yield and related labour problems, and concerns about sustainability and environmental impact are relatively longstanding, but they have been brought into sharper focus as a consequence of the COVID-19 pandemic that started in 2020 and is likely to have long-term effects on the industry as will now be discussed.
An uncertain future for palm-based biofuels
Over the past decade a growing proportion of palm oil has been used as a biofuel, mostly in the transport sector as biodiesel derived from methyl esters of the oil. Most palm biodiesel is consumed locally in Malaysia and Indonesia. This is due to government-supported mandates that enforce the mixing of palm biodiesel with petroleum-derived diesel. However, the use of palm biodiesel as a carbon–neutral fuel in the wider global transport sector has proved to be controversial, especially in the EU (Muzii 2019). Until very recently, a substantial and growing amount of palm biodiesel, totalling 4.9 Mt in 2018, was used in the EU. As shown in Fig. 6, for over a decade the EU has steadily increased its imports of palm oil for fuel use while the amount used for food, feed and oleochemicals has declined from a high of almost 4 Mt to about 2.7 Mt (Chandran 2019). These data show that in 2018 the EU imported a total of 7.6 Mt palm oil but only 2.7 Mt (36%) of this was for food and personal care use, while the remaining 4.9 Mt (64%) was for use as transport biodiesel or fuel oil (e.g. for electricity generation).
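The 2018 split quoted above follows directly from the tonnages, as the short calculation below confirms.

```python
# Reproduce the 2018 EU palm oil import split quoted above (Chandran 2019).
total_mt = 7.6            # total EU palm oil imports, million tonnes
food_feed_oleo_mt = 2.7   # food, feed and oleochemical uses
biofuel_mt = total_mt - food_feed_oleo_mt   # remainder: biodiesel and fuel oil

print(f"Food, feed and oleochemicals: {food_feed_oleo_mt:.1f} Mt "
      f"({food_feed_oleo_mt / total_mt:.0%})")   # ~36%
print(f"Biodiesel and fuel oil:       {biofuel_mt:.1f} Mt "
      f"({biofuel_mt / total_mt:.0%})")          # ~64%
```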
As described above, concerns about the environmental impact of oil palm cultivation and the use of food crops for biofuel, coupled with recent advances in electric vehicle (EV) technologies, mean that the EU is now moving away decisively from both crop-based biofuels and fossil fuels, with many countries seeking to replace all carbon-based fuels by 2050. In the medium term, as fossil oil use continues to decline and its price remains low, there are few prospects that palm biodiesel will compete effectively on price in international markets. This is likely to reduce the global market for palm biodiesel, although the additional palm oil that this would release should still be in high demand for edible uses. For example, Corley and Tinker (2015) estimated that, by 2050, a further 6 Mha of land could be required to meet total oil palm production requirements, which is a formidable challenge in view of the scarcity of environmentally suitable land. However, if most of the palm oil that is currently diverted to biodiesel is switched to food use, about 3–4 Mha of this additional land would not be required.
Production issues
On the production or supply side, the oil palm sector faces several significant challenges that include new scientific advances, changing patterns of global trade and consumer sentiment, and the related issues of labour and mechanization. The efficiency and effectiveness of plantation management vary greatly across the sector, both among large commercial enterprises and individual smallholders. One of the most remarkable features of the oil palm crop is the stagnation in yields at values around or under 4 t/ha over the past two decades (Chandran 2019). As shown in Fig. 7, this is in marked contrast to other major crops, including oilseeds, which have shown consistent yield increases in response to factors such as biological improvements, improved management and more efficient transport and supply chain infrastructure. In some cases, modelling analysis can provide new insights into plantation management that suggest possible improvements. A recent example is the application of model optimization and heuristic techniques that indicated significant potential for yield improvements by reducing the harvest cycle length from 19.6 to 8.3 days in a plantation in Colombia (Escallón-Barrios et al. 2020). Innovative new ideas for ‘smart’ oil palm mills have also been advanced (Isaac 2019), as has the use of digital technologies, such as blockchain, to enhance the performance and transparency of supply chains (Keong 2019).
The oil palm cropping system is unusual in its continued reliance on large amounts of relatively unskilled manual labour that must operate in a humid and hot tropical climate on a year-round basis (Crowley 2020). During recent decades many plantations have increasingly relied on temporary migrant labour, but low wages and increasing incomes from alternative forms of employment have created staff shortages, which were greatly exacerbated by the COVID-19 pandemic (Crowley 2020; Raghu 2021). These problems have been compounded by allegations of poor labour practices in some plantations that led to the blacklisting of some of the largest commercial companies and import bans by the US Customs and Border Protection in 2020–21 (Jamal 2021).
In the long term, the most realistic solution to the current labour problems that plague the sector is to introduce more mechanization and shorter plants, as has been done for several other staple monocot and tree crops (Murphy 2011). One way of facilitating mechanization and increasing yield is to use modern molecular breeding approaches to modify crop architecture, for example to reduce trunk height as has been done with apples and major cereals such as wheat and rice (Murphy 2011; Nagai et al. 2020). Interestingly, a very recent study has identified three major QTLs associated with oil palm height on chromosome 11, which could facilitate the breeding of shorter and more compact palms for enhanced yield and ease of harvesting (Teh et al. 2020). Replanting of ageing and/or poorly performing palms is a vitally important strategy for improving the yield, and hence the overall sustainability and environmental footprint of oil palm crops. This applies to both large commercial growers and smallholders, many of whom use inferior seeds bought from middlemen with no record of their provenance. While there have been government initiatives in Malaysia and Indonesia, these efforts need to be redoubled and made more effective (Shehu et al. 2020; Yahya et al. 2020; Oosterveer 2020).
Sustainability and environmental challenges
The use of palm oil as a food ingredient in the large EU market has been in steady decline over the past decade (Fig. 6). There is little doubt that part of this decline has been due to adverse consumer sentiment about the oil palm industry in general, and there are now discussions in the EU to require verifiable ‘point of origin’ declarations for all food-grade palm oil (Southey 2020). This could mean that any oil that cannot be reliably identified as coming from a sustainably certified source, such as RSPO, might not be imported into the EU. Clearly the industry needs to address these certification and authenticity issues in its supply chains to ensure that it becomes fully compliant with the requirements of its second largest customer, namely the EU.
Conclusions
The global oil palm industry is a major component of contemporary agriculture, supplying food to billions of people, plus a host of non-food products that include strategically vital cleaning products used in critical health care settings. However, there are well founded concerns about the expansion of oil palm plantations into sensitive habitats, such as highly biodiverse tropical forests and peatlands (Meijaard and Sheil 2019; Meijaard et al. 2018). There are no viable alternatives to oil palm in terms of its yield and delivery of a range of specific oils for human use (Parsons et al. 2020). It is therefore important to implement transparent and effective certification schemes right across the industry to guarantee that oil palm products can be labelled as being derived from environmentally sustainable and socially responsible sources. It is also important to recall that deforestation and habitat loss are also associated with the second most important global oil crop, soybean. Policymakers may therefore need to consider ways to reduce the demand for oils more specifically and for unhealthy ultra-processed foods more broadly. The industry also needs to redouble its efforts to engage with global consumers in a constructive dialogue aimed at addressing its image problem and explaining the many benefits of its products (Reardon et al. 2019; Borrello et al. 2020). Oil palm crops face many other challenges, including emerging threats from climate change and the likelihood of new pests and diseases, that require more effective international collaboration. The influential players in the industry need to interact with the key organizations and countries now fully committed to reducing climate change. Nevertheless, new breeding technologies are providing the promise of improvements in some areas, such as much higher yielding varieties, improved oil profiles, enhanced disease resistance and modified crop architecture to enable mechanization of fruit harvesting.
| Interestingly, there is also evidence that smallholdings can have lower environmental impacts (Lee et al. 2014) and higher biodiversity levels than commercial plantations (Razak et al. 2020).
In contrast, commercial plantations tend to be part of large ventures that are often owned by multinational companies that can extend over tens of thousands of hectares, with the largest totalling about one million ha. In terms of global trade, palm oils from commercial plantations are by far the most important contributors. In some cases, the larger plantation companies also own or control many key downstream elements in palm oil supply chains. These include mills, refineries, shipping operations and the distribution networks to processors and retailers in export destinations.
In summary, oil palm cultivation is still highly concentrated in SE Asia, but the focus of future expansion is likely to be elsewhere in the humid tropics, especially in West Africa and northern regions of South America. Therefore, the oil palm industry is a hybrid of large scale, globally focussed, commercial farming and small scale production of a cash crop, often for local consumption. As discussed below, the industry must manage the effects of environmental factors, such as climate change and increased disease incidence on cropping systems, as well as changing consumer sentiments in export destinations.
The environmental context
Oil palm is widely considered as a problematic crop. This has been mainly due to the environmental and ecological impacts of some of the land conversions to oil palm plantations over the past two decades, especially in Indonesia. In many cases these have displaced pristine tropical habitats and affected iconic wildlife species, such as orangutan (May-Tobin et al. 2012; Gaveau et al. 2014). The EU is the second largest global importer of palm-based oils and this consumer-led demand has been one of the drivers of the expansion of recent oil palm cultivation. | yes |
Agribusiness | Are palm oils bad for the environment? | yes_statement | "palm" "oils" have a negative impact on the "environment".. the "environment" is harmed by the use of "palm" "oils". | https://rspo.org/why-sustainable-palm-oil/ | Why Sustainable Palm Oil? | Sustainability transforms the impact of palm oil
Why sustainable palm oil?
Sustainable palm oil is good for the planet, for people and for protected species. But the reverse is also true.
When grown unsustainably, palm oil can damage forests and endanger communities and wildlife. So why are there two sides to palm oil? And how can we make sure it only ever has a positive impact?
Palm oil is in half of all supermarket products
A vegetable oil unlike any other
Palm oil is the world’s most versatile vegetable oil. As well as a widely used cooking oil, it’s found in countless supermarket products, from soap and toothpaste to chocolate and pot noodles.
Palm oil is extracted from the flesh and the kernel of the oil palm fruit. Its popularity for cooking and as a combining ingredient springs from its diverse range of properties. Smooth and tasteless, it can also:
Hold its colour well
Stay solid at room temperature (to help baked goods last longer)
Remove oil and dirt
Moisturise hair and skin
Make soaps and detergents bubbly
Perhaps the crop’s standout quality is its productivity. Oil palms have much higher yields than any other vegetable oil plants. They require four to ten times less land than other vegetable oil crops to get the same amount of oil. And that efficient use of land makes palm oil attractive to producers and purchasers around the world.
The challenge with palm oil
Despite its unique qualities as a product and its high demand, palm oil has a mixed reputation. If produced unsustainably, it can have negative impacts – on the environment, on wildlife and on human rights.
In some regions palm oil has been produced irresponsibly. Forests have been cleared or damaged to grow palm oil, which has impacted both wildlife and local communities. And the workers and farmers producing palm oil in some places have suffered poor working conditions and low pay.
There have been calls to boycott palm oil because of these negative impacts. Yet switching from palm oil to alternative vegetable oils wouldn’t reduce these impacts. Sunflower, rapeseed and soy have much lower yields per hectare than oil palm, so, in fact, more land would be needed to produce an equivalent amount of oil. What’s more, millions of farmers and their families work on oil palm plantations and smallholdings. This provides them with the income for basic essentials such as food, clean water, and housing. Plus it allows many workers to send their children to school.
A hectare of oil palms will yield more than twice the oil of a hectare of sunflowers
A sustainable solution
We live in a world in which population growth and climate change threaten global food security as never before. Sustainable palm oil has an important part to play in relieving this pressure.
The emphasis is on “sustainable.” Sustainable palm oil has been farmed, processed, distributed, and sold responsibly with strict rules that protect animals, the environment and people who live and work in oil palm producing countries. It has involved:
Halting deforestation;
Treating communities and workers fairly; and
Protecting wildlife and the environment.
RSPO’s aim is to make sustainable palm oil the norm. We work across each supply chain sector, bringing together its many stakeholders to develop its sustainability and help make palm oil a force for good.
To feed a global population that is set to reach 9.8 billion by 2050 we will need to use less land to produce 60% more food | Sustainability transforms the impact of palm oil
Why sustainable palm oil?
Sustainable palm oil is good for the planet, for people and for protected species. But the reverse is also true.
When grown unsustainably, palm oil can damage forests and endanger communities and wildlife. So why are there two sides to palm oil? And how can we make sure it only ever has a positive impact?
Palm oil is in half of all supermarket products
A vegetable oil unlike any other
Palm oil is the world’s most versatile vegetable oil. As well as a widely used cooking oil, it’s found in countless supermarket products, from soap and toothpaste to chocolate and pot noodles.
Palm oil is extracted from the flesh and the kernel of the oil palm fruit. Its popularity for cooking and as a combining ingredient springs from its diverse range of properties. Smooth and tasteless, it can also:
Hold its colour well
Stay solid at room temperature (to help baked goods last longer)
Remove oil and dirt
Moisturise hair and skin
Make soaps and detergents bubbly
Perhaps the crop’s standout quality is its productivity. Oil palms have much higher yields than any other vegetable oil plants. They require four to ten times less land than other vegetable oil crops to get the same amount of oil. And that efficient use of land makes palm oil attractive to producers and purchasers around the world.
The challenge with palm oil
Despite its unique qualities as a product and its high demand, palm oil has a mixed reputation. If produced unsustainably, it can have negative impacts – on the environment, on wildlife and on human rights.
In some regions palm oil has been produced irresponsibly. Forests have been cleared or damaged to grow palm oil, which has impacted both wildlife and local communities. And the workers and farmers producing palm oil in some places have suffered poor working conditions and low pay.
There have been calls to boycott palm oil because of these negative impacts. Yet switching to alternative vegetable oils to palm oil wouldn’t reduce these impacts. | no |
Agribusiness | Are palm oils bad for the environment? | yes_statement | "palm" "oils" have a negative impact on the "environment".. the "environment" is harmed by the use of "palm" "oils". | https://palmdoneright.com/why-palm-oil-is-better-for-the-environment-than-other-oils/ | Why Palm Oil is Better for The Environment Than Other Oils | Why Palm Oil is Better for The Environment Than Other Oils
For years, palm oil has been viewed in a bleak light due to its well-publicised negative environmental impacts. Although unethical practices have long caused environmental and social damage, the bigger picture gives a more comprehensive view of palm oil and the truth surrounding it. As activists have pushed to tackle these issues, many parties within the palm oil supply chain have pledged to source their palm oil ethically and sustainably and have made significant efforts to drive change across the industry as a whole.
As we look at palm oil farmers involved with Palm Done Right, who grow palm oil organically and sustainably, we can see the positive side of palm oil and how it proves to be better for the environment than other comparable oils. How can palm oil actually benefit the environment, rather than cause harm? Discover how the palm oil industry, with the correct standards and regulations, can do good for the environment.
High yield
One of the most prominent environmental advantages of palm oil is its high yield. What does this mean? Compared with other vegetable oil crops, far more palm oil can be produced per unit area of land. According to studies, up to nine times as much land could be needed to produce the same amount of oil from other crops as from oil palm. Thanks to this incredible yield, the oil palm is an exceptionally land-efficient, and in that sense sustainable, crop.
To further this point, data show that palm oil accounts for 40% of the world’s vegetable oil but is grown on only 6% of the land used for vegetable oil production globally. This disparity is extremely telling, signaling that the environmental issues associated with palm oil stem less from the oil itself than from the practices used by unethical parties. Especially as efforts increase to stop the deforestation commonly linked to palm oil production, we can more clearly see the advantages of using palm oil instead of other natural vegetable oils.
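That 40%-of-the-oil-from-6%-of-the-land statistic implies a striking per-hectare advantage, which a little arithmetic makes explicit. The sketch below uses only the two shares quoted in this paragraph and lumps all other oil crops together as a single aggregate.

```python
# Implied land-use efficiency from the shares quoted above: palm oil
# supplies ~40% of the world's vegetable oil from ~6% of the land used
# for vegetable oil production.
palm_oil_share  = 0.40   # share of global vegetable oil output
palm_land_share = 0.06   # share of global vegetable-oil cropland

palm_output_per_unit_land   = palm_oil_share / palm_land_share               # ~6.7
others_output_per_unit_land = (1 - palm_oil_share) / (1 - palm_land_share)   # ~0.64

advantage = palm_output_per_unit_land / others_output_per_unit_land
print(f"Palm oil output per hectare vs. all other oil crops: {advantage:.1f}x")
# ~10x, the same ballpark as the "up to nine times" land figure mentioned above.
```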
Pesticides
Pesticides are a common issue associated with the production of various vegetable oils and with crop farming in general. Used to control pests such as insects and rodents, pesticides can also harm the environment, contaminating ecosystems and hurting beneficial organisms. Because of this, pesticide usage is a cause for controversy. The oil palm tree – used, of course, to create palm oil – requires far less pesticide than any other vegetable oil crop.
Better still than lessening pesticide usage, pesticides can be eliminated entirely through efforts within the palm oil industry that push for organic farming, creating products with only natural ingredients rather than chemicals. As environmental activists raise awareness across the industry and among the general population, we see synthetic pesticides, herbicides, and fertilizers being eradicated from palm oil farms and replaced with practices that nurture plants and soil.
Biodiversity & multi-cropping
Another function of improved standards in the palm oil industry is the prospect of multi-cropping, which many farms have incorporated into their model. Multi-cropping means that farmers plant various crops in one plantation to foster biodiversity and complement the other crops in the area nutritionally. Interplanting in this way allows the plants to supplement each other through their root systems, and it attracts a greater variety of insects, which is positive for plant growth.
As we have highlighted previously here on Palm Done Right, interplanting with oil palm trees is wonderful for the ecosystem. Oil palm trees are frequently interplanted with passionfruit, cacao, cassava, citrus, maize, and pineapple. These plants benefit each other and nourish the land, rather than taking away from it.
A multi-use product
Palm oil is truly an irreplaceable part of countless industries and a product we simply cannot eliminate – and one we wouldn’t want to eliminate. A report at China Dialogue found that palm oil is in half of the products at the grocery store. 71% of food products and 24% of consumer goods contain palm oil. These staggering numbers show the endless usages for this natural vegetable oil.
The list goes on! Because of its versatility and ubiquity, sustainable palm oil is in a league of its own. And since we’re able to create so many goods using this oil which can be efficiently grown, its value is priceless.
The endless advantages of palm oil
Although palm oil production has been done destructively in the past, new standards can revolutionize the industry. As a commodity we can’t replace, palm oil harvested sustainably is a staple in so many facets of our lives. Prioritizing the production of other oils in its place would only cause further environmental harm and displace issues to different parts of the world, rather than eliminating them.
If you’re interested in palm oil’s benefits and the people who care about harvesting palm oil correctly, find out more about Palm Done Right. Through this organization’s efforts, palm oil production doesn’t need to be seen as a necessary evil; it can be revered as an unparalleled good. | Why Palm Oil is Better for The Environment Than Other Oils
For years, palm oil has been viewed in a bleak light due to the negative environmental impacts on display. Although unethical practices have long since caused damage environmentally and socially, the bigger picture gives a more comprehensive view of palm oil and the truth surrounding it. As activists have pushed to tackle these issues, many parties within the palm oil supply chain have pledged to ethically and sustainably source their palm oil and have made significant efforts to make changes within the industry as a whole.
As we look at palm oil farmers involved with Palm Done Right, who grow palm oil organically and sustainably, we can see the positive side of palm oil and how it proves to be better for the environment than other comparable oils. How can palm oil actually benefit the environment, rather than cause harm? Discover how the palm oil industry, with the correct standards and regulations, can do good for the environment.
High yield
One of the most prominent positive impacts of palm oil on the environment is its high yield. What does this mean? Compared to other vegetable oils, more palm oil can be produced per area of land than its competitors. According to studies, up to nine times the amount of land could be needed to produce the same amount of other types of oils as compared to palm oil. This efficient oil, due to this incredible yield, actually serves its purpose as an extremely sustainable crop.
To further this point, data shows that palm oil accounts for 40% of the world’s vegetable oil but is grown on only 6% of the land used for oil production globally. This disparity is extremely telling, signaling that the environmental issues associated with palm oil actually have little correlation to the oil itself, but rather with the practices used by unethical parties. Especially as efforts increase to stop the deforestation commonly linked to palm oil production, we can more clearly see the advantages of using palm oil instead of other natural vegetable oils.
Pesticides
Pesticides are a common issue associated with the production of various vegetable oils and other crops in general. | no |
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | yes_statement | "paper" "straws" are more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is better for the environment than using "plastic" "straws". | https://stroodles.co.uk/blogs/news/paper-straws-are-they-really-as-eco-friendly | Paper Straws: Are they Really As Eco-Friendly? | Stroodles - The ... | Your Cart
Paper Straws: Are they Really As Eco-Friendly?
Mon, Feb 28, 22
What is it about paper straws that makes them so appealing to people who want to do their part in saving the environment? Why would a person who is trying to conserve resources instead choose a material that requires more processing and transportation than plastic or metal? The answer may be less altruistic than you might think.
We all know plastic straws are terrible for the environment. They can take up to 500 years to decompose in landfills, and oftentimes end up in our oceans where they endanger marine life. So it would stand to reason that paper straws must be a much more environmentally friendly option, right?
The truth is that paper straws are not actually any more environmentally friendly than their plastic counterparts. In fact, they may even be worse for the planet. This is because the process of making paper straws requires a lot of energy and results in the emission of greenhouse gases.
In addition, the production of paper straws leads to the destruction of forests, which means that fewer trees are available to absorb carbon dioxide from the atmosphere.
This article explores whether paper straws are really as eco-friendly as people think they are, or whether they are just a band-aid solution to a larger problem.
The shift from Plastic to Paper straws:
What’s Behind It?
The push to paper straws is the product of two intersecting trends. The first trend, which has been building for years, is a growing awareness of the environmental impact of disposable plastic products, especially straws. The second trend is the rise of the so-called ethical consumer, people who are willing to pay more for products that meet their ethical standards.
Together, these trends have led to a backlash against disposable plastic straws. In some places, this backlash has resulted in bans on plastic straws. In others, it has led to a switch to paper straws.
The Problems with Paper Straws:
A greener option, but not a green one.
Everybody knows that paper straws are better than plastic ones because they're biodegradable. Well, at least that's what we're all being told. The problem is that even though paper straws will decompose, there's still a chance that small animals could swallow these little bits of paper.
In addition, there are environmental concerns about the manufacture of paper straws. The process of making paper straws requires a lot of energy and results in the emission of greenhouse gases. In addition, the production of paper straws leads to the destruction of forests, which means that fewer trees are available to absorb carbon dioxide from the atmosphere.
Plus, like plastic straws, paper straws are still single-use waste items, which is far from ideal for the environment as their demand and overall production will only increase over time. It is not hard to see how all of these factors together make paper straws a less-than-ideal replacement for plastic straws when it comes to environmental friendliness.
Why are Paper straws not very practical?
Ever finished your drink so quickly to avoid the inevitable sog of a paper straw? They turn mushy and bendy very quickly, meaning they tend to fall apart before you’ve even had a chance to enjoy your drink.
Not to mention the bitter, papery aftertaste that can sometimes be difficult to get rid of. The truth is that paper straws tend to dissolve quickly in drinks, which can make them impractical to use. Moisture and contact with liquids make the straws fall apart and disintegrate quickly.
This means that if you're not drinking your beverage right away, you'll have to use a new straw which defeats the whole purpose of using a paper straw in the first place.
So, are paper straws really more environmentally friendly than plastic straws? The answer is not clear-cut. On the one hand, paper straws do have a smaller environmental impact than plastic straws. On the other hand, the production of paper straws requires a lot of energy and leads to the destruction of forests.
Ultimately, whether or not paper straws are more environmentally friendly than plastic straws depends on your point of view. If you are more concerned about the environmental impact of disposable products, then paper straws are a better option than plastic straws. If you are more concerned about the impact of manufacturing processes, then paper straws are not as good as plastic straws.
| Your Cart
Paper Straws: Are they Really As Eco-Friendly?
Mon, Feb 28, 22
What is it about paper straws that makes them so appealing to people who want to do their part in saving the environment? Why would a person who is trying to conserve resources choose instead, a material that requires more processing and transportation than plastic or metal? The answer may be less altruistic than you might think.
We all know plastic straws are terrible for the environment. They can take up to 500 years to decompose in landfills, and oftentimes end up in our oceans where they endanger marine life. So it would stand to reason that paper straws must be a much more environmentally friendly option, right?
The truth is that paper straws are not actually any more environmentally friendly than their plastic counterparts. In fact, they may even be worse for the planet. This is because the process of making paper straws requires a lot of energy and results in the emission of greenhouse gases.
In addition, the production of paper straws leads to the destruction of forests, which means that fewer trees are available to absorb carbon dioxide from the atmosphere.
This article explores if paper straws are really as eco-friendly as people think they are, Or if they are just a band-aid solution to a larger problem?
The shift from Plastic to Paper straws:
What’s Behind It?
The push to paper straws is the product of two intersecting trends. The first trend, which has been building for years, is a growing awareness of the environmental impact of disposable plastic products, especially straws. The second trend is the rise of the so-called ethical consumer, people who are willing to pay more for products that meet their ethical standards.
Together, these trends have led to a backlash against disposable plastic straws. In some places, this backlash has resulted in bans on plastic straws. In others, it has led to a switch to paper straws.
The Problems with Paper Straws:
A greener option, but not a green one.
| no |
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | yes_statement | "paper" "straws" are more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is better for the environment than using "plastic" "straws". | https://thesugarcanestraw.com/switching-to-paper-straws-is-a-bad-idea-heres-why/ | Switching to Paper Straws is a BAD Idea, Here's Why | Sugarcane ... | Switching to Paper Straws is a BAD Idea, Here’s Why
We’ve all seen images of poor sea turtles with plastic straws stuck in their noses. Even if a plastic straw doesn’t end up in the airways of marine animals, exposure to sunlight, salt water, and extreme temperatures can break plastic straws into microplastics. Microplastic pollution can end up in the marine life that we eat, meaning that little plastic straw you threw away can end up back in your body via your favorite sushi restaurant.
Beyond plastic pollution, plastic production is a high-waste process with a huge carbon footprint and environmental impact. Single-use plastics fill landfills, poison our rivers and oceans, and kill all types of animals every day.
Plastic and paper straws found at the beach.
In the past few years, there’s been a lot of news about plastic straw bans. Cities like Seattle, Los Angeles, and more have banned plastic straws, while entire states like California have put regulations on restaurants offering plastic straws.
If you’re an environmentally minded person, a business impacted by the plastic straw ban, or if you’re looking for an eco-friendly alternative, you might be thinking of switching from plastic straws to paper straws. After all, the issue with single-use plastic straws is plastic, right? Not necessarily.
There are environmental, safety, and experience concerns with your local coffee shop’s new paper straw offerings. Let’s break it down.
Paper Straws: a Not-So Environmentally Friendly Alternative
If you’re looking to switch from plastic straws to paper straws, it’s most likely out of concern for the environment. Single-use plastic straws are a sustainability nightmare. However, paper straws harm the environment as well.
Single-Use
One of the key tenets of sustainability is re-use. Paper straws are single-use, disposable items. You cannot use the same paper straw over and over.
If you have any experience with paper straws, you know you can’t even use a paper straw for more than ten minutes in a single drink without it getting soggy and useless.
Not Really Recyclable or Biodegradable
While throwing out paper is not as terrible as throwing out plastic, people are under the assumption that paper straws can be recycled or composted. This is not the case.
While most paper products can be recycled, most recycling facilities in the United States do not allow food-contaminated materials. Because your paper straw is bound to soak up some of your drink, recycling programs will throw it in the trash instead of recycling it.
Not a big deal because paper straws will biodegrade instead, right? Again, not necessarily. Most restaurants do not have compost bins to place food scraps and biodegradable items into, so your paper straw will likely end up in the trash. Landfills are designed to prevent decomposition, so your straw will not be able to compost and instead sit in the trash heap for years.
Fossil Fuels Still Needed
Paper straws come from trees. These trees need to be cut down, shipped to a factory, pulped, and then made into straws. Then, the straws need to be loaded up and shipped. All of these processes use fossil fuels.
Even though paper straws might have a smaller footprint than plastic straws, they still have an enormous carbon footprint, especially for an item that will be thrown away after only 10 minutes of use.
Paper Production is Not Eco-Friendly
The paper industry is not going to win any sustainability awards any time soon. It contributes to air pollution, water pollution, high volumes of paper waste, deforestation of the planet, and high greenhouse gas emissions. Paper production uses more water per product than any other industry. It’s the fifth-largest consumer of energy in the world. Overall, paper straws cause more pollution than straws made from sugarcane.
Some paper straws claim to be made of 100% recycled paper; however, you can never know whether the claims are true or whether it’s a case of greenwashing. Also, paper can only be recycled once or twice before it degrades too much to be useful, so even recycled content is not enough to make paper straws environmentally friendly.
Paper Straws Can Be Unsafe
Paper straws aren’t just bad for the environment, they’re bad for you.
Paper straws have been found to contain toxic “forever chemicals” in their water-resistant coating. The researchers found that these chemicals can leach into your drink at a variety of temperatures, making each sip high risk. The chemicals found have been linked to different cancers, thyroid disease, and restricted immune responses.
Beyond the toxic chemicals, paper straws are also a cause of concern for parents of young children and people with disabilities. We’ve all used a paper straw that breaks down and leaves little bits of soggy paper floating in your drink. While it’s just an annoying and unappealing feature for most of us, it can be a choking hazard for children and people with disabilities.
The Paper Straw Experience
If you’ve switched to paper straws, you already know what an unpleasant experience it can be. The taste, mouthfeel, sogginess, and tendency to break down make sipping your drink a chore.
Taste of Paper Straws
When you hear “wet paper”, your first thought is probably not “delicious.” As soon as the first sip of your drink passes through a paper straw, you can taste the wet paper. Plus, the longer the straw sits in your cup, the more your entire drink is infused with the wet paper taste.
Mouth Feel of Paper Straws
The combination of the almost cardboard-like consistency of a paper straw with its water-resistant, waxy coating makes for a very unpleasant mouthfeel. A plastic straw doesn’t feel so bad, but the odd mouthfeel of a paper straw is a no-go. Plus, the longer it soaks in your drink, the worse it gets.
Sogginess of Paper Straws
Paper straws are made of paper, so they will get soggy when sitting in liquid, no matter how much coating they have. Once a paper straw gets soggy, it’s only minutes before it becomes completely useless. The sides start to stick together, making it impossible to sip through. It starts to bend and fold down into your drink. It’s like trying to drink through a wet noodle.
Sugarcane straws versus paper straws sogginess comparison.
Breaking Down of Paper Straws
After the paper straw starts to perform its gymnastics routine in your cup, bending and folding in every direction, the paper breaks down into smaller pieces. Floating with your ice are little bits of wax-coated paper that you can accidentally ingest. It’s not very appealing, and it can be a choking hazard.
Sugarcane Straws: The Best Alternative to Paper Straws
Now that you know all of the negative aspects of paper straws, you might be wondering what straws to use instead. Metal straws are a popular eco-friendly alternative. However, for use in hot drinks, they can be downright dangerous.
Plant-based straws like our sugarcane straws are a great alternative to plastic and paper straws. They’re 100% biodegradable, use zero plastic or paper, can be used in hot and cold drinks, are not coated in toxic chemicals, do not break down in drinks, and are reusable.
It’s difficult to make eco-friendly choices. Lack of availability, access, and price point make it hard for many people to actively choose products that are kind to the environment. Electing to use sugarcane straws over plastic and paper straws is one small step you can take to work towards a better planetary future. | If you’re an environmentally minded person, a business impacted by the plastic straw ban, or if you’re looking for an eco-friendly alternative, you might be thinking of switching from plastic straws to paper straws. After all, the issue with single-use plastic straws is plastic, right? Not necessarily.
There are environmental, safety, and experience concerns with your local coffee shop’s new paper straw offerings. Let’s break it down.
Paper Straws: a Not-So Environmentally Friendly Alternative
If you’re looking to switch from plastic straws to paper straws, it’s most likely out of concern for the atmosphere. Single-use plastic straws are a sustainability nightmare. However, paper straws harm the environment as well.
Single-Use
One of the key tenets of sustainability is re-use. Paper straws are single-use, disposable items. You cannot use the same paper straw over and over.
If you have any experience with paper straws, you know you can’t even use a paper straw for more than ten minutes in a single drink without it getting soggy and useless.
Not Really Recyclable or Biodegradable
While throwing out paper is not as terrible as throwing out plastic, people are under the assumption that paper straws can be recycled or composted. This is not the case.
While most paper products can be recycled, most recycling facilities in the United States do not allow food-contaminated materials. Because your paper straw is bound to soak up some of your drink, recycling programs will throw it in the trash instead of recycling it.
Not a big deal because paper straws will biodegrade instead, right? Again, not necessarily. Most restaurants do not have compost bins to place food scraps and biodegradable items into, so your paper straw will likely end up in the trash. Landfills are designed to prevent decomposition, so your straw will not be able to compost and instead sit in the trash heap for years.
| no |
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | yes_statement | "paper" "straws" are more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is better for the environment than using "plastic" "straws". | https://reusably.co/straws/paper-vs-plastic/ | Paper Straws vs Plastic Straws: Comparison, Benefits & Drawbacks ... | Paper Straws vs Plastic Straws: Comparison, Benefits & Drawbacks
Paper Straws vs Plastic Straws: Which is Better for the Environment?
These days, it feels like plastic straws have become symbolic of environmental damage. But paper straws are starting to become more and more popular. We’re being asked to consider the impact of our decisions, from what we buy to how we live, on the planet. So, which one is better for the environment: paper or plastic straws? Let’s take a look!
TLDR;
Paper straws are generally more eco-friendly and can biodegrade in a relatively short amount of time. However, they tend to become soggy quickly and can be more expensive than plastic straws.
Comparing Paper Straws and Plastic Straws
Paper straws and plastic straws have been pitted against one another in recent years due to their impact on the environment. Paper is seen as a more sustainable option while plastic is seen as a less eco-friendly choice.
On the surface, paper straws seem like the clear winner of this debate due to the fact that they’re biodegradable and do not take hundreds of years to break down as some plastic products do. Additionally, paper production uses less energy and produces fewer emissions than plastic production. Paper straws are also recyclable when done correctly, making them an even more attractive option for people who care about their environmental footprint.
However, not all hope is lost for those wishing to advocate for plastic straws. Plastic has made strides towards becoming a greener option with bioplastics (which are made from renewable sources), compostable plastics, and polyester plastics that are more durable than traditional forms of plastic and have broadened the range of materials that can be recycled into new products.
Ultimately, it depends on consumer preferences and whether or not organizations are able to utilize more sustainable options such as bioplastics or compostable plastics. Although paper might be viewed as the most obvious choice for its natural properties, there is still room for plastic in the conversation if it’s used wisely. As we move forward with future discussions about sustainable straws, it’s important to consider how each material differentiates itself from the other in terms of strength, longevity, and disposal methods. With a closer look at these material differences between paper and plastic, we will be able to make an informed decision about which material is best for the environment.
Material Differences
When looking at the material differences between paper straws and plastic straws, it becomes clear that one is a much better option for the environment than the other. Plastic straws are primarily composed of petroleum, an unsustainable resource that can be toxic to the environment once produced. Paper, on the other hand, is made from less damaging, renewable sources. Many sustainable paper products are also biodegradable, which significantly lowers the risk of harming wildlife and natural habitats.
The debate between paper straws and plastic straws still remains; however, when considering environmental impacts, it would seem that paper straws offer a much more sustainable option. Paper products are not only better for the environment, but they often last longer and generate lower CO2 emissions in production compared with their plastic counterparts. With each passing year, more research has proven just how detrimental single-use plastics are to our planet. As such, it would appear that choosing paper over plastic is the right decision for 2020 and beyond.
Looking forward, it’s clear that switching to more eco-friendly alternatives can make a significant impact on our environment. While comparing paper straws and plastic straws may show us which materials are safest, understanding their benefits can further prove why making changes now will be beneficial in future years.
On average, a paper straw takes three hours to break down in the environment whereas a plastic straw can take up to 200 years.
A study published in 2017 found that paper straws use only 25% of the energy required for the production of plastic straws.
According to National Geographic, by 2050 plastics will account for 15% of global carbon emissions.
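To put the figures quoted above side by side, here is a small back-of-the-envelope sketch in Python. It only restates the statistics already cited (three hours versus up to 200 years, and the 25% energy figure) and is illustrative rather than a precise model.

```python
# Illustrative comparison of the breakdown-time and energy figures quoted above.
paper_breakdown_hours = 3                 # paper straw: ~3 hours to break down
plastic_breakdown_hours = 200 * 365 * 24  # plastic straw: up to ~200 years, in hours

ratio = plastic_breakdown_hours / paper_breakdown_hours
print(f"By these figures, a plastic straw can take ~{ratio:,.0f}x longer to break down.")

paper_energy_share = 0.25                 # paper straw production: ~25% of plastic's energy
print(f"Paper straw production uses ~{paper_energy_share:.0%} of the energy of plastic,"
      f" i.e. roughly a {1 / paper_energy_share:.0f}x reduction.")
```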
Essential Points to Remember
There is a debate over paper straws versus plastic straws, but when considering environmental impacts, paper straws offer a much more sustainable option. Paper straws are made from less damaging, renewable sources, are biodegradable, last longer, and require lower carbon dioxide emissions to produce. Making eco-friendly changes now can make a significant impact on our environment for many years to come.
Benefits of Paper Straws
Paper straws are gaining immense popularity as an eco-friendly alternative to their plastic counterparts, but they offer several additional benefits that elevate them significantly. Paper straws are biodegradable, which means they can be broken down into natural elements without harming the environment. Additionally, since paper is a natural material, it does not contain additives like plastics, making it safe for use around food and beverages. Paper straws also have a more customer-centric benefit than plastic ones; because of their smaller size and added texture, paper straws are more comfortable to use and apply less pressure to the user’s lips during consumption.
In addition, due to their durable design, paper straws last as long as plastic straws in liquids but are much better for the environment than their plastic counterparts at the end of their life cycle. In terms of cost efficiency, paper straws may have higher upfront costs compared to plastic ones, but the added sustainability makes up for any potential short-term expense. Furthermore, since paper is a renewable source used frequently throughout the food industry, many businesses have begun using paper straws in place of plastic in order to reduce their environmental footprint while increasing their marketing efforts by putting forth an effort towards sustainability.
Although there are arguments on both sides as to whether paper straws are ultimately better for the environment than plastic ones, the evidence supports that they do offer numerous advantages over plastic, such as biodegradability and comfort. To round off this segment on the benefits of paper straws, we now turn to how other materials can be even more sustainable alternatives.
Eco-Friendly Alternatives
When analyzing Paper Straws vs Plastic Straws from an environmental perspective, it is important to consider the availability of eco-friendly alternatives. In recent years, there has been a growing market for more sustainable options to meet consumer demands for more environmentally friendly products. Biodegradable bamboo, glass, and metal straws are just a few of the many available products on the market. All of these alternatives put a different spin on reducing single use plastic straw waste by replacing them with reusable options.
When considering these alternatives, it is important to take into account the health risks they may pose in comparison to the primary candidates discussed – paper and plastic straws. For example, depending on one’s health history, metal straws may not be a viable option, as some people have metal allergies that can be triggered by contact with metal. Additionally, metal straws pose a burn risk with hot beverages, since metal conducts heat quickly. By contrast, both paper and plastic straws provide insulation against heat, making them a safer choice in comparison to metal alternatives.
Before settling on one kind of product or another, it is important to weigh all the pros and cons between each type of material to ensure one makes the most informed decision based on their individual needs and use case.
Despite this consideration of alternatives and the need for research around what constitutes a sensible standard for sustainable consumption, it remains clear that plastic straws cannot be completely replaced without addressing single-use culture at its root. This reiterates the importance of transitioning from plastic to paper: not only does it prevent more plastic from entering our oceans and landfills, but it also offers meaningful benefits over traditional plastics when studied critically. With that said, the potential risks posed by plastic deserve further discussion and will be addressed in the following section.
Disadvantages of Plastic Straws
While plastic straws are considered more convenient by many, their environmental drawbacks should definitely be taken into consideration. They are created from petroleum and other toxic chemicals which have a negative effect on the surrounding environment. According to a study by the University of Plymouth, nearly 8 million metric tons of plastic waste are thought to enter the world’s oceans every year. Without proper disposal methods, these straws can take hundreds of years to biodegrade and as a result contribute heavily to plastic pollution in the ocean. Moreover, when encountered by animals, plastic straws can easily break into small pieces like microbeads and consequently cause harmful effects.
These disadvantages emphasise the importance of using eco-friendly alternatives such as paper straws over plastic ones. Moving forward, we will now explore how this type of disposable garbage can impact marine life.
Harmful Effects on Marine Life
The harmful effects of plastic straws on marine life have been widely documented and not enough can be said about the perils of ocean pollution as a result of plastic waste. When exposed to sunlight, plastic can fragment into microplastics, which are ingested by aquatic organisms like fish and plankton, eventually making their way up the food chain and potentially into our food supply. Plastics also cause entanglement and ingestion risks for wildlife, take hundreds of years to decompose, and release toxins into the environment.
Currently, paper straws present a much more sustainable alternative to combat this issue. They are biodegradable and break down quickly in water, meaning they pose no threat to marine animals. Most widely available paper straws are also made from recycled materials or natural fibers such as bamboo – adding further sustainability points. Furthermore, paper straws have a minimal carbon footprint compared to other common alternatives like metal or glass straws.
Despite the current trend of switching to paper straws, however, environmental organizations still advise people to “refuse single-use items whenever possible” in an effort to reduce pollution. It’s important we remember that dedicated efforts need to be made in order to protect our oceans from plastic pollution in the long run – otherwise we risk facing catastrophic consequences far beyond those caused by plastic straws alone.
Although it’s clear that paper straws offer a significantly better solution than plastic ones when it comes to protecting aquatic life from pollution, cost considerations should still be taken into account before making any final decisions regarding which one is best suited for specific applications.
Cost Considerations of Paper vs Plastic Straws
The cost consideration of paper vs plastic straws is an important factor to consider when debating which type of straw is more beneficial for the environment. Plastic straws are a far cheaper option out of the two, as they cost around 2 cents per straw, while a paper straw costs between 8 and 10 cents a straw. Additionally, the cost to transport paper straws is significantly higher than that of plastic due to their added weight and fragility; moreover, they can become soggy over time, presenting additional costs relating to replacing them.
Nonetheless, it should be noted that one of the main attractions associated with paper straws is that they are compostable and biodegradable. It has been found that after decomposing into organic matter in soil beds or compost piles, paper straws can act as a fertilizer supply for vegetation growing in that location. This could potentially lead to savings with respect to fertilizers and other chemicals used by gardeners, contributing to overall environmental sustainability benefits associated with the aforementioned products.
It is important to recognize that while plastic straws are much more affordable upfront, their high degree of durability also encourages wastage in comparison to paper straws; as such, paper straws could represent far better value for money over their longer-term usage lifespan. Furthermore, local authorities may offer tax discounts or incentives for businesses using eco-friendly products like paper straws; thus minimizing the overall cost considerations associated with these types of items.
In summary, when examining the costs associated with paper and plastic straws, there are pros and cons to each material from an economic standpoint. While upfront costs are lower for plastic straws due to their lower unit price and ease of transportation and storage, savings on replacement costs and potential governmental incentives make paper a much more viable long-term option if sustainability is prioritized over price.
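To make the unit-cost gap described in this section concrete, here is a rough sketch using the per-straw prices quoted above (about 2 cents for plastic, 8-10 cents for paper). The annual volume is a made-up example for a hypothetical venue, not a figure from this article.

```python
# Illustrative annual cost comparison using the per-unit prices quoted above.
PLASTIC_COST_PER_STRAW = 0.02          # ~2 cents per plastic straw
PAPER_COST_PER_STRAW = (0.08, 0.10)    # ~8-10 cents per paper straw

straws_per_year = 50_000               # hypothetical venue volume (assumption)

plastic_total = straws_per_year * PLASTIC_COST_PER_STRAW
paper_low, paper_high = (straws_per_year * c for c in PAPER_COST_PER_STRAW)

print(f"Plastic: ${plastic_total:,.2f} per year")
print(f"Paper:   ${paper_low:,.2f}-${paper_high:,.2f} per year")
print(f"Premium for paper: ${paper_low - plastic_total:,.2f}-${paper_high - plastic_total:,.2f} per year")
```

Whether that premium is offset by incentives, branding value, or reduced replacement costs is the judgment call described above.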
Frequently Asked Questions Explained
What are the environmental impacts of paper and plastic straws?
The environmental impacts of paper and plastic straws vary depending on their material makeup, production process and disposal methods. Paper straws are made from renewable resources such as trees, bamboo, or wheat straws, which provide enough fiber for manufacturers to produce them in an efficient manner with minimal waste. As such, paper straws can be considered more eco-friendly than their plastic counterparts since the manufacturing process is less energy-intensive, without the need for hazardous chemicals used in the making of plastic straws.
In contrast, plastic straws are generally made from petroleum-based materials that take thousands of years to break down. The production process associated with creating plastic straws involves significant energy consumption as well as water pollution problems when released into waterways. Additionally, when disposed of improperly plastic straws can choke natural habitats due to their flimsy and lightweight nature.
Overall, when considering environmental impacts it is evident that paper straws are more sustainable because they are biodegradable, generate fewer emissions during production and offer a longer lifespan than their plastic counterparts. However, both materials still have substantial environmental costs associated with them so it’s important to minimize usage whenever possible.
How does the manufacturing process for paper and plastic straws differ?
The manufacturing process for paper and plastic straws can differ depending on the type and quality of material used to create them. Paper straws are typically made from either card stock or paper board. The basic steps in the manufacturing process include cutting a roll of paper into thin strips; these strips get cut into the desired length and diameter that will eventually form the straws. A coating is then added to prevent them from getting soggy, which also helps retain their structure when placed in a beverage.
On the other hand, plastic straws are generally made out of polypropylene, which is a type of thermoplastic material. It starts off as pellets that are melted down and shaped into hollow tubes using injection molding machines. The resulting tubes are then cut into the shape and lengths of drinking straws. Additionally, different types of PLA (polylactic acid) plastics have been developed recently to make biodegradable or compostable plastic straws, which drastically reduces their environmental impact by eliminating their non-biodegradable nature and makes them much more eco-friendly.
What are the pros and cons of paper straws versus plastic straws?
The main pros and cons of paper straws versus plastic straws have to do with the environment.
Pros of Paper Straws:
• Paper straws are biodegradable which means they break down more quickly in the environment than plastic.
• They can also be recycled if composting is not an option.
• Paper straws have less of an environmental impact than plastic straws, releasing fewer chemicals into the air and water when they decompose.
Cons of Paper Straws:
• Paper straws are not as strong as plastic straws, so they may not hold up well under certain conditions, such as hot drinks or acidic drinks.
• They can become soggy or disintegrate faster than plastic straws and thus may require more frequent replacement.
• Paper straw production requires more energy than that of plastic straws, resulting in a greater carbon footprint for paper-based products.
Ultimately, both paper and plastic have their advantages and disadvantages when it comes to sustainability. By looking at the pros and cons above and considering other factors, such as individual usage patterns and the availability of recycling programs, we can make an informed decision about which material is best suited for our needs while keeping the environmental impact in mind.
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | yes_statement | "paper" "straws" are more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is better for the environment than using "plastic" "straws". | https://www.rubicon.com/blog/paper-straws-better-environment/ | Are Paper Straws Really Better for the Environment? | Rubicon
Are Paper Straws Really Better for the Environment?
All across the globe, many companies and people are switching to paper straws instead of plastic, choosing paper over plastic as an eco-friendly alternative.
Over the summer, Starbucks announced it would eliminate plastic drinking straws in all locations by 2020. Seattle became the largest U.S. city to ban plastic straws in July 2018. McDonald’s will begin testing alternatives to plastic straws in some U.S. restaurants this year, after beginning to phase them out in their U.K. and Ireland locations.
In the United States, it’s estimated that Americans dispose of 500 million straws each day. A recent study shows that 8.5 billion plastic straws are thrown away each year in the U.K. Most of these straws end up in the ocean – one 2017 study estimated that as many as 8.3 billion plastic straws are polluting the world’s beaches.
It’s clear that the use of plastic straws is an issue that needs to be addressed. And with many companies choosing paper over plastic, it’s worth exploring whether paper straws are helping or hurting the environment.
But first, it’s important to understand the larger context of why we use plastic straws and the effects of their mass consumption by people and businesses.
A brief history of plastic straws
In 1888, a man named Marvin Stone was drinking a mint julep on a hot summer day when his straw, made of natural rye grass, began to disintegrate and left a gritty residue in the drink. Stone fashioned a paper straw instead and filed the first patent for a drinking straw, and by 1890, Stone Industrial was producing more straws than cigarette holders.
After World War II, American manufacturers began mass-producing plastic goods for consumers, in need of a new market instead of wartime plastic. By the 1960s, corporations were producing plastic straws at increasingly high rates.
As of 2015, the world was producing 380 million tons of plastic.
Plastic production & ocean pollution
As plastic production has increased, so has its effect on the environment, especially on the world’s oceans. Plastic straws are a significant part of that effect. Plastic straws were designed as a single-use product that we use to consume drinks before throwing them away after just one use. However, plastic straws are not recyclable and contribute significant amounts of waste that ends up in landfills or our oceans.
A lot of single-use plastic collects in “garbage patches” that form as waste and debris get pushed together by circular ocean currents known as gyres. These garbage patches are primarily made up of microplastics, which make the water cloudy and gelatinous.
The largest garbage patch is the Great Pacific Garbage Patch, a.k.a. the Pacific Trash Vortex – it’s twice the size of Texas. However, only about 1 percent of plastic waste collects at the surface in patches like the Pacific Trash Vortex; most of it aggregates at the floor of the ocean, where deep-sea sediments behave as a sink for the microplastics. And microplastics are formed from, you guessed it, single-use plastics such as plastic straws.
A single plastic straw can take up to 200 years to decompose. Plastic straws are not biodegradable – instead, they slowly fragment into smaller and smaller plastics (a.k.a. microplastics), which fish and marine animals mistake for food, ingesting the plastic. It’s estimated that up to 71 percent of seabirds and 52 percent of turtles end up with plastic in their stomachs.
Beyond strangulation of marine life, the larger reason plastic is so dangerous is that it releases toxic chemicals like bisphenol-A (BPA) when it breaks down. Plastic straws are made out of polypropylene – a petroleum byproduct that is essentially the same stuff that fuels our cars. So, when plastic straws begin to decompose, they release harmful toxins like BPA that pollute our oceans.
Because of these negative effects, many industries across the world have started to ban plastic straws in favor of alternatives.
The rise of plastic straw bans
Many countries are starting to restrict single-use plastics like plastic straws and plastic bags. In 2002, Ireland imposed a tax on plastic bags, which was followed by a 94 percent decrease in the use of plastic bags. As of 2017, 28 countries had imposed bans or taxes on plastic bags.
This isn’t to say that reducing plastic straw use doesn’t matter, though. It’s an important first step towards drastically limiting plastic in the ocean, by psychologically motivating people to engage in similar behaviors.
It’s clear that the use of plastic straws is an issue that needs to be addressed. But are paper straws truly better for the environment?
Making the switch from single-use plastic straws to paper straws can certainly have less of an impact on the environment. Here are five benefits of using paper straws over plastic straws.
Paper straws are biodegradable
Even if you toss your plastic straws in the recycling bin, they’ll likely end up in landfills or the ocean, where they can take years to decompose.
On the flip side, paper straws are fully biodegradable and compostable. If they do end up in the ocean, they’ll start to break down within just three days.
Paper straws take less time to decompose
As we learned, plastic straws can take hundreds of years to fully decompose, lasting for up to 200 years in a landfill. It’s much more likely that they’ll wind up in the ocean, where they break into smaller microplastics that end up being ingested by fish and marine life.
Unlike plastic, paper straws will decompose back into the earth within 2-6 weeks.
Switching to paper straws will reduce the use of plastic straws
Our use of plastic straws as a planet is staggering. Each day we use millions of straws – enough to fill 46,400 school buses per year. In the last 25 years, 6,363,213 straws and stirrers were picked up during annual beach cleanup events. Choosing paper over plastic will greatly reduce this footprint.
They’re (relatively) affordable
As more businesses become aware of the negative effects of plastic straws and environmentally conscious of their waste and recycling footprint, demand for paper straws has risen. In fact, paper straw supply companies can’t keep up with the demand. Businesses can now buy paper straws in bulk for as little as 2 cents each.
Paper straws are safer for wildlife
Paper straws are marine life-friendly. According to a study from 5 Gyres, they’ll break down in 6 months, meaning they’re safer for wildlife than plastic straws.
5 eco-friendly alternatives to paper & plastic straws
There are other options out there worth exploring for those who wish to reduce their paper and plastic waste. Here are 5 alternatives to paper and plastic straws.
Stainless steel straws
The first alternative for those looking to reduce waste is stainless steel straws. Just like metal cutlery, stainless steel straws are reusable, easy to clean, have a long lifespan, and are dishwasher safe. Many frequently come with pipe cleaners for easy wash.
Additionally, they won’t affect the taste of your drink, and they look relatively attractive.
Bamboo straws
Straws made out of all-natural bamboo sourced from sustainable forests are a great, lightweight alternative. Once bamboo straws wear out, they compost in a few months. Bonus – they’re perfect for tiki drinks.
Straw straws
Yes, they’re kitschy but they are biodegradable and eco-friendly alternatives to plastic straws. In the 1800s, before paper and plastic, people were literally drinking through straws made of straw. And they’re still around today – check out Harvest Straws.
Glass straws
Glass straws are reusable and durable, plus they’re dishwasher safe. These are available in a variety of lengths, diameters, and colors.
No straws
Of course, the most sustainable solution for the environment is going without straws altogether. If you can, choose not to get a straw with your drink. Or, if you’re a business that serves drinks, don’t offer your customers straws unless they ask for one.
What else can we do to reduce our plastic use?
Although reducing plastic straw use may not get rid of all the plastic in the ocean, there are other things you can do to lower your plastic consumption:
Use reusable shopping bags at the grocery store instead of paper or plastic bags
Use metal or reusable water bottles instead of buying plastic water bottles
Buy foods in bulk in order to reduce packaging use
Pack your lunch or snacks in reusable Tupperware rather than plastic bags
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | yes_statement | "paper" "straws" are more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is better for the environment than using "plastic" "straws". | https://www.scmp.com/yp/discover/lifestyle/features/article/3056589/are-paper-straws-really-better-environment-plastic | Are paper straws really better for the environment than plastic ones ...
It’s safe to say that the global campaign against the use of plastic straws reached fever pitch this year, with many companies and people choosing to ditch plastic straws in favour of paper ones. But are paper straws really the eco-friendly alternative they claim to be? Chung Shan-shan, the director of science in environmental and public health management at Baptist University, doesn’t think so.
The main argument for using paper straws instead of plastic ones is that paper is biodegradable. This means that it can naturally be broken down and won’t end up floating in our oceans or being swallowed by turtles. However, the fix isn’t as simple as swapping plastic for paper.
Chung explained to Young Post that while paper straws, unlike plastic ones, will naturally decompose, or break down, into smaller pieces, there is still a chance that small animals could swallow these little bits of paper.
What’s more, the term “biodegradable” may be misleading, said Chung. The Environmental Protection Department’s “Biodegradability Testing Guideline” tests how well different materials break down by keeping them at a constant temperature of between 56-60 degrees Celsius for 180 days. If the carbon matter of that material decreases by 60 per cent, it can be considered biodegradable. In the real world, this means that so-called biodegradable materials could be around for a lot longer than 180 days and, even then, they don’t disappear completely.
“Even though paper is biodegradable, it won’t break down even after a very long time if it contains a lot of pulp,” said Chung. “You can find newspapers in landfills where, even after 10, 20 years, the words on them may still be readable.”
What’s more, there isn’t much point looking at how well materials can be broken down in nature when it comes to a city like Hong Kong. The city’s litter all ends up in landfills and not in green spaces or the sea. This means that in reality, paper straws in Hong Kong already share the same fate as plastic ones, she said.
“I really want to emphasise that their ability to decompose in landfills is neither an advantage nor a disadvantage to the environment – because it’s not that much of a big deal,” said Chung. She added that if waste did start to decompose in landfills, it would not just release harmful methane gas into the atmosphere but also make the piles of rubbish on the landfill unstable, causing large or heavy items on top of them to fall or collapse inward.
When asked about the recent switch many have made from plastic to paper straws, Chung said that people want to reduce their plastic usage without sacrificing convenience. However, she said, paper straws are still single-use waste items.
“Unless you are getting rid of single-use items in natural places filled with wildlife, swapping plastic straws for paper straws will make no difference in a city like Hong Kong,” Chung added.
The only guaranteed way to make a difference to the environment, said Chung, is for people to stop using single-use straws altogether.
“There are already so many existing tools that can perform the duty of straws. Why can’t we just drink from our cups?” Chung even suggested using chopsticks or spoons to eat the tapioca balls in bubble tea.
Ultimately, said Chung, we need to change our lifestyles so that we use items that can be used again and again.
“Even if we create products using the most environmentally-friendly materials, as long as they are single-use products, the Earth won’t last.”
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | yes_statement | "paper" "straws" are more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is better for the environment than using "plastic" "straws". | https://cleanwater.org/2018/06/25/paper-or-plastic-why-answer-should-be-neither | Paper or Plastic? Why the Answer Should be “Neither” | Clean Water ... | Paper or Plastic? Why the Answer Should be “Neither”
As consumers, communities and governments push for an end to single-use plastic disposables such as straws and bags, many businesses are switching to paper products as an alternative. Although paper is considered by many the “better” option, it too has harmful environmental impacts.
First, paper bags and straws are made from trees. Trees act as a carbon sink, temporarily storing carbon from the atmosphere which reduces atmospheric carbon dioxide levels, thereby lessening climate change. Plastic bags on the other hand are made of petroleum byproducts, meaning they are made from materials that have already been extracted and processed for other purposes. In contrast, paper bags must be made from fresh raw materials which translates to more deforestation and habitat damage.
Second, the production of paper bags is much more resource intensive in terms of energy and water. About 10 percent more energy is used to produce a paper bag versus a plastic one, and about 4 times as much water. Although recycled paper can be used it takes even more energy and water to go through the recycling process than virgin material, and the finished product is less durable.
Third, paper bags have more mass and are much heavier than plastic bags which means they require more fuel to transport. To put it in perspective, seven trucks are required to transport two million paper bags whereas only one truck is needed to transport the same number of plastic ones. Moreover, the increased weight and volume significantly increases the amount of waste going to landfill once they are thrown away. In fact, the disposal of paper bags results in a threefold to sevenfold increase in greenhouse gas emissions in the landfill versus their plastic counterparts. Large quantities of paper bags have even been linked to acid rain and damage to lake ecosystems.
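The comparisons in the two paragraphs above can be summarised numerically. The sketch below simply restates the quoted ratios (about 10 percent more energy, about four times the water, and seven trucks versus one for two million bags); it is illustrative only.

```python
# Restating the paper-vs-plastic bag figures quoted above (illustrative only).
energy_ratio = 1.10          # paper bag production uses ~10% more energy than plastic
water_ratio = 4.0            # ~4x as much water
bags = 2_000_000
trucks_for_paper, trucks_for_plastic = 7, 1

print(f"Energy to produce: paper ~{energy_ratio:.2f}x plastic")
print(f"Water to produce:  paper ~{water_ratio:.1f}x plastic")
print(f"Transport: ~{bags // trucks_for_paper:,} paper bags per truck vs "
      f"{bags // trucks_for_plastic:,} plastic bags per truck")
```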
Environmental issues aside, paper products are often more expensive than plastic. Paper straws can cost roughly 5 to 12 cents per unit, while plastic straws cost a little under 2 cents each. Despite common belief, paper products are a lose-lose for both businesses and the environment.
Therefore, the answer on whether to choose paper or plastic is neither.
The best environmentally friendly solution is to avoid single-use items altogether in favor of reusables. Reusable alternatives, such as fabric bags or reusable stainless steel or glass water bottles, coffee cups, and straws can be used over and over again in order to reduce throwaway waste and are the best option over paper and plastic.
Bags are slightly more complex. According to studies, a single use plastic bag has by far the least ecological footprint to produce when compared to paper, cotton, and non-woven polypropylene. However, the true ecological footprint of these materials depends on how often they are used and how they are disposed. A cotton bag (assuming it is non-organic) must be used 131 times before it becomes the more environmentally friendly option over plastic bags because of its resource-intensive manufacturing and transport. A non-woven polypropylene bag however must only be used 11 times to beat out single use plastic.
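The break-even reasoning above can be made explicit with a short sketch. It uses the reuse counts cited in the paragraph (131 uses for a non-organic cotton bag, 11 for non-woven polypropylene); the number of shopping trips per week is an assumption added purely for illustration.

```python
# Break-even estimate for reusable bags, using the reuse counts cited above.
BREAK_EVEN_USES = {
    "non-organic cotton": 131,        # uses needed to beat a single-use plastic bag
    "non-woven polypropylene": 11,
}

shopping_trips_per_week = 2           # illustrative assumption, not a sourced figure

for bag, uses_needed in BREAK_EVEN_USES.items():
    weeks = uses_needed / shopping_trips_per_week
    print(f"A {bag} bag breaks even after {uses_needed} uses "
          f"(~{weeks:.0f} weeks at {shopping_trips_per_week} trips per week).")
```

In other words, the greener choice depends less on the material itself than on how many times the bag is actually reused.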
Clean Water Action's ReThink Disposable program helps empower businesses and communities to make the best choices for themselves and the planet. If you live in an area that is considering or has already banned single-use plastics or foam, please contact us today to learn how you can implement reusable products into your business in order to go green, save money, and make your customers happy!
ReThink Disposable is funded by a grant through the Northeast Water Pollution Control Commission (NEIWPCC), partnership with the Environmental Protection Agency (EPA) Trash Free Waters initiatives, and the Environmental Endowment of New Jersey.
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | yes_statement | "paper" "straws" are more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is better for the environment than using "plastic" "straws". | https://www.foopak.com/paper-straws-are-they-more-environmentally-friendly-than-plastic-straws/ | Paper Straws : Are They More Environmentally Friendly than Plastic ... | Paper Straws : Are They More Environmentally Friendly than Plastic Straws?
Awareness of preserving nature has increased. One example is the use of paper straws as an alternative to plastic straws, which has become more popular among entrepreneurs and consumers. Using paper straws helps minimize the amount of plastic waste, as they decompose more easily, are environmentally friendly, and are safe if accidentally swallowed by humans. In the manufacturing process and in circulation, paper straws must meet the applicable food packaging safety rules and requirements, including those covering the manufacturing materials.
According to projections from the Ellen MacArthur Foundation, without preventive action there will be 12 billion tons of plastic waste scattered in the ocean, outweighing its fish population. From a sustainability perspective, plastic straws are very difficult to biodegrade in nature compared to paper straws, yet they are still consumed in massive quantities. Therefore, a joint effort is necessary to take steps towards change. The use of paper straws as a substitute for plastic straws is an effective solution. According to data from the UK Environment Agency, paper straws can decompose in 2-6 weeks, while plastic straws can take up to 200 years to biodegrade. Judged by the time needed to decompose, paper straws clearly have a more positive and sustainable impact.
Plastic Straw / Source : Pinterest
In terms of health, many plastic straws still contain harmful chemicals such as bisphenol A (BPA) and phthalates, which have negative impacts on human health. BPA is a chemical that is often used in plastic production and has been shown to have negative effects on the human hormonal system. Meanwhile, phthalates are chemicals used to make plastic more flexible and can trigger health problems such as reproductive disorders and cancer.
The outlook for paper straws as an alternative to plastic straws is very promising, especially with increasing awareness of the negative impact of plastic waste on the environment and human health. Some restaurants, cafes, and other businesses have started to adopt paper straws as a replacement for plastic straws. The participation of all parties is also necessary: by moving together and working in synergy, we can continue to use paper straws as a substitute for plastic straws and help save the environment.
Paper Straw Foopak Bio Natura / Source : Foopak Documentation
Despite their various positive impacts, there are several challenges in using paper straws as a substitute for plastic straws. One challenge is strength and durability: paper straws are still not as strong or as watertight as plastic straws. Generally, paper straws function normally for up to 60 minutes at room temperature and up to 4 hours at a low temperature of around 4°C.
Foopak Bio Natura / Source : Foopak Documentation
Indah Kiat Pulp and Paper, under the Sinarmas Group, has created a leading innovative paperboard for food packaging, Foopak Bio Natura. Foopak Bio Natura is an eco-friendly food packaging material made from renewable resources. It is biodegradable, compostable, and recyclable, making it a sustainable alternative to traditional plastic-coated paperboard. Foopak Bio Natura is water resistant and heat-sealable, suitable for both hot and cold cup applications. It is also microwavable and freezer safe. Overall, Foopak Bio Natura offers excellent strength and product features while reducing the environmental impact of food packaging waste.
Foopak Bio Natura has received plastic-free certification from Flustix. This certification verifies that Foopak Bio Natura is entirely free of conventional plastics, microplastics, and nanoplastics, making it a truly sustainable packaging option. The Flustix plastic-free certification demonstrates its commitment to reducing plastic pollution and supporting a circular economy.
Foopak Bio Natura biodegradable raw materials from Asia Pulp and Paper are not only guaranteed for their top-notch quality; they are also a preferred choice for eco-friendly paper straws and have been proven to work in a wide range of shapes and sizes. Foopak Bio Natura also has better water resistance: in testing, it did not disintegrate despite being left in water for at least ten hours. Furthermore, it can be laser-marked with customized corporate logos without the need to add any plastic, all while maintaining high-standard inspection that meets EU standards. Foopak Bio Natura can also be composted and recycled without the need for any additional substances. With these exceptional advantages, Foopak Bio Natura is an excellent choice for substituting plastic and creating a more positive impact on the environment.
Taking care of the environment can start with small steps, such as replacing plastic straws with eco-friendly paper straws. Let’s start using paper straws: a small step can create a big impact for the future.
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | yes_statement | "paper" "straws" are more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is better for the environment than using "plastic" "straws". | https://www.cnbc.com/2018/07/09/paper-straws-are-better-for-the-environment-but-they-will-cost-you.html | Paper straws cost 'maybe 10 times' more than plastic straws, says ... | "If you buy a paper straw, it’s about two cents and a half," he said. Plastic straws cost about a half-cent.
But Merran, whose company distributes paper straws (using recyclable and organic materials) to coffee shops, Las Vegas casinos and large stadiums like Madison Square Garden, said it's all about perspective.
"You go from something that is very, very, very cheap, to something that is still actually cheap," he said.
However, Starbucks, in lieu of paper straws — which many customers have complained lose their shape too fast — said it will replace plastic straws with a recyclable sippy cup-type lid.
"Any green solution is a solution," Merran said.
The design studio Kikkerland designed these festive paper straws that can be tossed in a home composter after a party. They come in a box of 144 and can be purchased for under $10.
He said other alternatives include re-usable straws, where customers clean their straws at home and bring them back each time they dine out, similar to a to-go mug.
As for how long the paper straws last after being inserted into liquid, Merran said it depends on the beverage and temperature.
"It should hold for about the time for you to [finish the] drink," he said. "It’s going to become a little soggy, but you can still drink from it. It’s like any alternative. It’s not perfect but it does the trick." | "If you buy a paper straw, it’s about two cents and a half," he said. Plastic straws cost about a half-cent.
But Merran, whose company distributes paper straws —using recyclable and organic materials — to coffee shops, Las Vegas casinos and large stadiums like Madison Square Garden, said it's all about perspective.
"You go from something that is very, very, very cheap, to something that is still actually cheap," he said.
However, Starbucks, in lieu of paper straws — which many customers have complained lose their shape too fast — said it will replace plastic straws with a recyclable sippy cup-type lid.
"Any green solution is a solution," Merran said.
The design studio Kikkerland designed these festive paper straws that can be tossed in a home composter after a party. They come in a box of 144 and can be purchased for under $10.
Allen J. Schaben | Los Angeles Times| Getty Images
He said other alternatives include re-usable straws, where customers clean their straws at home and bring them back each time they dine out, similar to a to-go mug.
As for how long the paper straws last after being inserted into liquid, Merran said it depends on the beverage and temperature.
"It should hold for about the time for you to [finish the] drink," he said. "It’s going to become a little soggy, but you can still drink from it. It’s like any alternative. It’s not perfect but it does the trick." | yes |
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | yes_statement | "paper" "straws" are more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is better for the environment than using "plastic" "straws". | https://www.restaurantsupplydrop.com/blogs/barista/ecofriendly-paper-straws | Why you should Invest in Eco-Friendly Paper Straws? | Restaurant ... | Why Invest in Eco-Friendly Paper Straws?
Paper straws are becoming increasingly popular as an environmentally-friendly alternative to plastic straws. They are made from natural materials like paper and bamboo, which are biodegradable and sustainable.
They do not require special treatment or maintenance, so they can be safely disposed of after use. Finally, paper straws are less likely to contaminate the environment with harmful chemicals.
All in all, there are many good reasons to invest in eco-friendly paper straws, especially if you are concerned about the environmental impact of your business.
Choosing Eco-Friendly Straws
Over 1.5 million metric tons of plastic are produced annually on a global scale. This includes almost 9 million metric tons of single-use plastic items, such as straws, disposable plastic cups with lids, and individually wrapped straws in bulk.
When you use a plastic straw, you effectively add to those 9 million metric tons of single-use plastic. Even worse, when a plastic straw is discarded, it creates significant environmental problems. So not only are you contributing to the global problem of plastic trash, but you are doing so at a time when our environment is already trying to adapt to the effects of climate change.
There are several eco-friendly alternatives to throw away plastic straws on the market. Paper and bamboo straws, both biodegradable and compostable, are among the most popular.
Benefits of choosing Paper straws
Biodegradable and Compostable
If you're looking for a way to reduce your environmental impact, you might consider using paper straws instead of plastic or metal ones. Paper straws are biodegradable and compostable, so they're good for the environment in multiple ways.
Paper straws are made from plant-based materials, so they don't create any waste when used and can be recycled. They also have a small ecological footprint, which is good because smaller footprints mean less environmental impact.
One of the benefits is that paper straws are compostable. This means that they will break down into smaller pieces naturally when you dispose of them.
When you use paper straws, you are not only reducing your own environmental impact but also helping to protect the environment of future generations.
Variety of options
If you are looking for a trendy drinking utensil to complement your stylish wholesale plates for catering, paper straws are a great option. In addition to the standard black, white, and pink options, there are now a variety of colors and styles available to suit any customer's taste. From fun and colorful geometric designs to classic straws with brightly colored tips, there is a style for everyone.
So paper straws are a great option if you want to appeal to a younger crowd or add some extra flair to your drinks.
Improves your restaurant's image
If you want to positively impact your restaurant's image, adding eco-friendly paper straws is a great way to start. People generally like businesses that care about the environment, so incorporating environmentally friendly practices into your operations can score some brownie points.
Not only are these straws less harmful to the environment, but they also tend to be more popular with customers. Many people prefer restaurants that use eco-friendly straws over those that do not.
Paper straws are hygienic
If you want to improve your restaurant's hygiene levels, eco-friendly straws are a great way to do it. Not only are they less damaging to the environment, but they also reduce the number of bacteria that spread around.
This makes them a safer option for customers who are always worried about their health. In addition, paper straws are much more hygienic than their plastic counterparts. They are also easy to clean, which is excellent if your restaurant tends to get messy.
Conclusion
Plastic straws are often made from harmful chemicals, like BPA, which can cause health problems if ingested. Paper straws are much safer than plastic ones because they do not easily break or become entangled in food. In addition, using paper straws will help promote sustainability in your restaurant business. By using eco-friendly products like paper straws, you will do your part to help protect the environment and promote a healthy lifestyle for yourself and your customers.
Are you looking for excellent restaurant catering supplies at an affordable price? Look no further than Restaurant Supply Drop. We have a wide selection of quality items, all of which are guaranteed to help your business function efficiently. Our top offerings are paper straws, commercial kitchen utensils, and more. So, whether you are running a small business or a large chain, we have the supplies you need to keep your operations running smoothly.
Paper straws are becoming increasingly popular as an environmentally-friendly alternative to plastic straws. They are made from natural materials like paper and bamboo, which are biodegradable and sustainable.
Furthermore, they do not require special treatment or maintenance, so that they can be safely disposed of after use. Furthermore, finally, paper straws are less likely to contaminate the environment with harmful chemicals.
All in all, there are many good reasons to invest in eco-friendly paper straws, especially if you are concerned about the environmental impact of your business.
Choosing Eco-Friendly Straws
Over 1.5 million metric tons of plastic are produced annually on a global scale. This includes almost 9 million metric tons of single-use plastic items, such as straws, disposable plastic cups with lids, and individually wrapped straws in bulk.
When you use a plastic straw, you add to those 9 million metric tons of single-use plastic. Even worse, when a plastic straw is discarded, it creates significant environmental problems. So not only are you contributing to the global problem of plastic trash, but you are doing so at a time when our environment is already trying to adapt to the effects of climate change.
There are several eco-friendly alternatives to throwaway plastic straws on the market. Paper and bamboo straws, both biodegradable and compostable, are among the most popular.
Benefits of choosing Paper straws
Biodegradable and Compostable
If you're looking for a way to reduce your environmental impact, you might consider using paper straws instead of plastic or metal ones. Paper straws are biodegradable and compostable, so they're good for the environment in multiple ways.
Paper straws are made from plant-based materials, so they don't create any waste when used and can be recycled. They also have a small ecological footprint, which is good because smaller footprints mean less environmental impact.
| yes |
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | yes_statement | "paper" "straws" are more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is better for the environment than using "plastic" "straws". | https://www.tembopaper.com/news/plastic-straws-and-the-environment-what-is-the-impact | Plastic Straws and the Environment: What is the Impact? | Tembo ... | Plastic Straws and the Environment: What is the Impact?
Although plastic goods became widely available to consumers during the 1950s, it’s only during the last twenty years that we’ve seen the real boom in plastic – and as a result, plastic waste.
In the 1960s less than 1% of our refuse was plastic; by 2005 that had increased to 10% according to one ground-breaking study of plastic production.
In recent years, concern has mounted over the increasing quantities of single-use plastic items that are becoming part of our everyday lives. One of these items is the plastic drinking straw, billions of which are given out in cafés and restaurants, or as part of takeaway meals, every year.
In the US, an estimated 500 million single-use plastic straws are used each day, while in Europe the figure stands at 25.3 billion in a year. But what happens once these straws are used and discarded?
This article will take a frank look at how plastic straws affect the environment in four key ways, focusing particularly on how they impact our oceans and marine life. We’ll also consider how more environmentally friendly drinking straw solutions are becoming available, which can help us to reduce plastic waste and clean up our waterways.
1. Plastic straws are not biodegradable
Why are plastic straws bad for the environment? Well, the first problem is that unlike natural materials such as paper, wood, or cotton, the polypropylene used to manufacture most single-use plastic straws is not biodegradable. This means that once plastic straws go to landfill, small organisms such as insects or bacteria can’t break them down by consuming them.
Instead, what happens is that the straws will simply degrade, gradually disintegrating into smaller and smaller particles – known as microplastics – over a period of up to 200 years. As the plastic degrades, it also exudes harmful chemicals such as bisphenol A (BPA), that have been linked to environmental pollution and health problems.
2. Plastic straws are difficult to recycle
Not only are plastic straws not biodegradable, but they are also very difficult to recycle after we’ve finished using them. Of the 8,300 million metric tons of plastic that has ever been produced, a mere 9% has been recycled. Moreover, polypropylene plastic straws are categorised as a type 5 plastic, which is even less commonly recycled.
Because of this, consumers struggle to find recycling facilities for straws, and local councils or authorities refuse to collect them from the kerbside. Further, if plastic straws are accepted for recycling, they are so small and light that they are often sifted out at mechanised recycling plants and sent to landfill anyway.
3. Plastic straws pollute our oceans and waterways
Depositing plastic straws into landfill so they can slowly degrade is by no means an environmentally friendly solution. However, the reality is that used plastic straws frequently have a much worse destination: our oceans. It’s estimated that 8 million tons of plastic ends up in the ocean each year, and 1.15–2.41 million tons of it is carried there down major rivers around the world.
Plastic straws are particularly prone to making their way to our waterways. First, they constitute a significant part of beach litter, with one large-scale beach litter pick identifying straws as the seventh most collected item. Because plastic straws are small and light, they are regularly blown out of rubbish bins, refuse vehicles, and landfill sites by the wind. They can then quickly find their way to watercourses and be washed into the sea.
Finally, along with other small plastic items, straws can be ingested by birds scavenging at landfill sites. As the straws do not biodegrade, they then stay in the bird’s stomach until it dies. The bird itself biodegrades, leaving the plastic straw to be blown or washed into waterways as before.
Once plastic straws reach the ocean, they can accumulate with other plastic waste and form huge floating masses on the ocean surface. The largest of these “plastic islands” has been named the Great Pacific Garbage Patch, located between California and Hawaii, and covers an area of 1.6 million square kilometres.
The debris can prevent sunlight from reaching algae and plankton beneath the water, stopping them from changing the light into vital nutrients. If algae and plankton populations are threatened, this can impact the entire marine food web. In the long term, this could result in less seafood being available for humans too.
Plastic straws might be small, but when we use billions of them per year they make a significant contribution to plastic waste in our seas. In fact, scientists predict that if we continue to allow plastic to enter the ocean at the current rate, by 2050 there will be more plastic (by weight) than fish there.
4. Plastic straws are harmful for ocean wildlife
Of course, such a quantity of plastic waste reaching our oceans cannot fail to have a negative impact on the marine and coastal wildlife that live in and near the water. It’s estimated that around 800 different species are affected by ocean plastic pollution and that at least 100,000 marine mammals die every year as a result of plastic debris.
Plastic straws that wash into the sea pose a particular threat to wildlife, as their small size makes them easier for birds, animals, and larger fish to ingest. Although it isn’t possible to put a number on the impact of plastic straws alone, it’s thought that 90% of seabirds have ingested some kind of plastic from the ocean and by 2050 99% of species could be affected. If a large quantity of plastic is ingested, this can cause a marine bird or mammal to starve to death; feeling the weight in its stomach, it assumes it has eaten and is not motivated to find enough food to keep it alive.
Entanglement in plastic debris is another huge problem for marine creatures. In 2015 a video of a sea turtle having a section of plastic straw removed from its nose by a group of marine biologists went viral. This film shocked millions of viewers and raised awareness about the dangers of plastic pollution, lending weight to campaigns to ban single-use plastic straws altogether.
As discussed above, plastic straws degrade into smaller particles over time and this makes them even easier for fish to swallow. In this way, plastic is actually entering the food chain and may, ultimately, be consumed by humans too. More research is needed to establish how many people have these microplastics in their bodies, and whether this could have deeper health implications.
An eco-friendly solution to the plastic straw problem
Across the world, countries and states are taking action to ban or limit single-use plastics and clean up our environment. In the USA, California, Oregon, and Hawaii have plastic bans in place (at the time of writing), while the European Union has set a deadline of 2021 to ban single-use plastics.
The days of disposable plastic straws are numbered, and in their place more eco-friendly options are appearing. Individuals have the option to purchase reusable straws made of glass or stainless steel. However, businesses (such as food and beverage companies) that still want to give customers the option of a disposable straw with their product are increasingly turning to biodegradable paper straws.
These positive changes are a hopeful sign that the problem of plastic straws and how they impact the environment will soon be a thing of the past. Now to roll up our sleeves and clean up our oceans, support our marine wildlife, and leave our beaches pristine for future generations to enjoy.
| We’ll also consider how more environmentally friendly drinking straw solutions are becoming available, which can help us to reduce plastic waste and clean up our waterways.
1. Plastic straws are not biodegradable
Why are plastic straws bad for the environment? Well, the first problem is that unlike natural materials such as paper, wood, or cotton, the polypropylene used to manufacture most single-use plastic straws is not biodegradable. This means that once plastic straws go to landfill, small organisms such as insects or bacteria can’t break them down by consuming them.
Instead, what happens is that the straws will simply degrade, gradually disintegrating into smaller and smaller particles – known as microplastics – over a period of up to 200 years. As the plastic degrades, it also exudes harmful chemicals such as bisphenol A (BPA), that have been linked to environmental pollution and health problems.
2. Plastic straws are difficult to recycle
Not only are plastic straws not biodegradable, but they are also very difficult to recycle after we’ve finished using them. Of the 8,300 million metric tons of plastic that has ever been produced, a mere 9% has been recycled. Moreover, polypropylene plastic straws are categorised as a type 5 plastic, which is even less commonly recycled.
Because of this, consumers struggle to find recycling facilities for straws, and local councils or authorities refuse to collect them from the kerbside. Further, if plastic straws are accepted for recycling, they are so small and light that they are often sifted out at mechanised recycling plants and sent to landfill anyway.
3. Plastic straws pollute our oceans and waterways
Depositing plastic straws into landfill so they can slowly degrade is by no means an environmentally friendly solution. However, the reality is that used plastic straws frequently have a much worse destination: our oceans. | yes |
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | yes_statement | "paper" "straws" are more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is better for the environment than using "plastic" "straws". | https://greatpaperstraws.com/7-reasons-to-choose-paper-straws-over-plastic/ | Here's Why You Should Choose Paper Drinking Straws Over Plastic | 1. Plastic Straws are Bad for the Ocean
When plastic straws are left behind by forgetful (or malicious) beachgoers, left on boats or cruise ships, or in overstuffed trashcans, where do you think they end up?
The unfortunate reality is that they end up in the ocean. They’re so small and lightweight that the wind carries them away, right into the water.
Even if they end up in gutters or storm drains, they all have the same unfortunate end. The ocean is their resting place.
Once in the ocean, they break down into microplastics. Those microplastics disperse into the ecosystem and get into the bodies of the undersea animals. They’re incredibly harmful and pose a serious threat to the future of aquatic life worldwide.
2. Paper Drinking Straws Are Recyclable
While many plastics are recyclable, plastic straws are often too small and lightweight for the machines to trap them.
They drop through the holes in the machines and get sifted out with the rest of the garbage, making their way to landfills and bodies of water worldwide.
Paper, however, can be crumpled up and be thrown in easily with the rest of the paper and cardboard recycling. Because of the nature of paper, it’s less likely to disappear through any cracks. If it did though, it wouldn’t matter. Paper won’t stay on the earth forever like plastic will.
3. Paper Straws Can Be Stylish
Paper straws come in a variety of colors and styles, making them a trendy drinking utensil. They’re easily customizable with dyes, and the materials and dyes are FDA approved to be safe for use with foods and beverages.
Those cute straws you see on food blogs with the fancy stripes stuck in milkshakes? Those are generally paper straws.
Fun colors aren’t only for plastic.
4. They’re Biodegradable and Compostable
While paper products are all recyclable, it is a possibility that your paper straw will meet the same fate as the plastic one: getting lost amongst the garbage.
If that happens? No worries.
These straws won’t stay in the ground or ocean forever, releasing microplastics and contributing to pollution until the end of time. Rather, paper is biodegradable. It will disappear into nothing. These straws can disintegrate right into the dirt without a worry for the environment.
Now, this doesn’t mean we should just go throwing them onto the ground without paying attention, but it does mean that you can rest easy knowing that you aren’t contributing to the microplastic problem in the oceans.
5. People Still Enjoy Using Straws
So, maybe your solution is just to go strawless entirely. This is great for the environment and will help overall.
That said, people really like their straws. They’re attached to them. People would sometimes rather bring a straw along with them than not use one at all.
Sometimes, though, people forget that some people with disabilities need straws. To ensure accessibility while also being more environmentally friendly, it would be best to offer single-use paper straws to all customers.
This way, no one has to ask, perhaps making themselves uncomfortable if they’re someone that needs a straw for reasons of disability (or even just because they want one) and you’re still helping protect the world from plastics.
6. Made From Renewable Resources
Most single-use plastics aren’t bioplastics. Bioplastics are plastics made from renewable resources, making them not ideal, but not terrible for the environment.
Other single-use plastics are often made from petroleum or natural gas. This might not seem like a problem now, but once these resources are used up (and for plastic straws, no less), they’re gone.
Paper drinking straws, however, don’t have this problem. Paper is made from renewable resources, naturally. As it’s also recyclable, the straws could potentially become future straws.
7. It’s a Good Look For Your Company
As going-green is trending, more and more companies are joining the wave. Starbucks, Disney, and plenty of other major companies are cutting down on their single-use plastic straws. They’re getting a lot of attention for it.
On the other hand, it can be a bad look to continue passing out plastic straws to every customer, regardless of whether or not they ask for one.
Even if your area hasn’t completely banned plastic straws, like some cities have, you can lead the trend and look great doing it.
Keeping up appearances shouldn’t be the only reason, but it will be good for your brand and your customers will remember.
So, Paper or Plastic?
Paper drinking straws are great for the environment and a perfect single-use alternative for plastic straws. Alternatives don’t have to be super expensive, and making the switch now will put you ahead of the curve in the trend towards working and living green.
Your business can and will benefit from ditching single-use plastics. They’re unnecessary for most people and they’re actively harmful to the environment. Rather than contribute to the microplastic problem in the ocean, wouldn’t you rather be part of the solution?
To learn more about our mission, or purchase some of our straws wholesale, check out our site. Consider making the switch for your business today. | This way, no one has to ask, perhaps making themselves uncomfortable if they’re someone that needs a straw for reasons of disability (or even just because they want one) and you’re still helping protect the world from plastics.
6. Made From Renewable Resources
Most single-use plastics aren’t bioplastics. Bioplastics are plastics made from renewable resources, making them not ideal, but not terrible for the environment.
Other single-use plastics are often made from petroleum or natural gas. This might not seem like a problem now, but once these resources are used up (and for plastic straws, no less), they’re gone.
Paper drinking straws, however, don’t have this problem. Paper is made from renewable resources, naturally. As it’s also recyclable, the straws could potentially become future straws.
7. It’s a Good Look For Your Company
As going-green is trending, more and more companies are joining the wave. Starbucks, Disney, and plenty of other major companies are cutting down on their single-use plastic straws. They’re getting a lot of attention for it.
On the other hand, it can be a bad look to continue passing out plastic straws to every customer, regardless of whether or not they ask for one.
Even if your area hasn’t completely banned plastic straws, like some cities have, you can lead the trend and look great doing it.
Keeping up appearances shouldn’t be the only reason, but it will be good for your brand and your customers will remember.
So, Paper or Plastic?
Paper drinking straws are great for the environment and a perfect single-use alternative for plastic straws. Alternatives don’t have to be super expensive, and making the switch now will put you ahead of the curve in the trend towards working and living green.
Your business can and will benefit from ditching single-use plastics. They’re unnecessary for most people and they’re actively harmful to the environment. | yes |
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | yes_statement | "paper" "straws" are more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is better for the environment than using "plastic" "straws". | https://www.imperialdade.com/blog/plastic-straws-vs-paper-straws | Plastic Straws vs Paper Straws: Should Your Business Switch to ... | Imperial Dade Insights
Stay Informed With Helpful Information From Our Experts
Paper straws are becoming more and more popular as the trend toward sustainability grows, and plastic straws become the target of increasing government regulations. If your business is considering a switch from plastic straws to a more environmentally friendly option, paper straws could be the right solution for you.
Why Are Plastic Straws Being Banned?
Plastic straws are being banned across multiple cities because they do not break down in a landfill and are a threat to the environment, wildlife, and human health.
Although recyclable, plastic straws are not typically accepted by curbside recycling programs. Municipalities that do accept plastic straws for recycling may have a hard time separating them due to sorting inefficiencies.
Even when accepted, most plastic straws will not make it to recycling centers because people discard them as trash.
When plastics break down, they break up into smaller pieces over time. Eventually, plastics break into microplastics (tiny pieces of plastic).
Microplastics contaminate our ecosystems. Animals, such as fish, can mistake microplastics as food and eat them. Not only is it harmful for fish and other animals to eat plastics, but when humans eat the fish, we may also be consuming microplastics as a result.
Consumers are becoming increasingly aware of the negative effects of plastic products on the environment and human health, and are demanding more sustainable options.
One of the most common environmentally friendly alternatives to single-use plastic straws is the paper straw.
Paper straws break down into organic materials leaving a smaller footprint on the earth.
Advantages of Paper Straws
Eco-Friendly Benefits of Paper Straws
If you are looking to display your sustainability efforts, using paper straws can make it obvious to your customers that you consider the full product lifecycle and are making an effort to leave a smaller footprint on the earth.
Are paper straws compostable?
Yes, paper straws are compostable at commercial facilities.
Pro Tip: What does compostable mean?
A product is compostable if it can break down into CO2, water, and organic materials within six months of beginning the composting process.
How do I know if my straw is certified compostable?
Look for third-party certifications such as the Cedar Grove Composter ® approved seal.
Cedar Grove is a commercial composter that provides third-party certifications to products. To receive the Cedar Grove Certification, a product is tested against the American Society for Testing and Materials (ASTM) standards and proven to compost at a compost facility and/or at a home composting bin.
If you are not sure if your paper straw is certified compostable, reach out to an EBP Foodservice Specialist.
Are paper straws recyclable?
Paper straws are recyclable. Always check with your commercial hauler to see if they are currently accepting paper straws in your area.
Pro Tip: What does recyclable mean?
Products which can be collected, separated, or recovered from the waste stream are considered recyclable.
Comparable Features
When considering the switch from plastic straws to paper straws, understanding the comparable features of paper straws can help you understand if paper straws are a viable replacement option for your operation.
Length
Paper straws are available in multiple lengths comparable to plastic straws.
Some paper straw lengths include:
5.75”, 7.75”, and 10”
Exact size may vary by manufacturer.
Size
To accommodate various product offerings from smoothies to cocktails, paper straws are offered in various diameters comparable to plastic straws.
Paper straws come in:
Jumbo, Giant, and Colossal
Exact diameter will vary by manufacturer.
Color
Depending on the manufacturer, paper straws can be available in basic, patterned or printed colors.
Colored straws use water-based ink which will not hurt the environment when composted.
Do the colors of paper straws bleed?
Colored paper straws do not bleed.
Wrapped or Unwrapped
Like plastic straws, paper straws are available wrapped or unwrapped.
Wrapped paper straws are a good option if you need to replace a wrapped plastic straw at a self-service station or in another situation in which you will need a straw to stay sanitary.
Disadvantages of Paper Straws
Paper straws will not offer the same price, performance, or user experience as plastic straws.
Cost
Paper straws currently cost more than plastic straws.
Not only is the initial cost of paper straws currently higher, but because paper straws are not as durable or long-lasting as plastic straws, customers often ask for more than one straw during their visit.
Performance / Durability
The durability of a paper straw will depend on the manufacturer, but it is less than plastic straws.
The functionality of paper straws will be reduced the longer they are left to sit in a liquid.
Pro Tip: Similar to plastic straws, paper straws should not be used with hot liquids. Hot liquids will cause a paper straw to lose its shape faster.
User Experience
Although a paper straw is still usable when soft, consumers often react negatively to a soggy straw.
The longer a paper straw is left to sit in a liquid, the more it will start to break down.
Who Should Use Paper Straws?
Paper straws are a great option:
If your business is affected by the rising number of plastic straw bans
If your business is looking to meet consumer demands for more sustainable options
If your business is looking to align with sustainability trends
Final Thoughts
Whether you are switching because of shifts in consumer demand, local regulations, or aligning with sustainability trends, Imperial Dade can help you choose the best straw for your business.
Imperial Dade locations have a large selection of green foodservice products to satisfy your facility’s needs and budget. We offer a wide selection of disposable foodservice products whether you’re located in the United States, Puerto Rico, or the Caribbean. | Imperial Dade Insights
Stay Informed With Helpful Information From Our Experts
Paper straws are becoming more and more popular as the trend toward sustainability grows, and plastic straws become the target of increasing government regulations. If your business is considering a switch from plastic straws to a more environmentally friendly option, paper straws could be the right solution for you.
Why Are Plastic Straws Being Banned?
Plastic straws are being banned across multiple cities because they do not break down in a landfill and are a threat to the environment, wildlife, and human health.
Although recyclable, plastic straws are not typically accepted by curbside recycling programs. Municipalities that do accept plastic straws for recycling may have a hard time separating them due to sorting inefficiencies.
Even when accepted, most plastic straws will not make it to recycling centers because people discard them as trash.
When plastics break down, they break up into smaller pieces over time. Eventually, plastics break into microplastics (tiny pieces of plastic).
Microplastics contaminate our ecosystems. Animals, such as fish, can mistake microplastics as food and eat them. Not only is it harmful for fish and other animals to eat plastics, but when humans eat the fish, we may also be consuming microplastics as a result.
Consumers are becoming increasingly aware of the negative effects of plastic products on the environment and human health, and are demanding more sustainable options.
One of the most common environmentally friendly alternatives to single-use plastic straws is the paper straw.
Paper straws break down into organic materials leaving a smaller footprint on the earth.
Advantages of Paper Straws
Eco-Friendly Benefits of Paper Straws
If you are looking to display your sustainability efforts, using paper straws can make it obvious to your customers that you consider the full product lifecycle and are making an effort to leave a smaller footprint on the earth.
Are paper straws compostable?
| yes |
Sustainable Living | Are paper straws more environmentally friendly than plastic straws? | no_statement | "paper" "straws" are not more "environmentally" "friendly" than "plastic" "straws".. using "paper" "straws" is not better for the environment than using "plastic" "straws". | https://hallandalebeachfl.gov/1237/Straw-Ordinance | Straw Ordinance | Hallandale Beach, FL - Official Website | Straw Ordinance
What!? The City has a Straw Ban?
As a coastal community, the City is committed to environmental leadership and the conservation of natural resources. The City realizes that discarded plastic straws threaten wildlife and degrade ecosystems. Because of this, Commissioner Richard Dally proposed a straw ban in 2018. With the Ban's adoption, plastic and bio-plastic straws are banned within the City of Hallandale Beach and on its public beaches effective January 1, 2019. Specifically, the City has banned the sale and distribution of plastic straws within the City Limits and the use of plastic straws on public beaches within City Limits. There are a few exceptions, which are described below.
What does the Straw Ban mean for restaurants?
Since the sale and distribution of plastic straws is banned, restaurants, bars, and other food and beverage providers are required to take a different approach to their beverage service. The City suggests businesses take one or more of the following approaches:
1) Switch to non-plastic straw alternatives like paper straws.
2) Do not provide a straw with every beverage served. Offer non-plastic straws by request only.
3) Purchase and incorporate re-usable non-plastic straws such as stainless steel, silicone, and glass straws.
4) Do not offer straws with beverages at all.
5) Join Surfrider’s “Ocean Friendly Restaurants” program and receive a 50% discount on paper straws from Aardvark Straws.
See below for downloadable designs which you can print in-house to educate your customers about the Straw Ban.
While paper straws can be more expensive than plastic straws, by offering them by request only or by taking part in the "Ocean Friendly Restaurants" program, businesses may experience a net benefit from the Straw Ban. The Straw Ban is enforceable starting January 1, 2019. Please note that many paper straw vendors are experiencing a 7-12 week shipping window.
What does the Straw Ban mean for the environment?
Less pollution. Less dependence on fossil fuels. Less harmed wildlife. More environmental sustainability.
According to Strawless Ocean, Americans use over 500 million straws every day. In Miami alone, over 700,000 straws are disposed of each day. Most of these straws end up in the ocean, polluting the water and harming marine life. Plastic, including plastic straws, is made of petroleum, a fossil fuel which, when burned, contributes to global climate change. Approximately 4% of the world's oil production per year is used to make plastics. Plastic does not biodegrade but instead photodegrades into smaller pieces of plastic that are virtually impossible to remediate. The Great Pacific Garbage Patch is an island of these small plastic particles, which is twice the size of Texas. The Great Pacific Garbage Patch is one of five trash gyres in the world ocean. Scientists estimate that as much as 70% of marine debris actually sinks to the bottom of the ocean, so the visible Garbage Patch represents only a fraction of the problem. Lastly, 100,000 marine animals are killed each year due to plastic pollution, including straws, in the ocean.
By banning straws, the City of Hallandale Beach reduces its consumption of plastics and oil, reduces its contribution to marine debris, and takes steps to protect marine life.
What does the Straw Ban mean for residents?
The straw ban offers an opportunity for residents to lessen their impact on the environment, without even trying! When purchasing beverages throughout the City, residents will automatically be provided with a more sustainable alternative to single-use plastic straws. However, residents will need to be mindful about the plastics they bring with them to the beach. Effective January 1, 2019 residents will no longer be permitted to use plastic straws on public beaches within City limits. However, if the resident is differently abled and requires the use of a plastic straw, their needs are protected by the Ban’s exceptions.
Join the Movement- Downloadable Designs
Frequently Asked Questions
Question: Why should Hallandale Beach ban straws when not many other communities are banning them? Answer: Any reduction in the amount of single-use plastic used benefits the environment. Even when we try to dispose of plastic straws correctly, they often end up airborne in trash transport, ultimately finding their way to waterways and the ocean. The only way to reduce the future amount of plastic debris in the ocean is to reduce the amount of plastic used. The City of Hallandale Beach joins more than nine cities in the US and numerous private companies in banning straws. Alone, we set an example. Together, we change the world.
Question: Why do disabled or impaired people still get to use plastic straws? Answer: Some disabled individuals have difficulty lifting or holding glasses, difficulty swallowing, limited jaw control, or other conditions which can cause them to aspirate liquids when these are not consumed via a plastic bendy-straw. The City supports the needs of its disabled residents and extends the exception within the Straw Ban.
Question: Are paper straws actually better for the environment than plastic straws? Answer: Plastic straws are made of petroleum and take over 500 years to degrade. Paper straws are made of wood, a renewable resource, and biodegrade quickly. Paper straws are indeed better for the environment than plastic straws. However, using no straw at all is the most environmentally friendly option.
Question: What will happen to me if I buy a soft drink with a straw from another City and then bring it into Hallandale Beach? Answer: Nothing, as long as you do not bring the beverage and straw onto a public beach. However, you are always encouraged to bring your own re-usable straw for times like these!
Question: How will the Straw Ban be enforced? Answer: Starting January 1, 2019, violators of the Straw Ban will first be issued a written warning or notice of violation. Following the initial warning, violations within a 12-month period will incur the following fine schedule:
• Second Offense: fine not to exceed $100
• Third Offense: fine not to exceed $200
• Fourth and subsequent offenses: fine not to exceed $500
If you have a question about the Straw Ban which is not included here, email the City’s Green Initiatives Coordinator, Alyssa Jones Wood at [email protected]
Disclaimer: Under Florida law, e-mail addresses are public records. If you do not want your e-mail address released in response to a public records request, do not send electronic mail to this entity. Instead, contact the city by phone or in writing. | Answer: Any reduction in the amount of single-use plastic used, benefits the environment. Even when we try to dispose of plastic straws correctly, they often end up airborne in trash transport, ultimately finding their way to waterways and the ocean. The only way to reduce the future amount of plastic debris in the ocean is to reduce the amount of plastic used. The City of Hallandale Beach joins more than nine cities in the US and numerous private companies in banning straws. Alone, we set an example. Together, we change the world. Question: Why do disabled or impaired people still get to use plastic straws? Answer: Some disabled individuals have difficulty lifting/holding glasses, swallowing, limited jaw control, or other conditions which can cause them to aspirate liquids when not consumed via plastic bendy-straw. The City supports the needs of its disabled residents and extends the exception within the Straw Ban. Question: Are paper straws actually better for the environment than plastic straws? Answer: Plastic straws are made of petroleum and take over 500 years to degrade. Paper straws are made of wood, a renewable resource and biodegrade quickly. Paper straws are indeed better for the environment than plastic straws. However, no straws is the most environmentally friendly option.
Question: What will happen to me if I buy a soft drink with a straw from another City and then bring it into Hallandale Beach? Answer:Nothing, as long as you do not bring the beverage and straw onto a public beach. However, you are always encouraged to bring your own re-usable straw for times like these!
Question: How will the Straw Ban be enforced? Answer:Starting January 1, 2019 violators of the Straw Ban will first be issued a written warning or notice of violation. | yes |
Ecophysiology | Are pesticides harmful to all insects? | yes_statement | "pesticides" are "harmful" to all "insects".. all "insects" are "harmed" by "pesticides". | https://policy.friendsoftheearth.uk/insight/effects-pesticides-our-wildlife | Effects of pesticides on our wildlife | Policy and insight | Effects of pesticides on our wildlife
It’s not only bees that are harmed by pesticides. We show how routine use of chemicals harms birds, earthworms, hedgehogs, frogs, wild plants and wider nature.
Paul de Zylva, 06 Dec 2019
We know pesticides harm bees – the evidence is compelling. What about other wild species? When I searched for the effect of pesticides on wildlife, I couldn’t find a good summary so I produced this one.
In our new report 'Problems with Pesticides' I’ve pulled together the main findings of recent scientific studies and reviews of evidence from the UK and beyond.
From butterflies, beetles, damselflies and hoverflies to earthworms, hedgehogs, frogs and fish – not forgetting the impact on our water and soils – the evidence shows that problems with pesticides go well beyond bees.
The pesticide and industrial farming lobby insists crop protection and other products are aimed at real pests like aphids, but are safe for the insects we love, like bees. But independent studies show that routine and rising pesticide use isn’t smart enough to discriminate between species.
Close-up of aphids on stem
Most of the widely used chemicals are broad spectrum, meaning they affect more than just the intended target pest, disease or weed. If you’re a bee nesting or feeding near crops treated to control pests, like flea beetles, you’re likely to get a potentially harmful dose, like it or not. There’s also evidence of harm to soils and water, and the organisms that depend on them.
None of this has stopped the pesticides lobby pouring scorn on anyone who questions pesticide use, including independent scientists and researchers whose studies have exposed how pesticides passed as safe still harm a range of bee species and other vital pollinating insects. Problems with Pesticides also shows that other wild species and habitats are harmed.
Although pesticides were used initially to benefit human life through increase in agricultural productivity and by controlling infectious disease, their adverse effects have overweighed the benefits associated with their use. Gill and Garg, 2014
Creature contact with chemicals
It’s easy for wildlife to come into contact with chemicals, because the abundance of pesticides in fields, streets, parks and watercourses means they get exposed in different ways.
Residues in soils affect the quality and structure of soils and can be toxic to soil-living organisms.
Residues blown across fields and landscapes such as when fields are ploughed.
Chemicals leach into soils and onward into rivers and water courses and affect aquatic life.
When birds eat worms and insects, pesticide residues move up through the food chain.
When herbicides kill plants regarded as weeds they remove vital sources of food and shelter for wild species, adding to pressure on them to relocate, alter their diet, or starve.
Effects of pesticides on a range of wildlife
Problems with Pesticides highlights the scientific evidence for pesticides harming many more wild species than tends to be publicised. This adds up to a profound impact on nature, including on species needed to pollinate our crops or keep our soils healthy.
Key findings:
Earthworms
These are harmed by herbicides and pesticides, which disrupt enzymatic activities, increase mortality, decrease fecundity and growth, and change their feeding behaviour. Earthworms are vital for soil health and recycling of plant material, yet pesticides appear to be jeopardising their reproduction and survival.
Soil-living organisms
As well as earthworms, other important organisms are being affected by the long-term, indiscriminate and excessive application of pesticides. Studies point to severe effects on soil ecology, including potentially undermining the ability of soils to enhance crop production.
Birds
Studies have linked bird decline to pesticide use in the Netherlands and France, and a UK study for the Health and Safety Executive reported that farming practices have driven the decline of farmland birds. For example, a study of grey partridges found "good evidence that herbicides have played a significant role in their decline.”
Butterflies
A long-term study from 1985 to 2012 found declines in 15 butterfly species corresponding with areas of highest use of pesticides.
That tallies with a study of butterflies across 19 European nations, which found large declines since 1990. “The pesticide problem is especially a problem in the intensive agricultural areas of western Europe...", said one of the study team.
Water quality
A study of contamination of UK rivers and freshwaters found half of water samples in England exceeded chronic pollution limits. 88% of samples showed pesticide contamination.
Aquatic insects such as damselflies and dragonflies get less attention, partly due to inadequate monitoring of pesticides in water. But they may be more vulnerable than thought because the neonicotinoid type of pesticide is highly mobile in water.
A study of common blue‐tailed damselflies has found neonicotinoids playing “a central role” in their decline. A study of once common dragonflies in Japan related their decline to use of pesticides in rice growing.
Fish
A 2019 study of fish species in Japan reports aquatic systems being “threatened by the high toxicity and persistence of neonicotinoid insecticides” with effects cascading through ecosystems to alter the structure of the food chain.
Frogs, toads, other amphibians and reptiles
These creatures also appear to be at risk. Laboratory and field tests have linked pesticides passed as safe to mortality in frogs and toads.
Wild plants
Inadequate monitoring makes it hard to attribute the decline or loss of wild plants directly to herbicide use but they will reach non-target plant and other species and UK plant charity Plantlife cites "extensive use of herbicides” as a reason for wildflowers disappearing across Britain.
Mammals
Mammals are not immune: a comprehensive review of 58 British mammal populations found that species such as hedgehogs, water voles, and common and pygmy shrews declined by up to 66% over the past 20 years.
Hedgehogs are struggling for various reasons including habitat loss, land use change and a lack of insects for food, which is linked to use of pesticides and slug pellets in gardening.
The British Hedgehog Preservation Society has expressed concern about “the lack of food in sterile fields where lots of pesticides and chemicals are used – there are also larger scale farms so there are less hedgerows for hedgehogs to use.”
Why is this being allowed to happen?
Overuse of pesticides.
Pesticides are typically applied prophylactically even if the actual risk of harm from genuine pests or diseases is low.
Pesticides are widespread across landscapes, seasons and years – years being how long some pesticide residues can remain in soils, plant matter, rivers and other watercourses and water systems.
Incredibly, the effects of their ubiquitous use on species, habitats and the food chain are not tested or monitored. And the cocktail effect of different treatments used in combination isn’t well monitored.
While the effects of such overuse are unknown, there are concerns about reduced efficacy and pest resistance, similar to those about the overuse of antibiotics in treating animals and humans.
Rising use of pesticides
The area of the UK planted with crops has remained constant at about 4.6 million hectares (ha), yet the total area treated with individual active ingredients in pesticides rose from 59 million ha in 2000 to 73 million ha in 2016.
That 24% rise in active ingredients applied to land meant the average of 12.8 actives used per ha in 2000 rose to 15.9 per ha in 2016. Clearly, several active ingredients may be applied together but spray passes are also rising. In 2000 41% of cereal hectarage was sprayed more than four times, but by 2016 this had increased to 55%.
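As a quick sanity check on those per-hectare averages (a reader's sketch, not a calculation given in the report – it assumes the averages are simply the treated-area totals divided by the 4.6 million ha of cropped land), the arithmetic can be reproduced in a few lines of Python:

# Rough check of the per-hectare figures quoted above; assumes the averages
# are the treated-area totals divided by the cropped area.
cropped_area_mha = 4.6        # UK cropped area, million hectares
treated_2000_mha = 59.0       # area treated with individual active ingredients, 2000
treated_2016_mha = 73.0       # area treated with individual active ingredients, 2016

print(treated_2000_mha / cropped_area_mha)                       # ~12.8 actives per ha
print(treated_2016_mha / cropped_area_mha)                       # ~15.9 actives per ha
print((treated_2016_mha - treated_2000_mha) / treated_2000_mha)  # ~0.24, i.e. the 24% rise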
Flawed pesticide testing.
The problem starts much earlier with the testing of pesticides, long before bees, beneficial bugs and other wild species are affected. The safety checks for pesticides are not as robust and rigorous as the public has been told. For example:
Products are tested on a narrow set of species rather than the full range likely to be affected and whose biology and tolerance will differ from the intended target.
Tests don’t properly assess both lethal effects (how much of a product is needed to kill its target) and non-lethal effects, like harming the ability to feed and reproduce.
Unrealistic testing of products on their own, instead of how they’re actually used on the farm or in combination with other treatments.
Doctors and pharmacists advise us how to avoid cocktail-type side-effects when taking prescription treatments, but bees and other creatures don’t benefit from such knowledge because the testing is inadequate.
Pesticides and food security
For years, the orthodox view has been that pesticides are needed for food production, crop yields and security of food supply. But it’s increasingly recognised that routine use of pesticides is not essential for crop production and has serious implications for our environment, our health and future food security.
The UN blew apart such claims when it linked the self-interest of the pesticides industry with the loss of diverse farming systems, and the decline of natural predators with rising food insecurity and pest resistance. It said:
“The assertion promoted by the agrochemical industry that pesticides are necessary to achieve food security is not only inaccurate, but dangerously misleading. In principle, there is adequate food to feed the world; inequitable production and distribution systems present major blockages that prevent those in need from accessing it. Ironically, many of those who are food insecure are in fact subsistence farmers engaged in agricultural work, particularly in lower-income countries.”
So how can we turn the tide against pesticides?
Reduce pesticide use
Our report starts with setting ambitious targets to reduce the use and impacts of pesticides, ensuring they are used as a last resort, not a prophylactic first line of defence.
To do this, financial support and incentives for farmers and landowners need to be geared towards boosting the take-up of more sustainable ways to manage land and crops.
Independent advice
We must improve the advice given to farmers, growers and landowners. And make sure information is independent of the pesticides industry and its advisers, but instead supports knowledge transfer to and between users on ways to control actual pests.
More research into sustainable methods
Boosting research into and development of alternative ways to produce and protect crops is going to be crucial to farmers – innovative farmers trialling different methods are too often left to go it alone. We know enough to act now. But even with a significant reduction in pesticides, it’s vital that the products which remain in use are properly tested.
A robust testing and monitoring regime
The way pesticides are tested must be truly robust, rather than the flawed system currently in place. Product testing must cover all species likely to be affected, as well as cocktail effects on species when products are used in combination. Testing must be entirely independent of the pesticide industry.
Given the contamination of fields, water courses, habitats and landscapes, proper monitoring of pesticides is needed to track how they behave in the environment long after they have been applied, such as indirect effects on aquatic species.
The technology already exists for round-the-clock monitoring and tracking of pesticides’ real-world effects. Proper monitoring would then inform ongoing product testing to reduce or avoid the need for separate tests when the safety of a product is questioned.
Multiple benefits without pesticides
Modern industrial farming has created a culture of reaching for pesticides as an instant solution and has built over-dependence on these chemicals. In some cases pesticides have a place, but too little attention has been paid to other ways of producing and protecting crops.
Sustainable approaches can aid the restoration of soil health and water quality. And they can help more diverse wild species, including those which help control pests, to survive and thrive on farms and across entire landscapes.
Ultimately, farming and land use must move away from industrial monocultures to resilient and fully functioning natural ecosystems delivering multiple benefits.
Help make the countryside safe for wildlife and support farmers in cutting pesticide use.
Further reading
Although pesticides were used initially to benefit human life through increase in agricultural productivity, this report shows that their adverse effects on nature are substantial and that their use now needs to be minimised for the benefit of ecosystems and food production. | Effects of pesticides on our wildlife
It’s not only bees that are harmed by pesticides. We show how routine use of chemicals harms birds, earthworms, hedgehogs, frogs, wild plants and wider nature.
Paul de Zylva, 06 Dec 2019
We know pesticides harm bees – the evidence is compelling. What about other wild species? When I searched for the effect of pesticides on wildlife, I couldn’t find a good summary so I produced this one.
In our new report 'Problems with Pesticides' I’ve pulled together the main findings of recent scientific studies and reviews of evidence from the UK and beyond.
From butterflies, beetles, damselflies and hoverflies to earthworms, hedgehogs, frogs and fish – not forgetting the impact on our water and soils – the evidence shows that problems with pesticides go well beyond bees.
The pesticide and industrial farming lobby insists crop protection and other products are aimed at real pests like aphids, but are safe for the insects we love, like bees. But independent studies show that routine and rising pesticide use isn’t smart enough to discriminate between species.
Close-up of aphids on stem
Most of the widely used chemicals are broad spectrum, meaning they affect more than just the intended target pest, disease or weed. If you’re a bee nesting or feeding near crops treated to control pests, like flea beetles, you’re likely to get a potentially harmful dose, like it or not. There’s also evidence of harm to soils and water, and the organisms that depend on them.
None of this has stopped the pesticides lobby pouring scorn on anyone who questions pesticide use, including independent scientists and researchers whose studies have exposed how pesticides passed as safe still harm a range of bee species and other vital pollinating insects. Problems with Pesticides also shows that other wild species and habitats are harmed.
| yes |
Ecophysiology | Are pesticides harmful to all insects? | yes_statement | "pesticides" are "harmful" to all "insects".. all "insects" are "harmed" by "pesticides". | https://nationalzoo.si.edu/migratory-birds/news/when-it-comes-pesticides-birds-are-sitting-ducks | When it Comes to Pesticides, Birds are Sitting Ducks | Smithsonian's ... | Members are our strongest champions of animal conservation and wildlife research. When you become a member, you also receive exclusive benefits, like special opportunities to meet animals, discounts at Zoo stores and more.
When it Comes to Pesticides, Birds are Sitting Ducks
This update was written by Smithsonian Migratory Bird Center scientist Mary Deinlein
The word pesticide is a catch-all term for chemicals that kill or control anything that humans have deemed to be a pest. Such chemicals can be grouped according to the kind of organism targeted, such as insecticide (insect), herbicide (weed), fungicide (fungus), or rodenticide (rodent).
Most pesticide compounds in use today are synthetic; that is, they are man-made concoctions produced in a laboratory. A danger inherent to the use of synthetic poisons is that once the chemicals are released into the environment, they may harm unintended victims and have unanticipated effects.
On a global scale, over 5 billion pounds of conventional pesticides are used annually for agricultural purposes, forest and rangeland management, and disease control, as well as in homes, and on lawns, gardens, golf courses, and other private properties. Twenty percent of this total volume, or 1.2 billion pounds, is used in the United States alone. What does this massive chemical dousing of the earth mean for the health of the environment? Birds provide some of the answers.
Population declines and extensive mortality of birds strongly indicate that the health of the environment, and thus the health of organisms that depend on it, suffers due to the prevalence of pesticides. From songbird declines beginning in the 1940's, to population crashes of peregrine falcons, ospreys, and other predatory birds first detected in the 1960's, to the more recent deaths of over 5% of the world’s population of Swainson’s hawks during the winter of 1995, birds have been unwitting victims of pesticide contamination.
The Legacy of "Silent Spring"
In 1962, Rachel Carson’s eloquent and best-selling book, Silent Spring, drew international attention to the environmental contamination wrought by pesticides, particularly the insecticide DDT. Carson cited declines in the number of songbirds due to poisoning as a key piece of evidence.
Six years later came documentation of a more insidious effect of pesticide use. Accumulations of DDE, a compound produced when DDT degrades, were causing reproductive failure in several species of predatory birds, including Peregrine Falcons, Brown Pelicans, Osprey, and Bald Eagles. Not only was DDE toxic to developing embryos, it also caused eggs to be laid with abnormally thin shells. So fragile were the shells that the eggs would easily break under the weight of the adult bird during incubation.
DDT belongs to a class of insecticides known as organochlorines, which also includes dicofol, dieldrin, endrin, heptachlor, chlordane, lindane, and methoxychlor, among others. Some of these pesticide ingredients, such as dieldrin and heptachlor, are poisonous in very small amounts.
However, the most dangerous traits of the organochlorines are their persistence – that is, their tendency to remain chemically active for a long time – and their solubility in fat, which means they become stored in fatty tissues within organisms and can accumulate over time. Because of these two traits, contaminant levels become more concentrated with each step up in a food chain – a process known as biomagnification.
For example, when ospreys repeatedly feed on fish contaminated with DDT, increasing amounts of the pesticide are stored within their bodies. Biomagnification accounts for why predatory birds, being at the top of the food chain, are most severely affected by organochlorine pesticides.
Thanks partly to the fervor generated by Carson’s book and partly to a study done by the National Institutes of Health which found DDT or its by-products in 100% of the human tissues it examined, DDT and most other organochlorines were banned for use in the United States in the early 1970's.
Since the ban, numbers of the more severely affected bird species have slowly recovered. However, the fate of some populations of Peregrine Falcons remains uncertain because DDT, its breakdown products, and other organochlorines are still prevalent in the environment.
If DDT was banned in the United States in the early 1970's, why is there still a problem today? One reason is that the United States continues to export DDT, along with other pesticides known to be hazardous to the environment and to human health.
The countries of Latin America, the wintertime destination for many of the migratory birds that breed in the United States and Canada (including many Peregrine Falcons), are also the destination for many of these exported pesticides.
Over 5 billion pounds of pesticides are used annually. What does this mean for the health of our environment? Birds provide some of the answers.
New Pesticides, New Problems
Because of the ban on DDT and the tight restrictions placed on other organochlorines, a new arsenal of pesticides predominates today. Organophosphates and carbamates are now two of the most common classes of active ingredients found in pesticide products. Although organophosphate and carbamate compounds are not as persistent as the organochlorines, they are much more acutely toxic, which means that even very small amounts can cause severe poisoning.
It is estimated that of the roughly 672 million birds exposed annually to pesticides on U.S. agricultural lands, 10% – or 67 million – are killed. This staggering number is a conservative estimate that takes into account only birds that inhabit farmlands, and only birds killed outright by ingestion of pesticides. The full extent of bird fatalities due to pesticides is extremely difficult to determine because most deaths go undetected.
Nevertheless, sobering numbers of dead birds have been documented. For example, in 1995, the pesticide monocrotophos, sprayed to kill grasshoppers, was responsible for the deaths of at least 20,000 Swainson’s Hawks in Argentina. Thanks to the efforts of the American Bird Conservancy and other organizations, Novartis (formerly Ciba-Geigy), a major manufacturer of monocrotophos, has recently agreed to phase out the production and sale of this pesticide.
Over 150 bird "die-offs", involving as many as 700 birds in a single incident, have been attributed to diazinon, an organophosphate insecticide commonly used for lawn care.
In 1990, diazinon was classified as a restricted ingredient and banned for use on golf courses and turf farms, marking the first time regulatory action had been taken specifically on behalf of birds.
However, in most states diazinon is still available over the counter for use on home lawns and parks. So despite the restricted-use status, as much as 10 million pounds of diazinon are still used yearly in the United States, primarily by home owners.
Continued reports of bird fatalities, and additional evidence concerning the extreme toxicity of diazinon and its metabolites to aquatic invertebrates and mammals have prompted the US Fish and Wildlife Service and a consortium of environmental organizations headed by the Rachel Carson Council to petition the Environmental Protection Agency to further restrict uses of diazinon.
In 1989, the Environmental Protection Agency reported that carbofuran was estimated to kill at least 1-2 million birds in the United States each year. This carbamate pesticide was introduced in the mid-1960's, but it wasn’t until 1994 that any regulations were imposed on the manufacturer, FMC Corporation. Granular forms are now banned for most uses because of widespread bird kills, although about 2 million pounds in liquid form are still used in the U.S. each year.
So far, about 40 active ingredients in pesticides have been found to be lethal to birds, even when used according to the instructions on the label. Only about a quarter of these ingredients have been banned in the United States, and most are still used elsewhere. The active ingredients that have proven to be deadliest to birds include diazinon, phorate, carbofuran, monocrotophos, isofenphos, chlorpyrifos, aldicarb, azinphos-methyl, and parathion.
Routes of Exposure and Direct Effects
Ingestion is probably the most common way that birds are exposed to pesticides. Birds can swallow the pesticide directly, such as when a bird mistakes a pesticide granule for a seed, or indirectly, by consuming contaminated prey. They may also ingest pesticide residues off feathers while preening, or they may drink or bathe in tainted water. Pesticides can also be absorbed through the skin, or inhaled when pesticides are applied aerially.
Whether or not a bird is harmed as a result of pesticide exposure depends on a number of factors, including the toxicity of the chemical(s), the magnitude and duration of exposure, and whether the exposure is recurrent. Potential harmful effects range from imminent death due to acute poisoning to a variety of so-called "sub-lethal" effects, including the following:
eggshell thinning;
deformed embryos;
slower nestling growth rates;
decreased parental attentiveness;
reduced territorial defense;
lack of appetite and weight loss;
lethargic behavior (expressed in terms of less time spent foraging, flying, and singing);
suppressed immune system response;
greater vulnerability to predation;
interference with body temperature regulation;
disruption of normal hormonal functioning;
and inability to orient in the proper direction for migration.
Each of these sub-lethal effects can ultimately reduce populations as effectively as immediate death, since they lower birds’ chances of surviving or reproducing successfully, or both.
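A toy model makes the point concrete. In the Python sketch below, a 20% drop in breeding success depresses a hypothetical population's growth rate nearly as much as a 20% drop in adult survival; every parameter is invented for illustration and describes no real species:

def growth_rate(adult_survival, fledglings_per_pair, juvenile_survival):
    # Next year's breeders per adult: the adult itself (if it survives)
    # plus its share of this year's young who survive to breed.
    return adult_survival + (fledglings_per_pair / 2) * juvenile_survival

baseline       = growth_rate(0.60, 3.0, 0.30)        # assumed healthy population
lower_survival = growth_rate(0.60 * 0.8, 3.0, 0.30)  # 20% more adults killed outright
lower_breeding = growth_rate(0.60, 3.0 * 0.8, 0.30)  # 20% fewer young raised (a sub-lethal effect)

print(f"baseline:                 {baseline:.2f}")
print(f"20% lower adult survival: {lower_survival:.2f}")
print(f"20% lower breeding:       {lower_breeding:.2f}")

Both perturbations drag the growth rate below 1.0, the point at which a population begins to shrink.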
Indirect Effects
Pesticides can also affect birds indirectly by either reducing the amount of available food or altering habitat. Birds that eat insects are literally at a loss when insecticides cause a drop in the number of insect prey available, especially when they have young to feed. The breeding season of many birds has evolved to coincide with peaks of insect abundance. Unfortunately for them, peaks in insect abundance also mean peaks in insecticide use.
Herbicides, too, can lead to decreases in insect availability by eliminating weeds on which insects live– a chain of events responsible for sharp declines of Gray Partridges in the United Kingdom. The food supply of birds that eat the seeds of weeds can also be reduced by herbicides. In Britain, Linnets, a type of seed-eating finch, have gone from being a rather common bird on agricultural lands to an extremely rare one due to this type of indirect herbicide effect.
Another way that herbicides can harm birds is by reducing the amount of plant cover available for predator avoidance and nest concealment. For example, herbicides have been used extensively in the western United States to convert sagebrush habitat into cattle pastures. This loss of sagebrush has caused declines in Brewer’s Sparrows, which require the cover provided by the plant for nesting.
Neotropical Migratory Birds
Birds that breed in the United States and Canada and winter in Latin America and the Caribbean are potentially exposed to more pesticides than are resident birds, given the great distances over which they travel.
Whereas the regulatory process for protecting the environment and human health in the United States may not be exemplary, conditions are generally worse in most Latin American countries where there are few regulations banning or governing the sale and use of pesticides. Therefore, resident birds and birds that overwinter in these countries, not to mention the people who live there, have a greater likelihood of exposure to harmful pesticides.
Pesticide contamination is often cited as one of the factors responsible for declining numbers of some Neotropical migratory birds, and yet so far there is very little hard evidence to support this claim. This lack of evidence does not necessarily mean that pesticides are not contributing to the declines; more likely it is testimony to the difficulty of detecting the role of pesticides in causing death or reproductive failure.
It has been shown that exposure to acephate, an organophosphate, can interfere with an adult bird’s ability to orient itself in the proper direction for migration. Who knows how many vagrants (birds that are seen far from the range which is normal for their species) sighted off course each year have been disoriented by pesticides? Or, how many migrants don’t make it to their right destination for this same reason?
When fat reserves are rapidly used up, as can occur during migration, enough accumulated organochlorine pesticides can be "liberated" within the body to cause death. Who knows what proportion of the birds that die during migration are victims of pesticide poisoning?
The same sorts of questions can be posed regarding the numbers of young birds that do not survive each year. How many were in nests that were inadvertently sprayed with pesticides, or were fed contaminated food, or did not receive enough food because pesticides reduced the number of available insects?
It is often difficult, if not impossible, to tease apart the many factors making life more and more difficult for migratory birds and to determine the relative contribution of each factor to population declines. Accordingly, the role that pesticides play remains unexplained. It stands to reason, however, that as the amount of wildlife habitat continues to dwindle and the quality of what remains takes on an even greater significance, anything that compromises that quality could be the proverbial straw that breaks the camel’s back.
Lessons from the Birds and the Bees
As evidence mounts regarding links between pesticide exposure and rates of sterility, cancers, hormonal disruption, and immune system disorders in humans, should we heed the warning signs provided by birds, or continue to pay the high environmental and social costs of rampant pesticide use?
Here are a few thoughts and figures to consider. The benefits of pesticides are often cited in terms of their contribution to world food production, and yet it is estimated that crop losses to pests would increase only 10% if no pesticides were used. Between 1945 and 1989, pesticide use in the U.S. increased tenfold, and yet crop losses doubled, from 7% to 14%. Consider also that all of us, everywhere, are exposed to some pesticide residues in food, water, and the atmosphere. Residents of the United States eat an estimated 2 billion pounds of imported produce tainted with banned pesticides each year.
Scientist Paul Ehrlich has compared pesticides to heroin in that "they promise paradise and deliver addiction." Pesticide use leads to dependency by killing not only the targeted pests but also the natural predators and parasites of those pests and through the development of resistance in the pests.
The destruction of natural enemies and increased resistance are countered by heavier and more frequent pesticide applications, thus maintaining the "pesticide habit" and increasing the costs of supporting it.
Honey bees and wild bees are among the victims of pesticide poisoning and their numbers are on the wane, a fact that is gaining increasing attention because of their economic, ecological, and agricultural importance as pollinators. With something as fundamental as the birds and the bees at stake, shouldn’t we all be concerned?
If you want to help reduce global contamination and its costs, here are some things you can do:
educate yourself and others about the effects of pesticides and alternative pest control methods,
buy organically grown products, and support organizations working to reduce society’s dependence on pesticides.
Related Projects
The Smithsonian Migratory Bird Center is the only scientific institution solely dedicated to studying migratory birds. These species fly thousands of miles every year from summer breeding grounds in the United States and Canada to warm winter homes in the Caribbean, Central and South America.
Ecophysiology | Are pesticides harmful to all insects? | yes_statement | "pesticides" are "harmful" to all "insects".. all "insects" are "harmed" by "pesticides". | https://www.saferbrand.com/articles/safe-use-of-pesticides | Your Insect Helpers: Don't Kill the Garden's Good Guys | Your Insect Helpers: Don't Kill the Garden's Good Guys
If you are new to backyard gardening, the sight of a beetle or a strange worm crawling across your plants can be alarming. Is it friend or foe? Will it hurt my plant or eat my vegetables, or is it just a friendly visitor passing through?
Before you pull out the spray bottle, take a few minutes to consider the effects of synthetic pesticides on the plants you’re growing. Will the chemicals linger? Do they break down in the environment? Will those chemicals cause unintended harm to other animals and plants?
That last question is especially important: Indiscriminate use of pesticides kills beneficial insects as well as harmful ones. Beneficial insects such as bees, spiders – yes, they’re good – and various predatory insects such as ladybugs, praying mantids and some wasps can be quickly killed with many general-use pesticides. The effects of pesticides, whether they are used to protect plants or directly target insects, can be devastating because they often kill all the insects in the area, not just the harmful ones.
It’s important to understand that the good bugs keep the bad ones in check far better than you can do so without them. When the balance is right, the good guys do their job – feeding on the invaders that threaten your plants. While you may have some chewed leaves or lost fruit, overall, your garden will benefit. The army of destructive insects won’t conquer the garden when it’s properly supported by a legion of helpful insects.
As a new gardener, you need to learn how to tell the pests from the good bugs, and then choose control methods that adhere to organic standards. By learning these procedures, you will be working in concert with nature and helping the beneficial insects do their job even more efficiently.
How Pesticides Work
Pesticides poison insects or disrupt their lifecycle in some way, which ultimately results in their death. Picking the right pesticide can be difficult, though. A pesticide that kills the Colorado potato beetle may also spell the end for helpful beetles, such as ladybugs. Most of us know that ladybugs are harmless, but not everyone realizes that they’re incredibly helpful garden insects, often targeting aphids and other small insect pests.
Effects of Pesticides on Plants
General-use synthetic pesticides pose another, less obvious danger – they don’t break down very quickly. This creates a “killing blanket” over a treatment area: once applied, the pesticide lingers, killing off any insects that wander in.
The result? The insects will suffer and your garden will suffer, too. This happens because your plants can’t get their normal benefits – namely pollination – from beneficial insects that regularly visit your garden and flowerbeds. Even as weeks and months go by, the effects of synthetic pesticides may linger and leave your plants struggling.
Unintended Consequences
Most gardeners are familiar with the unintended environmental consequences of pesticides. Since the 1950s, the widespread use of arsenic-laced pesticides and chemicals such as DDT has harmed important players in the environment. DDT doesn’t break down well, and it worked its way up the food chain until it threatened the eggs of eagles, hawks and other birds of prey.
Arsenic-laced pesticides proved troublesome as well. The heavy use of these chemicals has been linked to the rapid decline of honeybee colonies throughout North America. Honeybees, of course, are vital pollinators for farm crops and backyard gardens.
As gardeners and farmers learned of the dangers of synthetic pesticides, more turned to organic gardening methods. They began to seek out horticultural practices that work with nature and not against it. Along with seeking out pesticides compliant for use in organic gardening, they sought ways to better use predatory insects in their battle against pests. The ultimate goal: To keep the ratio between beneficial and harmful garden insects in balance.
A City in the Soil
Another, lesser-known, yet noteworthy unintended consequence of using pesticides is harming the so-called “city in the soil.”
What’s the city in the soil? You have many such metropolises toiling away in your yard, garden and flowerbeds right now! These cities can be found inside every teaspoon of garden soil found on your property. Inside that scoop of dirt lives a thriving community of thousands upon thousands of beneficial microbes, which include bacteria, fungi, nematodes and arthropods. These microbes scavenge through the minerals in your garden soil as they search for the organic compounds left by decaying plants. Some of these microbes digest food particles and excrete the nutrients your plants need to thrive: nitrogen, phosphorus, potash. Others attach to plant roots so plants can readily and easily absorb nutrients from the soil. In total, these microbes create the healthy soil that boosts the productivity of your vegetable patch, flower bed and lawn.
When you use synthetic pesticides or harsh fertilizers, you risk wiping out those colonies of beneficial microbes. Pesticides can kill them just as easily as the large insects you can see.
To help your soil flourish with these microbes, you can regularly mix compost into your garden soil. Compost that begins with Safer® Brand Compost Plus gets a kick-start of its own, thanks to the range of microorganisms it fosters within the compost. Once your compost has completed its transformation, add it in layers to your garden or flowerbed and you’ll see the results in the coming season and your city in the soil will be stronger than ever.
Safer® Brand Insecticides Offer a Good Solution
Safer® Brand’s variety of products offer gardeners a great alternative to synthetic pesticides. These OMRI Listed® insecticides treat aphids, grubs, stink bugs, Japanese beetles and more while remaining compliant for use in organic gardening. Other options for insect control include diatomaceous earth, a mechanical insecticide, or neem spray, which harms only the intended insect victim. The result is a garden that gets help for problem pests without killing off beneficial insects, too.
To learn more about our products that are compliant for use in organic production, visit Safer® Brand on Facebook. Subscribe to our E-Newsletter for helpful articles from SaferBrand.com, special product announcements and links to the products you need.
Ecophysiology | Are pesticides harmful to all insects? | yes_statement | "pesticides" are "harmful" to all "insects".. all "insects" are "harmed" by "pesticides". | https://www.nytimes.com/2015/04/09/business/energy-environment/pesticides-probably-more-harmful-than-previously-thought-scientist-group-warns.html | Pesticides Linked to Honeybee Deaths Pose More Risks, European ... | Pesticides Linked to Honeybee Deaths Pose More Risks, European Group Says
PARIS — An influential European scientific body said on Wednesday that a group of pesticides believed to contribute to mass deaths of honeybees is probably more damaging to ecosystems than previously thought and questioned whether the substances had a place in sustainable agriculture.
The finding could have repercussions on both sides of the Atlantic for the companies that produce the chemicals, which are known as neonicotinoids because of their chemical similarity to nicotine. Global sales of the chemicals reach into the billions of dollars.
Pesticides are thought to be only one part of the widespread deaths of bees, however. Other factors are believed to include varroa destructor mites, viruses, fungi and poor nutrition.
Two of the main producers of neonicotinoids — Syngenta, a Swiss biochemical company, and the German company Bayer CropScience — have sued the European Commission in an effort to overturn the ban, saying it is not supported by the science. That legal case is still pending.
Research has been directed largely at the effects of neonicotinoids on honeybees, but that focus “has distorted the debate,” according to the report released on Wednesday by the European Academies Science Advisory Council.
The council is an independent body composed of representatives from the national science academies of European Union member states. The European ban is up for review this year, and the council’s report, based on the examination of more than 100 peer-reviewed papers that were published since the food safety agency’s finding, was prepared to provide officials with recommendations on how to proceed.
A growing body of evidence shows that the widespread use of the pesticides “has severe effects on a range of organisms that provide ecosystem services like pollination and natural pest control, as well as on biodiversity,” the report’s authors said.
Predatory insects like parasitic wasps and ladybugs provide billions of dollars’ worth of insect control, they noted, and organisms like earthworms contribute billions more through improved soil productivity. All are harmed by the pesticides.
The report found that many farmers have adopted a preventive approach to insect control, soaking their seeds in the pesticides, a method that releases most of the chemicals directly into the environment. They said a farming approach known as integrated pest management, which takes a more natural approach to insect control, would allow for a sharp decrease in their use.
The authors were critical of studies of neonicotinoids on bee health that tested the insects’ ability to survive a single exposure to a given quantity of pesticide dust; they noted that the effect of the chemicals is cumulative and irreversible, meaning that repeated sublethal doses will eventually be deadly if a certain threshold is passed.
Considering the broad impact of the pesticides, they said, “the question is raised as to what extent widespread use of the neonicotinoids is compatible with the objectives of sustainable agriculture.”
Utz Klages, a spokesman for Bayer CropScience, said on Wednesday that the company stood by its position that its neonicotinoid products “can be used safely if they’re used according to the label.”
A European industry group to which Bayer CropScience and Sygenta belong sought on Wednesday to rebut the study, describing it as a “biased report.”
“This is not new research or even a meaningful review of all the studies available,” Jean-Charles Bocquet, director general of the European Crop Protection Association, said in a statement. “Rather, it is a misleading and very selective reading of some of the literature, especially from organizations well known for their opposition to neonicotinoids.”
The restrictive approach used by European regulators contrasts with the more lenient stance of United States regulators. In March, American opponents of neonicotinoid use delivered more than four million signatures to the White House calling for stronger action to protect pollinators.
The E.P.A. last week warned pesticide makers that it was unlikely to approve new uses for the class of pesticides “until new bee data have been submitted and pollinator risk assessments are complete.”
But critics say the E.P.A.’s interim policy is rife with loopholes, allowing continued use of existing products for approved applications, for example. They also criticized the agency for not halting the approval of some products that are chemically quite similar to neonicotinoids but classified differently for regulatory purposes.
A temporary ban on new uses “is going to have a negligible impact,” said Larissa Walker, director of a bee-protection campaign at the Center for Food Safety, an environmental advocacy group in Washington. “They really need to look at the bigger picture. They should prohibit all future registrations for all systemic pesticides.”
Pollination — the transfer of pollen from one flower to another, typically by wind, bug or bird — is essential to the global food supply. An estimated 75 percent of all traded crops, including apples, soybeans and corn, depend on pollination.
Neonicotinoids are absorbed by a plant so that the neurotoxic poison spreads throughout its tissues, including the sap, nectar and pollen. Far more deadly to insects than to mammals, they do not discriminate between harmful pests and beneficial pollinators.
But the pesticides are also among the most effective insecticides available to farmers. Proponents argue that they are essential to food security, and note that many of the chemicals they replaced were worse in important respects.
Ecophysiology | Are pesticides harmful to all insects? | yes_statement | "pesticides" are "harmful" to all "insects".. all "insects" are "harmed" by "pesticides". | https://hgic.clemson.edu/factsheet/insecticidal-soaps-for-garden-pest-control/ | Insecticidal Soaps for Garden Pest Control | Home & Garden ... | Insecticidal Soaps for Garden Pest Control
If you are looking for a safe, effective, and low toxicity alternative to more toxic pesticides to control many undesirable insects in your garden, insecticidal soaps may fit the bill. Insecticidal soaps have many advantages when compared to other insecticides. They are inexpensive to use, are among the safest pesticides, leave no harsh residue, are natural products that are virtually non-toxic to animals and birds, and can be used on vegetables up to harvest. In addition, most beneficial insects are not harmed by soap sprays.
Small, soft-bodied insects such as aphids, mealybugs, thrips, scale crawlers, and spider mites are most susceptible to the soaps. Insecticidal soaps kill by suffocation: they appear to disrupt the cellular membranes of the insect, and they remove the protective waxes that cover the insect, resulting in dehydration. Insecticidal soaps are also an effective leaf wash to remove honeydew, sooty mold, and other debris from leaves.
Soaps are made when the fatty acid portion of either plant or animal oils is joined with a strong alkali. They are potassium salts of fatty acids. Commercial insecticidal soaps are a highly refined version of liquid dish soap. While you could make your own insecticidal soap mixture, homemade versions carry a substantially increased risk of plant injury. Dry dish detergent and all clothes-washing detergents are far too harsh to use on plants because of all the additives in them. Some soaps and detergents are poor insecticides, and other additives in these products may be phytotoxic (i.e., they may damage the plant).
Some plants are sensitive to soap sprays and may be seriously injured by them. Read the label to make sure your plant is not one of them.
Other somewhat sensitive plants are azaleas (Rhododendron spp.), begonias (Begonia spp.), fuchsias (Fuchsia spp.), geraniums (Pelargonium spp.), and impatiens (Impatiens spp.). Rinse plants with a clean water spray if they show signs of wilting or leaf edge browning within a few hours of treatment.
To test for plant sensitivity, spray a small area and wait 24 hours to see if any damage occurs. Plants under water stress should not be sprayed.
Application
As with anything applied to plants, it is important to read the entire label and carefully follow the directions. Insecticidal soaps are usually used as a 1 to 2% solution (2½ to 5 tablespoons per gallon). Always follow the label for the product you are using. Do not attempt to use in higher concentration, as this may be very harmful. Mix the soap concentrate in a clean sprayer. Do not apply the soap in full sun or at temperatures above 90 ºF as this may damage the plants. High temperatures and high humidity may increase plant stress and, therefore, sensitivity. It is best to treat your plants in the early morning or late in the day. Since the soap spray is only effective as long as it is wet, the slower drying conditions favor better insect or mite control.
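The label arithmetic can be written out as a small Python helper; the 256 tablespoons-per-gallon figure is standard U.S. measure (16 tablespoons per cup, 16 cups per gallon), and the 1 to 2% range simply restates the paragraph above; the product label always takes precedence:

TBSP_PER_GALLON = 256   # 16 tablespoons per cup x 16 cups per U.S. gallon

def soap_tablespoons(gallons_of_spray, percent_solution):
    # Tablespoons of soap concentrate needed for a percent-by-volume mix.
    return gallons_of_spray * TBSP_PER_GALLON * percent_solution / 100

for percent in (1, 2):
    print(f"{percent}% solution: {soap_tablespoons(1, percent):.1f} tablespoons per gallon")

This works out to roughly 2.6 and 5.1 tablespoons per gallon, in line with the 2½ to 5 tablespoons quoted above.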
It is important to spray both the top surface and, especially, the underside of the leaves as many of the pests will be found there. Because of the relatively short residual action and the fact that the insects must be in contact with the soap to be effective, repeat applications may be necessary every 4- to 7-days (follow the label directions) until the pests are eliminated. Avoid excessive applications as leaf damage may accumulate with repeated exposure. Always follow the directions on the label.
The quality of the water you are using should be considered when using insecticidal soaps. Hard water reduces the effectiveness of the insecticidal soap. Calcium, magnesium, and iron cause the fatty acids to precipitate out of the solution causing the soap to be ineffective. It is important to use the purest water possible. You can determine if your tap water is compatible by mixing the recommended concentration of soap that you want to use with the appropriate amount of water in a glass jar. Agitate and let the mixture stand for 15 minutes. If the mix remains uniform and milky, the water quality is fine for the spray. If there is scum on the surface, you should use distilled or bottled water.
The only disadvantages of insecticidal soaps stem from their inherent limitations:
The soap solution must wet the insect during application.
There is no residual effectiveness because soap dries or is washed away.
There is a potential for phytotoxicity when the soap residue is affected by high temperatures.
Insecticidal soaps can be found where garden supplies are sold. They are available as either a concentrate or as a pre-mixed RTU (Ready to Use) spray bottle. Some commonly available insecticidal soap brands are:
Bonide Insecticidal Soap RTU
Espoma Organic Insect Soap RTU
Garden Safe Insecticidal Soap Insect Killer RTU
Miracle-Gro Natures’s Care Insecticidal Soap RTU
Natria Insecticidal Soap RTU
Natural Guard Insecticidal Soap Concentrate
Safer Brand Insect Killing Soap Concentrate
Whitney Farms Insecticidal Soap RTU
Insecticidal soap is a great tool for any gardener. It provides a safe and effective way to grow plants naturally, control many soft-bodied pests safely and reduce the number of harsh chemicals needed to keep your garden lush, lovely, and healthy.
There are brands of insecticidal soap that contain an additional active ingredient such as neem oil, pyrethrin, sulfur, or spinosad. These are all-natural insecticides and can aid in pest control.
Pesticide recommendations are updated annually. The last update was done on 7/21 by Joey Williamson.
This information is supplied with the understanding that no discrimination is intended and no endorsement of brand names or registered trademarks by the Clemson University Cooperative Extension Service is implied, nor is any discrimination intended by the exclusion of products or manufacturers not named. All recommendations are for South Carolina conditions and may not apply to other areas. Use pesticides only according to the directions on the label. All recommendations for pesticide use are for South Carolina only and were legal at the time of publication, but the status of registration and use patterns are subject to change by action of state and federal regulatory agencies. Follow all directions, precautions and restrictions that are listed.
Clemson University Cooperative Extension Service offers its programs to people of all ages, regardless of race, color, gender, religion, national origin, disability, political beliefs, sexual orientation, gender identity, marital or family status and is an equal opportunity employer.
Ecophysiology | Are pesticides harmful to all insects? | yes_statement | "pesticides" are "harmful" to all "insects".. all "insects" are "harmed" by "pesticides". | https://faunalytics.org/whats-abuzz-with-wild-insects/ | What's Abuzz With Wild Insects? - Faunalytics | Beyond their six legs, segmented bodies, and external skeletons, it’s difficult to lump all insects into the same category. This is because they live in virtually any region and habitat, from the arctic tundra of Siberia to the wet tropics of Australia. Some insects, like cockroaches and mosquitoes, are generalists in terms of their diet and habitat. Others require certain food sources and have adapted to surviving in specific environmental conditions. Although insects are commonly viewed by humans as pests, a growing body of evidence suggests they are sentient animals with emotions and complex capabilities.
Furthermore, insects are vital to safeguarding the global food system. Bees, butterflies, moths, and certain types of flies, beetles, and ants act as pollinators to help fertilize plants. Others consume dead organisms and break down plant matter to improve soil quality. Because of the vital role they play in an agricultural setting, it’s safe to say that protecting insects is necessary if we want to continue feeding the world.
The problem is that insect populations are in decline. Beekeepers in the U.S. reported losing up to 46% of their managed colonies from 2020-2021, and globally, one in six bee species is now regionally extinct. Furthermore, meta-analyses have found that 40% of insect species in general are in decline and at risk of extinction. This is double the rate for vertebrates, but what’s even more frightening is that limited research exists on the vast majority of insects in existence. This means that the declines may potentially be more drastic than we know.
The media loves to share ideas for consumers to protect native insects and pollinators. You might be familiar with recommendations to plant pollinator-friendly gardens or limit exterior lighting on your property to prevent light pollution. These actions are important, but we also need to recognize the more systemic causes of insect declines — and arguably the most urgent threat to insects is our rapidly intensifying agriculture industry.
How Does Intensive Agriculture Harm Insects?
Just as wild birds and mammals suffer when their habitats are cleared to make room for grazing and farming land, so too do insects. Over the past 300 years, humans have rapidly converted forests, swamps, and rich meadows into pastureland, disrupting previously undisturbed habitats that many plant species — and the insects who consume them — relied on to survive.
One of the largest drivers of deforestation is the cow ranching industry, which currently uses 26% of the earth’s land surface as grazing land. This is one way that meat consumption drives environmental destruction. However, plant agriculture is also harmful to insects, in part because of the monoculture system, which is dominant around the world. In this type of system, large amounts of land are cleared for a single type of crop, creating a flat, homogenous landscape. Monocrop systems are particularly vulnerable to destruction by certain plant-eating insects and soil-borne diseases. Because of this, harmful fertilizers and pesticides are used to sustain the crops, which have detrimental side-effects. While it may be efficient from a production standpoint, monoculture systems are usually absent of hedges, wildflowers, crop stubble, and rich biodiversity that sustain wild insects.
Pesticides, Fertilizers, And Insects
Each year, 1 billion pounds of pesticides are used in the U.S. alone. On a global scale, this number increases to nearly 9 billion pounds. Many people think of pesticides as chemicals used to kill insects, but this is just one type. Pesticides may also target weeds, fungi, and other so-called “pests” who threaten agricultural production. However, even pesticides that aren’t meant for insects can end up harming them either directly or indirectly. What’s more, the damage pesticides cause isn’t always lethal — they can harm insect reproduction or make them more susceptible to diseases, which can be more broadly damaging than simply killing them.
Beyond the inherent welfare concerns of using chemicals to kill unwanted insects, pesticides also cause unintentional harm to pollinators and other beneficial species. For example, butterfly populations have declined 50% in the U.K. and Netherlands, caused in part by the widespread use of chemical pesticides. Even insects who don’t eat plants can be affected by pesticides. Dung beetles, who break down cow waste in farming pastures, are facing declines in the U.K., due to pesticide residue in their food sources.
Another problem is that pesticides are used and regulated differently from one country to the next. Although only 15% of pesticides used in the Netherlands are highly toxic for bees, the figure is closer to 33% in Brazil and 47% in Kenya. Neonicotinoid insecticides are banned for non-emergency use in Europe, but they remain the most commonly-used pesticides in the world.
Finally, another problem that can’t be overlooked is fertilizers. Using fertilizers to enhance crop production is often viewed as a positive thing, but overfertilization and the use of artificial fertilizers can exacerbate harms to insects. Some fertilizers are high in nitrogen, which has been shown to increase plant predation by unwanted insects including rose-grain aphids in the U.K., whitebacked planthoppers in Japan, and corn borers in the United States. This in turn leads to more pesticide use, which creates a cycle of harm. When synthetic fertilizers spread to local waterways, it can pollute the natural environment and harm other animals and insects.
Pollinators: The “Darlings” Of The Insect World
Research suggests that pollinators support around 75% of global crops and 33% of global crop production. Apples, grapes, tomatoes, pumpkins, avocados, coffee, and cranberries are just a few of the many pollinated foods we eat.
Bees, in particular, are critical for food security: Fewer bees means lowered crop yields and higher prices for the crops available. Although many animal advocates are against keeping bees for honey, beekeeping is an important income source for many rural communities around the world. The E.U. Insect Atlas notes that many women in developing countries, who are less likely to own land than men, rely on beekeeping for income.
The deforestation and monoculture systems affecting many insects are especially harmful for bees. Wild bees frequently build nests underground or live in trees and natural cavities. When land is changed to make way for agriculture, these bees lose both their homes and the wild flowers they use for nectar consumption.
Although pollinators are commonly viewed as the “darlings” of the insect world, they’ve earned this reputation because of the functional benefit they provide to humans. It’s promising that people care about what happens to pollinators, but they aren’t the only insects harmed by the agriculture industry — and they aren’t the only insects we should care about.
Problems With The Solution
Although experts are experimenting with different ways to protect wild insects, the proposed solutions may come with problems of their own. For example, to avoid pesticide use, some stakeholders suggest introducing natural predators of the insects who consume plant supplies in the agriculture industry. Others are testing so-called evolutionary traps, which involve manipulating an animal’s environment to encourage insects to make poor behavioral decisions (such as laying eggs in piles of wood that humans can then destroy). Neither type of insect management has been researched extensively, and it’s unclear whether they could also impact non-target insects. Furthermore, introducing predators to kill insects poses animal welfare concerns: Many insect predators “suck the juices” or lay eggs inside of their hosts.
To increase pollinator numbers, some companies have started breeding and selling pollinators and beneficial insect predators to the agriculture industry. However, the mass breeding and farming of insects is rife with welfare concerns that haven’t been fully explored. In addition, introducing non-native pollinator species into new habitats can create competition with local pollinators. There’s also a possibility that purchased species propagate so much that they become considered “pests” themselves, which happened with the harlequin ladybird: It was introduced to combat aphids in the 1980s and is now spreading across Europe and North America at the expense of local species.
Finally, new technologies are being developed to combat harmful insects and to supplement the rapid decline in pollinator populations, but it remains unclear what effect these developments will have in the long term. Some scientists propose genetically modifying plant-eating insects so they cause less harm to plants — for example, by releasing sterile individuals to mate with wild ones. While the idea seems promising, it may have catastrophic effects if the sterile individuals end up mating with other species, leading to unintended population declines. Likewise, there is an interest in using robotic pollinators to sustain crop production, but these so-called “robo-bees” can end up becoming litter on the fields, resulting in plastic and chemical pollution and potentially harming unsuspecting birds who consume them.
Buzzing Toward The Future
Although many international leaders recognize the urgent need to protect insects from the harms of intensive agriculture, relatively little has been done to push this issue forward in a meaningful way. From an economic perspective, governments can invest in finding alternative, scalable sources of food and energy that won’t require monoculture systems and mass deforestation. Subsidies, which are common in the agriculture industry, can be used to reward farming practices that support wild insects — planting hedges, cutting back on pesticides, and preserving plots of uncultivated land are a few examples.
Perhaps the most obvious way to support insects is by shifting away from intensive and monoculture systems. This means engaging in crop rotation, using cover crops to enrich the soil, farming smaller fields where possible, avoiding pesticides, and using limited, organic fertilizers. Organic farms, which prioritize pollinators and exclude synthetic chemical pesticides, now account for more than 4% of U.S. food sales. Research has found that organic farms have more insect biodiversity, but at the same time, they also produce lower crop yields. As a result, switching to a fully organic system likely isn’t feasible unless it’s coupled with a reduction in animal farming.
This brings us to our last point: As a consumer, one of the most effective ways to support wild insects is by reducing one’s meat consumption. According to the FAO, 26% of global ice-free land is used for grazing, while 33% of croplands are used for animal feed production. By shifting away from meat, we can reduce both grazing land and the monoculture systems used to produce soybeans and other animal feed crops.
Insects have been around far longer than humans, and we would struggle to survive without them. Human activity is largely responsible for the harm befalling insect populations, and the onus is on us to create a safer world where they can thrive. This starts and ends with our global food system.
Casey Bond is the Content Editor at Faunalytics and a lover of animals, words, and all forms of potatoes. She holds an M.A. in Animal Studies from NYU, where she produced research on alligator protection and interspecies homelessness, as well as an MSc in Social Psychology from the London School of Economics and a B.S. in Communications and Psychology from the University of Miami. She sits on the Advisory Board of Vegan Hacktivists and loves spending time with her bearded dragon, Stanley.
Ecophysiology | Are pesticides harmful to all insects? | yes_statement | "pesticides" are "harmful" to all "insects".. all "insects" are "harmed" by "pesticides". | https://armstronggrowers.com/neonicotinoids-bee-health-and-armstrong-growers | Neonicotinoids, Bee Health and Armstrong Growers | Armstrong ... | Neonicotinoids, Bee Health and Armstrong Growers
Armstrong's Commitment to Bee Health
Armstrong Growers is committed to practices that ensure honey bees are not harmed by the use of pesticides. We never spray pesticides containing neonicotinoids, commonly referred to as "neonics." Armstrong Growers drenches (applies a solution directly to the soil) with this class of pesticides when needed. This practice avoids the possibility that bees will come in contact with neonics. (Research has shown that extremely small amounts, if any, of the neonics move into flowers from a drench or granular application to soil. Drenching is the safest way to apply this class of pesticide.)
Secondly, Armstrong only uses this class of pesticides on an as-needed basis. It is impossible to say which plants are treated because of the way we manage our crops. We only apply if there is an urgent need, and in most cases we do not need to apply. Almost all insecticides will affect bees if they are sprayed on them or if bees come in contact with residues on treated foliage or blooms. That's why it's so important to read and follow all pesticide label instructions. (See New EPA Rules below.)
New EPA Rules
In 2012, the EPA put in place new and strengthened rules for pesticide labeling of pesticides containing neonicotinoids. The image of a honey bee must be included on labels of pesticides with potential to harm them. The words "This product can kill bees and other insect pollinators" must also appear. It must state that these pesticides cannot be applied when plants and trees are in flower—all petals must have fallen. There are several other warnings, including the importance of avoiding drift.
Armstrong Growers has eliminated all foliar treatments made with the three named neonicotinoid insecticides - Dinotefuran, Imidacloprid and Thiamethoxam - from our container and bedding plant fields.
Foliar applications of any insecticides are now known to be the most harmful to bees, and as a result we have strict controls on this practice. Research has shown that neonicotinoids represent a tremendous advancement over older pesticide treatment options. When used properly, neonicotinoids effectively control problem insects, while exhibiting less impact on non-target insects (including bees). Their ability to provide residual control means fewer applications and less applicant exposure. Other alternatives are more harmful to the environment and to beneficial insects, do not provide the same level of control, require repeated applications, leave pesticide residues on the foliage (impacting the aesthetic value of the plant material), and may also cause more plant phytotoxicity.
We at Armstrong acknowledge our stewardship role in using these chemistries; we deploy them as part of a management strategy like Integrated Pest Management or Best Management Practices and always use them only as directed by the EPA-approved label.
This is an ongoing discussion between governmental agencies, researchers, developers and growers, and Armstrong Growers is committed to working with our industry partners to find a positive resolution. We focus on conducting business in an environmentally responsible manner and are dedicated to making communities a better place for generations to come.
Ecophysiology | Are pesticides harmful to all insects? | yes_statement | "pesticides" are "harmful" to all "insects".. all "insects" are "harmed" by "pesticides". | https://civileats.com/2021/06/04/how-pesticides-are-harming-soil-ecosystems/ | How Pesticides Are Harming Soil Ecosystems | Civil Eats | Read more about
The first year after Jason Ward began transitioning his newly purchased conventional farm to organic production, he started seeing more earthworms in the soil beneath his corn, soybeans, and wheat fields. By the third year, he had spotted numerous nightcrawlers—big worms reaching up to eight inches long—on his 700-acre farm in Green County, Ohio.
With conventionally farmed land, “anything synthetic is hurting the natural ecosystem of the soil,” said Ward, whose acreage is now largely certified organic. “As you transition away from that, the life comes back.”
By life, Ward means the rich diversity of insects and other soil invertebrates—earthworms, roundworms, beetles, ants, springtails, and ground-nesting bees—as well as soil bacteria and fungi. Rarely do conversations about the negative impacts of pesticide use in agriculture include these soil invertebrates, yet they play a vital role in soil and plant health and sequestering carbon. Worms eat fallen plant matter, excrete carbon-rich casts and feces, cycle nutrients to plants, and create tunnels that help the soil retain water. Beetles and other soil insects feed on the seeds of weeds, or prey on crop pests such as aphids.
But those critical functions are jeopardized by more than a billion pounds of pesticides used in the U.S. every year, according to a new peer-reviewed study. Compiling data from nearly 400 laboratory and field studies, researchers at the Center for Biological Diversity, Friends of the Earth, and the University of Maryland found that pesticides harmed beneficial soil invertebrates in 70.5 percent of cases reviewed. Studies conducted in the field alone, however, resulted in fewer significant negative impacts (about 50 percent of cases reviewed).
“What this study really drives home is that pesticide use is incompatible with healthy ecosystems, across organisms, pesticide classes, and a whole set of different health outcomes, including death,” said Kendra Klein, senior scientist at Friends of the Earth and co-author of the study. “We have to be talking about pesticide reduction in conversations about regenerative agriculture.”
Herbicide use has risen steadily in the U.S. in past decades, particularly on genetically modified crops. Recent USDA surveys show 98 percent of soybean acres and 97 percent of corn acres are sprayed with herbicides with known health and environmental impacts, including glyphosate (Roundup), atrazine, and dicamba. Neonicotinoid insecticide use has also risen in recent decades as a seed treatment for field crops, even though pesticides in this class are implicated in colony collapse disorder in bees and potential endocrine disruption in humans. The U.S. lags behind the world’s largest agricultural producers, including Europe, China, and Brazil, in banning harmful pesticides, according to a 2016 study that found that more than a quarter of all agricultural pesticides used in the U.S. are banned in Europe.
The researchers reviewed studies covering 275 unique species and 284 different pesticides or combinations of pesticides available in the U.S. Insecticides, unsurprisingly, produced the largest negative impact on soil invertebrates (75 percent of cases), followed by fungicides (71 percent), herbicides (63 percent), and bactericides (58 percent). The pesticides either directly killed the organisms studied or significantly harmed them by impairing their growth, for example, or decreasing their abundance and diversity. The earliest signs of pesticide impact (e.g., structural changes and biochemical biomarkers of harm) were observed most frequently in soil organisms, followed by reproductive harms, mortality, and impacts on behavior, growth, richness and diversity, abundance, and biomass.
The findings add further evidence to the role that pesticides may be playing in biodiversity decline and the “insect apocalypse,” and they raise critical questions about the ability of soil to capture and store carbon if pesticides are killing or harming the very organisms that perform those vital functions.
Ecotoxicologist Ralf Schulz, a professor at the University of Koblenz-Landau, wasn’t surprised by the results but urged caution in their interpretation. Field-based studies, which account for one-third of the cases reviewed, showed fewer significant negative effects. One possible reason: Lab studies often use higher pesticide concentrations, while uncontrolled environmental variables could provide some buffering capacity for pesticide effects in the field. It’s important to evaluate whether lab studies tested pesticide concentrations at levels that would be found in the field, Schulz said, but the researchers noted that was beyond the scope of their paper. Schulz’s own research has found that pesticide toxicity has more than doubled for many invertebrates since 2005.
“It’s a very important study, but it’s not so easy to interpret the 70 percent negative effects directly,” Schulz said. “One shouldn’t assume that means ‘in 70 percent of soils we have problems,’” he added, “because that could be wrong.”
However, Richard Smith, an agricultural ecologist and associate professor of natural resources and the environment at the University of New Hampshire, said that—on the contrary—the findings could be “somewhat conservative.”
“It paints a really good picture of the general negative effects of pesticides on soil invertebrates,” he said, “but it doesn’t necessarily tell us the degree of [harm].”
Soil Invertebrates ‘Routinely Ignored’
The U.S. Environmental Protection Agency’s (EPA) risk assessment process for approving pesticides requires manufacturers to test for potential harmful effects on aquatic insects and the European honeybee, which is used as a surrogate for other terrestrial invertebrates, but not soil-dwelling organisms, said Nathan Donley, environmental health science director at the Center for Biological Diversity.
But the honeybee “is not adequately representative of a lot of really important insects and arthropods,” Donley told Civil Eats. “EPA thinks that terrestrial invertebrates fall in one of two categories—pollinators and bird food—but they do so much more than that. This is such an important group of animals that is being routinely ignored.”
The study results underscore the need to include soil organisms in any risk analysis of a pesticide that could contaminate soil, both Donley and Klein said. The risk assessment should also take into account the important functions these organisms provide, such as decomposing dead plants and animals, regulating pests and diseases, and sequestering carbon in the soil.
The Center for Biological Diversity and Friends of the Earth filed a legal petition, supported by 67 public health, environmental justice and other organizations, urging the EPA to include a robust assessment of the harms to six classes of soil organisms, beyond the European honeybee.
Bringing Pesticide Reduction into Regenerative Agriculture
Some of Smith’s published research has focused on the impact of seeds coated with insecticides and fungicides on weeds and belowground invertebrate communities. He found that most of the chemicals end up in the soil and aren’t taken up by the crops.
“We’re also finding that it travels quite a bit in the soil, and it resides in the soil for longer than folks suspected . . . As we’re looking at the data coming from these studies, [it shows] that even a small amount is having an impact on the soil communities.”
Klein says the researchers’ major motivation for conducting the invertebrates study was to call attention to the need to include pesticide reduction in the discussion about regenerative agriculture practices, such as no tillage and cover crop use, which have become popular in many food and agriculture circles for their soil health and climate benefits, but often include the use of herbicides.
The U.S. Department of Agriculture (USDA) under Biden is preparing to reward farmers who take up regenerative practices through a public carbon market and other incentives. Private companies such as Indigo Ag are also developing private carbon markets, while a number of “Big Food” players—ranging from General Mills to McDonald’s and Danone—are ramping up plans to incentivize regenerative practices in their supply chains. It worries Klein that pesticides aren’t a larger part of these conversations.
“So often the role of pesticides in harming soil health is left out,” Klein said. “That leaves the growing field of regenerative agriculture open to co-optation by pesticide companies. We’re seeing some of the worst actors trying to ride the momentum of soil carbon sequestration to identify new markets to sell their products,” she said, pointing to Syngenta, Croplife, and Bayer.
The practice of planting cover crops such as cereal rye and legumes is increasingly encouraged by soil health experts as a vital tool for sequestering carbon, retaining water, and increasing farm resiliency to climate change. On conventional farms, however, cover crops are often used in conjunction with reduced tillage, meaning that they’re getting “burned off,” or killed with Roundup and other herbicides, rather than being tilled into the soil.
Some farmers, like Ward in Ohio, however, use mechanical means, or a “roller crimper” to kill off cover crops like cereal rye, and they plant their soybeans or other cash crop into the residue of the rye. Mowing or roto-tilling the cover crop (but not the soil) are other means organic farmers use for mechanically removing cover crops, according to Rodale Institute Midwest Organic Consultant Léa Vereecke.
Klein points to research showing that organic farms sequester more carbon than conventional farms, but there is also evidence that organic farming practices can run counter to sequestering carbon in soil, because organic production tends to require a lot of tillage.
For example, Ward told Civil Eats that in some fields he is constantly tilling the soil to manage weeds. Three days after he plants, he’s in the field running a rotary hoe, and then he’s back three days later. “That’s one of the biggest downfalls—the fact that you do have to keep working that soil over and over and over again to get good weed protection,” said Ward.
Some organic farmers are working to dramatically reduce their tillage, but Vereecke doesn’t think “there is such thing as a 10-year-long rotation that doesn’t involve any tillage and is 100 percent organic.” Still, she points to research showing that if tillage is done wisely, optimal soil health and some carbon storage are obtainable. “Science is now showing us that herbicide use is more harmful than tillage,” she added.
There are no easy answers, Smith said. “There are tradeoffs in every aspect of agriculture, and I’m not sure that we have really figured that balance” between pesticide reduction and carbon sequestration. “But we’re working toward it.”
Meg Wilcox is a freelance writer based in Boston focused on solutions-oriented stories about the ways people are fighting climate change, protecting the environment and making our agriculture systems more sustainable, including by addressing poverty.
| no |
Ecophysiology | Are pesticides harmful to all insects? | no_statement | "pesticides" are not "harmful" to all "insects".. some "insects" are not "harmed" by "pesticides". | https://link.springer.com/article/10.1007/s10806-022-09889-0 | The Responsibility of Farmers, Public Authorities and Consumers for ... | Abstract
The worldwide decline in bees and other pollinating insects is a threat to biodiversity and food security, and urgent action must be taken to stop and then reverse this decline. An established cause of the insect decline is the use of harmful pesticides in agriculture. This case study focuses on the use of pesticides in Norwegian apple production and considers who among farmers, consumers and public authorities is most responsible for protecting bees against harmful pesticides. The extent to which these three different groups consider themselves responsible and the degree to which they are trusted by each of the other groups are also studied. This empirical study involves both qualitative interviews with Norwegian apple farmers, consumers and public authorities and survey data from consumers and farmers. The results show that consumers consider public authorities and farmers equally responsible for protecting bees, while farmers are inclined to consider themselves more responsible. Farmers, consumers and public authorities do not consider consumers significantly responsible for protecting bees, and consumers have a high level of trust in both farmers and public authorities regarding this matter. This study also finds that a low level of consumer trust in farmers or public authorities increases consumers’ propensity to purchase organic food, suggesting that those who do not trust that enough action is being taken to protect the environment take on more individual responsibility. This paper adds to the existing literature concerning the allocation of responsibility for environmental outcomes, with empirical evidence focusing specifically on pesticides and bees.
Introduction
Several recent studies show that insects, including pollinating insects, such as bees and butterflies, are declining in diversity and biomass (Biesmeijer et al., 2006; Conrad et al., 2006; Dirzo et al., 2014; Hallmann et al., 2017). Although some recent studies (Van Klink, 2020) give a less dramatic impression of insect decline, there are good reasons to be concerned—not only because of the loss of biodiversity but also because pollinating insects are critical for the survival of a wide range of plants in nature and those used by humans for food. An estimated one-third of food production worldwide is at risk (IPBES, 2016). Therefore, urgent action must be taken to both stop the decline and restore populations of pollinating insects, including bees.
Land-use changes and pollution have been indicated as two of the more important factors (Sánchez-Bayo & Wyckhuys, 2019) driving the observed insect decline. In recent decades, agricultural intensification has had profound global effects in terms of both changing landscapes and increasing the use of pesticides, especially insecticides that have the greatest effects on bees. The prevention of further insect population decline inevitably entails keeping bee-harmful pesticide use at a suitably low level. However, for such prevention measures to be implemented effectively, someone must assume responsibility.
In this paper, the authors’ main research question is as follows: what are the perceptions regarding who should assume responsibility for protecting bees and how should this responsibility be acted upon? This study contributes to the literature concerning responsibility for environmental outcomes in agriculture by considering the responsibility of the following three specific groups: (a) farmers who apply pesticides to their crops; (b) consumers who buy food products that may have been produced with pesticides that harm bees; and (c) public authorities, including both regulatory agencies and elected authorities. The extent to which farmers and public authorities are trusted to safeguard the wellbeing of bees is also considered. Apple production in Norway is used as the case study. The authors recognize that other groups, particularly pesticide manufacturers, may also be responsible for minimising any pesticide harm, but these groups are not a main focus in this study, as there are no pesticide manufacturers in Norway. Later in the paper, the literature on the responsibility of pesticide manufacturers will be reviewed.
Responsibility in relation to pesticide use has previously been investigated, e.g., by Karlsson (2007), Drivdal and van der Sluijs (2021) and Hu (2020), and numerous studies have investigated perceptions regarding who is responsible for environmental challenges, such as climate change (see, for instance, Bickerstaff et al., 2008; Neuteleers, 2019; Schlenker et al., 1994). However, to the authors’ knowledge, this study is the first to specifically examine the responsibility of consumers, farmers and public authorities in safeguarding bees against harmful pesticides.
The topics in this study include how the three identified groups (farmers, consumers and public authorities) hold each other responsible for the disappearance of bees and the extent to which they consider themselves and their group responsible. In addition, the study questions address trust relations, which are relevant for accountability. We also estimated whether consumers’ attitudes regarding responsibility and trust affect their willingness to take action to protect bees, i.e., in this case, their propensity to purchase organic food. The first part of this paper reviews the relevant literature on insect decline and responsibility. In the next part, the methodology and the results of the qualitative and quantitative studies of farmers, consumers and public authorities in Norway are detailed. Finally, the authors briefly discuss the research questions based on the findings and analyses.
Background
Over the last couple of decades, the disappearance of honeybees has been observed, especially in North America. This phenomenon has been partly ascribed to colony collapse disorder, and various causes of this disorder have been proposed, such as pesticides, pathogens, parasites and habitat degradation (Cox-Foster et al., 2007; Henry et al., 2012). Long-term studies in several countries have shown drastic changes in the community of bumblebees (Bommarco et al., 2012; Cameron et al., 2011). Furthermore, recent studies have shown that wild bees are often more important for many crops than are domesticated honeybees (Blitzer et al., 2016; Holzschuh et al., 2012).
Neonicotinoids constitute a group of systemic neurotoxic insecticides that are used against several pest insects. Over the last decade, numerous studies have revealed the sub-lethal effects of neonicotinoids on pollinating insects. These sub-lethal effects on bees include reduced immunocompetence (Brandt et al., 2016), reduced colony growth and reproduction (Rundlöf et al., 2015) and an impairment in the ability to remember the location of their hives (Henry et al., 2012). Ultimately, these harmful effects reduce the bees’ ability to provide pollination services to crops, such as apples (Stanley et al., 2015).
The EU sustainable use directive (Directive 2009/128/EC) adopted by Norway in 2015 makes integrated pest management (IPM) mandatory for crops used in food production. According to the guidance of IPM, chemical insecticides should only be applied when deemed necessary according to a defined set of principles (Barzman et al., 2015), such as when farmers observe pest insects over certain damage thresholds or when decision support systems (e.g., forecast models) predict attacks of pest insects. In Norway, several specific regulations restrict the use of pesticides. For instance, there are often restrictions on the number of times that a pesticide can be applied during the season or how close to harvest pesticides can be applied. There are also restrictions that prohibit insecticides from being sprayed over flowering vegetation or at certain times of day to avoid doing so when bees are present in the areas to be treated.
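To make that decision logic concrete, a minimal sketch of an IPM-style spray rule is given below in Python. This is purely illustrative: the threshold value, parameter names and the function itself are hypothetical and do not correspond to any specific Norwegian decision support tool or product label; the sketch only encodes the principle that insecticides are applied when observed pests exceed a damage threshold or a forecast predicts an attack, and never over flowering vegetation.

# Schematic IPM-style spray decision; all thresholds and names are illustrative.
def should_spray(observed_pests_per_tree: float,
                 damage_threshold: float,
                 forecast_predicts_attack: bool,
                 crop_in_flower: bool) -> bool:
    """Return True only when insecticide treatment is justified under IPM-style rules."""
    if crop_in_flower:
        # Restrictions prohibit spraying insecticides over flowering vegetation.
        return False
    return observed_pests_per_tree > damage_threshold or forecast_predicts_attack

# Example: pest pressure below the threshold and no forecast warning -> no application.
print(should_spray(0.5, damage_threshold=2.0,
                   forecast_predicts_attack=False, crop_in_flower=False))  # prints False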
Farmers in Norway are obliged to train and become certified to apply any pesticides. The Norwegian Food Safety Authorities (NFSA) oversee the training and certification of farmers who use pesticides. The NFSA are also in charge of controlling safe pesticide application. Random on-farm checks are carried out during which the NFSA review the pesticide protocols that the farmers are using. The NFSA also carry out national surveys that check harvested apples for pesticide residues. Fruit and vegetable wholesalers also carry out pesticide residue checks in Norway.
The Question of Responsibility
Since bees and other pollinating insects are of paramount importance for both the natural world and farming, it is crucial to address the question of where responsibility lies in terms of pesticide use. In this study, we examine the following three groups that can be considered to have key responsibility for preventing the decline in bee populations caused by pesticide use in food production: farmers, consumers and public authorities.
According to Karlsson (2007), responsibility for an unwanted event can be ascribed to someone who is culpable of contributing to it (the Culpability Principle) and who has the capacity to do something about it (the Capacity Principle). Someone can also be held responsible if a clear set of prescriptions to which the actor is bound applies to the event in question (Schlenker et al., 1994). In the case of pesticides and bees, farmers clearly play a crucial role in preventing pesticides from causing harm to bees, as they are the ones who apply the pesticides and thus have the ultimate control over the event (Mohring et al., 2020). Farmers also have a clear set of rules and regulations regarding pesticide use that they are expected to follow. However, the responsibility of farmers extends beyond simply following regulations without question, as there are no sanctions if farmers use the maximum amount of pesticides allowed instead of only applying pesticides when it is strictly necessary and when harmful insects are indeed present in their crops.
Farmers’ use of pesticides should be based on knowledge and competence, and a lack of these skills can be a cause of the overuse of pesticides (Hu, 2020). Moreover, farmers may make ethical considerations that are based on, for instance, the extent to which they believe the potential damage caused by pesticide use is acceptable (Sulemana & James, 2014).
It is also possible to argue that consumers have responsibility for bees. Consumers buy food produced with pesticides that may have harmed insects, and if they specifically do not purchase these products, farmers will also stop using bee-harmful pesticides. However, an average consumer usually has limited detailed information regarding which pesticides are applied and their effects on health or the environment. Moreover, even those who have such knowledge are unable to determine, when buying apples, which pesticides were used and how much was applied. However, it is possible to buy organically certified apples. An organically certified farmer is required to not use any chemical synthetic pesticides, and in apple production in Norway, only specific plant protection products can be used. When consumers request and purchase organically labelled products, the market for organic products increases, and more farmers will be incentivized to stop using chemical synthetic pesticides. According to Eden (1993), whether individuals consider themselves responsible for taking care of environmental problems depends on whether they believe that they can have an impact through pro-environmental behaviour and the extent to which they can choose this behaviour. Similarly, Bickerstaff et al. (2008) found in their study based on focus group interviews that the participants expressed a stronger sense of responsibility when the risk problems were framed in terms of choice and personal control than when the problems were framed as demanding collective or institutional responses.
For consumers to be responsible for bees and change their purchasing behaviour accordingly, they must be aware of the consequences of their purchases (Johnston & Szabo, 2011), which, in turn, depends on whether relevant information is readily available (Wells et al., 2011). However, even if such information is available, many consumers feel confused due to conflicting information regarding food safety and sustainability (Johnston & Szabo, 2011; Moisander, 2007). Furthermore, ethical aspects are only one of many reasons for making a shopping choice; other aspects, such as cost, comfort and habits, also play an important role, and consumers rarely have the time, energy or ability to make food choices based on reflective processes that aim to achieve social and environmental justice goals (Johnston & Szabo, 2011).
The very idea of “green consumption” as a solution to environmental problems has been criticized for being a part of a neoliberal political culture in which political decision-making is replaced by market rationality and stakeholder responsibilisation (Burchell, 1993; Shamir, 2008). The neoliberal responsibilisation of consumers downplays the pro-environmental roles of government and businesses and might “undermine a collective sense of civic responsibility and state regulation of ecological issues” (Johnston & Szabo, 2011). Organic certification and labelling can make it easier for consumers to choose ethically but can also represent a devolution, a transfer of regulatory control from public authorities to “the site of the cash register”. How broad public benefits can result from these individual consumption decisions is highly questionable (Guthman, 2007).
Several authors studying individuals’ responsibility for mitigating climate change claim that individuals’ duty is not to make lifestyle choices to reduce their environmental impact but, rather, to promote collective arrangements (Caney, 2014). This perspective points to public authorities, who, when there is an existential threat, such as climate change or the massive loss of bees, have a responsibility to protect people and the power to ensure that agents comply with their first-order responsibilities (Caney, 2014). By electing their politicians, a country’s citizens entrust the public authorities with a mandate; thus, the authorities are considered morally and legally responsible to the citizens (Pellizzoni, 2004). Responsibility is strongly linked to the notion of trust, and a perceived failure of responsibility can result in a loss of trust in organisations and institutions (Bickerstaff et al., 2008).
Finding and interpreting information about pesticides may be difficult for non-specialist individual citizens and farmers. Compared with farmers and consumers, public regulatory agencies have a better grasp of the knowledge base and thus can make informed judgements when prescribing regulations, although challenges related to research gaps and diverging interpretations of scientific results still exist (Milner & Boyd, 2017; Robinson et al., 2020).
The disappearance of bees can be shown to exemplify how a neoliberal and de-politicized ‘laissez-faire’ market economy is failing to deliver an optimal outcome for society. To “moralize” markets through the responsibilisation of stakeholders is insufficient (Shamir, 2008), and regulatory intervention by public authorities is necessary. With collective action problems, where so-called free riders have incentives to not cooperate for the benefit of all, there is a need for public institutions to enforce such cooperation (Neuteleers, 2019). In most countries, regulations governing pesticide use have been implemented to prevent unacceptably harmful pesticides from being applied to the degree that they cause fatal damage to pollinating insects. Regulation for environmental protection is in place in many different areas, and a wide range of regulatory techniques is used, from certification schemes and the provision of education and information to voluntary agreements and self-regulation (Lofmarck et al., 2017). Regulations are also implemented at higher levels such as the EU, and international bodies such as the FAO, WHO and WTO have developed standards and set maximum use and residue limits for pesticides used in food and feed.
Notably, although this study focusses on the responsibility of farmers, consumers and public authorities, other stakeholders also affect the wellbeing of bees, particularly pesticide manufacturers. Before authorisation decisions for new pesticides are made by regulators, pesticide companies have to provide scientific evidence that their products do not cause unacceptable harm (Hamlyn, 2019). This is done with scientific assessment studies that pesticide manufacturers have funded (Robinson et al., 2020). This position gives these manufacturers a strong responsibility in addition to the responsibility they have to provide label information regarding pesticide dosage for safe use (Hu, 2020).
Pesticide approval procedures have been criticized because of potential sources of conflicts of interest (Storck et al., 2017). In evaluations of pesticide risks, both social and ecological uncertainty and data gaps are present (Drivdal & van der Sluijs, 2021; Hamlyn, 2019), and there is concern regarding the lack of transparency in pesticide regulation processes. Scientific misconduct is frequently found in pesticide risk assessments, but misconduct is generally difficult to identify, denounce or stop (Robinson et al., 2020). Furthermore, although regulations are usually formulated at the international level, both pesticide governance and vigilance widely differ at the country level (Milner & Boyd, 2017), and the power and responsibility of pesticide manufacturers are stronger in countries that are less democratic, with less developed legislative and executive institutions (Hu, 2020).
Methodology
In this study, a mixed-methods approach was used to generate both qualitative and quantitative data. The data collection and storage methods were approved by the Norwegian Centre for Research Data (NSD) and were compliant with ethical and legal privacy regulations.
The qualitative data were generated from semi-structured interviews with six apple farmers, including five men and one woman, in the three main apple-producing regions in Norway (two producers from Hardanger, Sogn and Telemark each). The selection of farmers was performed with the help of the Norwegian agricultural extension service (NLR) such that they represented different age groups, genders, levels of experience and types of practice. Interviews were also conducted with two employees at the NFSA who had responsibilities related to pesticide regulations and use. Two focus group interviews with consumers were conducted; the participants were recruited by the market research company Norstat and represented members of the public. Each focus group included eight participants, and each interview lasted approximately two hours. In one group, the participants were aged between 18 and 35 years, and in the other group, the participants were aged between 36 and 70 years. The qualitative material was recorded, transcribed and coded with the software NVivo.
The quantitative data were gathered from a survey of Norwegian apple farmers who were recruited from fruit warehouses in Norway. Of the 460 farmers who received the questionnaire, 185 replied, but not all farmers replied to all questions. An internet-based survey of 1010 consumers was also carried out. The consumer respondents were recruited through the market research company Norstat. For this survey, Norstat ensured that the respondents were representative of the Norwegian population in terms of location, age and gender.
The quantitative data were collated and summarized, and the data from the consumer survey were analysed with an ordinary least squares regression using the software STATA.
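To illustrate what this analysis involves, a minimal sketch of an equivalent ordinary least squares model fit in Python (pandas and statsmodels) is given below. It is only an illustration: the authors ran their analysis in STATA, and every file and column name used here (consumer_survey.csv, organic_purchase_freq, trust_authorities and so on) is a hypothetical placeholder rather than the study’s actual variable coding.

import pandas as pd
import statsmodels.formula.api as smf

# One row per survey respondent (hypothetical file and column names).
df = pd.read_csv("consumer_survey.csv")

# Dependent variable: self-reported frequency of buying organic food.
# Regressors: trust and responsibility attitudes plus demographic controls.
formula = (
    "organic_purchase_freq ~ trust_authorities + trust_farmers "
    "+ resp_consumers + resp_farmers + resp_authorities "
    "+ importance_env + importance_price + importance_taste + importance_safety "
    "+ education + female + age + urban + income + has_children"
)

ols_fit = smf.ols(formula, data=df).fit()  # ordinary least squares
print(ols_fit.summary())                   # coefficients, standard errors, R-squared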
Qualitative and Quantitative Results: Responsibility and Trust
The Responsibility of Farmers
When asked who they thought had the greatest responsibility for ensuring that pesticides are used safely, the farmers, employees of the NFSA and participants in the consumer focus groups quickly indicated farmers. In particular, the farmers expressed that they had a great degree of responsibility. One farmer (female) explained that farmers are responsible because they are the ones who use the pesticides and that “you choose yourself whether you want to spray or not”. Another farmer (male) claimed that “as long as pesticides are allowed, it is the farmer’s responsibility to follow the criteria and rules that are set up; so, at the end of the day, it is the one using the pesticides that has the responsibility”. Another farmer expressed the opinion that when pesticides cause problems, it is because mistakes were made by farmers and that it is not the fault of the NFSA.
This perspective is in line with what an NFSA employee described, i.e., the authorities are responsible for ensuring that pesticides are safe to use, but farmers are responsible for using them safely. Another NFSA employee (female) expressed that those applying pesticides have duties and must fulfil certain criteria and that the NFSA expect the farmers to “familiarize themselves with the various pesticides and how they should be applied in a safe manner”.
The participants in the consumer focus group discussions also considered farmers responsible for the application of pesticides. In line with the farmers interviewed, one consumer (male, 54) explained that “it does not matter what the NFSA does or what the market demand is, if the farmer misuses the pesticides, he is the one who is going to cause damage to the environment”. The consumer groups also expressed that they had confidence in farmers. One participant (female, 28) said she trusted farmers because among all different professions, farmers are the ones who “think of the generations that will come after them”. Other participants believed that farmers would not use pesticides unnecessarily as this would be against their economic interest and would be throwing “money out of the window”. However, some participants expressed that it was rather the regulatory system that enabled them to trust farmers. Some noted that farmer activities were controlled by the NFSA and wholesalers, preventing deviations from the regulation. Furthermore, some consumers were uncertain whether the control system really detected “rotten eggs” and what type of sanctions a farmer would face if caught misusing pesticides. Although they believed that farmers would keep their pesticide use within the rules and regulations, some consumers still thought farmers would maximise profits at the expense of the environment if they could within the regulations.
I trust that the farmer does what he should within the regulations and laws. So, if he can spray with something and it is efficient for him, I think he will do it even if it may be harmful to the environment. Male consumer (63).
One participant wondered to what extent farmers were forced to make short cuts because of time constraints; others wondered how easy it was for farmers to be well informed about all different types of pesticides and how they should be used.
All farmers interviewed expressed that they would only apply insecticides when it was strictly necessary. One of the main reasons was the fear of killing beneficial insects, which could lead to a build-up of populations of other harmful insects that cause yield damage in later years. This finding reveals an economic motivation for reduced pesticide use and one that requires knowledge regarding the effects of pesticides on insect fauna and a longer-term perspective.
When you are spraying, you think about not killing insects, sparing the bees and other beneficial insects; so, you do it late or early as a night spraying; you don’t do it in the middle of the day (…). You know that the toxin doesn’t separate between the different insects. It’s the harmful insects that you want to do something about. Farmer (male), Hardanger.
Questions regarding trust in farmers were also raised in the consumer and farmer surveys. The question asked was “To what extent do you think the following statements are correct?”, followed by three different statements. The statements and results are shown in Fig. 1.
The results show that most consumers trust that Norwegian apple farmers follow the regulations concerning pesticide use, attempt to minimize the use of chemical pesticides, and have good knowledge of pesticides. Respondents who believe that farmers attempt to minimize pesticide use are fewer than those who believe that farmers have good knowledge of pesticide use and follow the regulations, but the difference is not large. The apple farmers expressed an even greater trust in each other’s competence and propensity to follow regulations and minimize pesticide use.
The Responsibility of Consumers
In the interviews, there was not always a spontaneous mention of consumers’ responsibility, and with the farmers and NFSA employees, it was necessary to ask specifically about consumers. However, once the topic was introduced, all groups mentioned consumers’ possible influence on pesticide use through the purchase of organic food. The consumers emphasised the responsibility of consumers more than the other groups, and one participant stated the following:
I think the ultimate power is with consumers. If we change our behaviour, we will force the farmers to change their behaviour. If we only buy organic, there will only be organic. Female consumer (45).
The idea that growth in demand for organic apples can also lead to innovation in new production methods without chemical pesticides was raised. One consumer voiced that in the same way that the popularity of electric cars had accelerated the development of car batteries, people’s wish to reduce pesticide use could have a similar effect on apple production.
Among both farmers and NFSA employees, there was mention of how consumers can influence pesticide use by purchasing food produced in Norway since regulations for pesticides in Norway are sometimes stricter than those in countries from which food is imported, particularly those outside the EU. Some farmers also mentioned consumers’ preferences for apples with a perfect appearance. One farmer (male) said, “Of course, when they want to have this kind of A4 apple all the time, to manage that, we have to apply some pesticides”. However, in this farmer’s opinion, the demand for perfectness was also to some extent the responsibility of wholesalers and retailers.
The consumer focus group participants noted several shortcomings in ascribing responsibility to consumers. The lack of information regarding pesticides was often mentioned along with the fact that people in general do not know about problems with pesticides or the potential advantages of organic production.
The consumers also argued that controlling pesticide use cannot be made dependent on people’s purchasing behaviour because “people always buy what is the cheapest”. Furthermore, as one participant noted, consumers already have many different issues to consider when shopping, and it was considered too much to ask consumers to take responsibility for pesticide use. Some focus group participants, therefore, expressed that it is better to reduce pesticide use through the legal and political system.
People have enough with themselves and their wallet; so, maybe Big Brother Government has to force people to do what is the best for everyone (…) There is something about the big common decisions, administrations in our society, it isn’t fair to put that on the shoulders of each and every one; someone up there needs to take responsibility for that. Female consumer (32).
This line of thought indicates that ordinary citizens’ influence on pesticide use is based on placing pressure on public authorities, such as through the voting system. A central reason to have a democratically elected public authority is to be able to make collective decisions for the common good.
There is something about that in each country, we choose someone to represent us (…), because that the bees survive is for our own good, and I trust that those who govern do what is for our best as long as they are elected by us. (…) You hope that the government takes responsibility because they have the power to do it, to install the large measures. Female consumer (28).
Here, the participant expressed that it is the task of elected leaders to do what is good and wished for by their voters. Therefore, to some extent, the responsibility still lies with the individual but as a citizen of a community with voting power rather than as an individual consumer with buying power.
The Responsibility of Public Authorities
The responsibility of the public authorities was acknowledged by all groups interviewed in our study. In the focus group interviews, the authorities were considered uniquely placed to gain an overview of the knowledge needed to formulate adequate regulations regarding pesticide use. In addition, they have the power to place constraints on farmers and pesticide manufacturers and sanction deviations from rules and regulations.
The way I understand it, the only ones who can do something efficiently are the ones who have the overview or control over the entire business, hence the public authorities. (…) One particular farmer can spray less, but he doesn’t control the other farmers. And, we can buy organic, but that’s not all the others. Male consumer (63).
The NFSA employees described their responsibilities for approving only pesticides that were considered safe and for writing pesticide use instructions that farmers can easily understand, thus ensuring that they would be applied in safe amounts. These responsibilities were also noted by some farmers, who emphasised the authority’s responsibility for giving them these instructions, such as instructions regarding the time periods when pesticides are safe to use.
Several participants in the consumer focus groups expressed a high degree of trust in the public authorities and expressed feelings of being “looked after” by the government, which was ensuring that all food sold in Norway was controlled and safe to eat.
I think that we live kind of in a «Nanny State». It’s a bit like Big Brother is taking care, Norway is taking care: we can regulate that, you can take that power away from me, take care of that, that’s fine. Female consumer (32).
Although they had the feeling that the public authorities were ensuring that everything was safe, some consumer participants also felt that this was slightly naïve and that “we as well can be surprised” or that “history shows that the authorities come running breathlessly after”. With reference to a current event of contaminated drinking water in a nearby municipality, one participant (male, 57) asked, “Why should this be different?”.
The survey of both consumers and farmers contained questions regarding trust and confidence in the public authorities’ work in ensuring safe pesticide use. The results are shown in Fig. 2. The question asked was “To what extent do you trust that the regulations in Norway ensure that the use of pesticides safeguard…”, followed by various environmental and health aspects that they could select.
Fig. 2
Consumer (N = 1010) and farmer (N = 166) survey answers regarding the extent to which they trust that the regulations in Norway ensure that the use of pesticides safeguard health and environmental aspects
The data show that the level of trust is high among both consumers and farmers but substantially higher among farmers. Consumers place slightly less trust in regulations to safeguard bees and other beneficial insects compared with consumer health, producer health, water quality and soil life.
In addition to farmers, consumers and public authorities, other stakeholders were also identified as responsible for safe pesticide use. It was noted that pesticide manufacturers, importers and wholesalers exert some control over farmers. Some consumer focus group participants noted the responsibility of researchers to discover the unintended and potentially harmful effects of pesticides. An NFSA employee remarked that the agricultural advisory extension service, which is the most important source of information for the farmers in Norway, has an important responsibility. Finally, one farmer also expressed that wholesalers had a responsibility to not price organic apples too high, making them too expensive to purchase.
Survey Results: Degree of Responsibility
In the questionnaire surveys, both farmers and consumers were asked who they believed had the most responsibility to ensure that pesticides do not harm bees. The question of responsibility was first posed as an open question. Of the 152 apple farmers who answered, only six said something other than “farmer”, “producer” or similar. Eleven answered “farmer” along with someone else, such as public authorities or pesticide producers. No respondent used the word “consumer”. This result indicates that most Norwegian apple farmers first and foremost consider themselves the main stakeholder for responsible pesticide use.
When the question was asked openly in the consumer survey, the answers were much more varied, with almost as many writing “public authorities” as “farmers” or equivalent. Only 27 of the 1010 respondents used the word “consumer”, which was mainly included along with another stakeholder.
The next survey question inquired about the degree to which public authorities, farmers and consumers were responsible for ensuring that pesticides do not harm bees. The results are shown in Fig. 3; 86% of the consumers thought public authorities were responsible to a large or a very large degree, while 88% said the same of farmers. Only 28% thought consumers had a responsibility to a large or a very large degree.
Fig. 3
Consumer (N = 1010) and farmer (N = 155) survey results regarding the extent to which they think consumers, farmers and public authorities have a responsibility to ensure that pesticides do not harm bees
The answers of the farmers were quite different. A much larger share of the respondents (38%) did not think consumers had any responsibility, and only 3.6% thought consumers had a responsibility to a large or a very large degree. In contrast to consumers, farmers consider themselves to have a higher degree of responsibility than public authorities.
When asked to rank the responsibility of the three groups from 1 to 3 (Fig. 4), 58% of the consumers ranked public authorities as the most responsible, while 41% ranked farmers first. Only 2% ranked consumers as the most responsible. In the farmer survey, 45% ranked public authorities first, while 51% ranked farmers first, and 4% ranked consumers first.
Fig. 4
Consumer (N = 1010) and farmer (N = 166) survey results regarding who they believed has the largest responsibility
The results show that consumers are considered as far less responsible for pesticides harming bees than public authorities and farmers. Farmers and public authorities are regarded as highly responsible by both consumers and farmers, but farmers considered themselves more responsible than public authorities. When asked to rank them, a larger share of the consumers placed public authorities before farmers, whereas in the farmer survey, more ranked farmers before public authorities.
These results indicate that it is difficult to determine who is considered to have more responsibility, i.e., public authorities or farmers. However, consumers are clearly considered the least responsible, especially by farmers. This finding indicates that there is limited support for the neoliberal standpoint that consumers should carry the responsibility for environmental outcomes through their purchases. The results also show that apple farmers in Norway have strong feelings of responsibility for ensuring that pesticide use does not harm bees.
Propensity of Consumers to Purchase Organic Food
The survey data were used to perform an ordinary least squares (OLS) regression analysis to study the effect of attitudes regarding trust and responsibility on the propensity for consumers to purchase organic food. Purchasing organic food is a concrete action that consumers can take to reduce pesticide use, and the aim of the study was to determine whether this action may be influenced by feelings of responsibility or, more precisely, whether attitudes regarding the responsibility of public authorities, consumers and farmers and trust in farmers and public authorities influence the inclination to purchase organic food.
In the analysis, the dependent variable was frequency of purchasing organic food or food known to have been produced without the use of chemical pesticides. The independent variables are shown in the appendix Table 2, which also provides the summary statistics.
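Schematically, and using descriptive labels rather than the authors’ actual variable coding (which is given in the appendix), the estimated model has the standard linear form

\[
\mathrm{OrganicFreq}_i = \beta_0 + \beta_1\,\mathrm{TrustAuthorities}_i + \beta_2\,\mathrm{TrustFarmers}_i + \beta_3\,\mathrm{RespConsumers}_i + \beta_4\,\mathrm{RespFarmers}_i + \beta_5\,\mathrm{RespAuthorities}_i + \gamma^{\top} X_i + \varepsilon_i,
\]

where X_i collects the remaining controls (the importance of price, taste, safety and environmentally friendly production when buying apples, plus education, gender, age, urbanisation, income and children) and \varepsilon_i is the error term.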
The mean frequency of organic purchases was quite low. Only 5% of the respondents said that they purchase organic food “very often” or “always, if available”, whereas 32% answered “never” or “seldom”. Notably, taste was the most important factor when buying apples, but safety was almost equally important. Price and environmentally friendly production methods had the same average, which was significantly lower than that of safety and taste. Table 1 shows the results of the OLS regression analysis.
The results show that the propensity to purchase organic food decreases with increased trust in public authorities and farmers, whereas it increases with the degree to which the respondents think that consumers are responsible for ensuring that pesticides do not harm bees. It is not affected by attitudes regarding the responsibility of farmers and public authorities. The frequency of organic purchases is also not significantly affected by the extent to which price, taste and safe food are important for a respondent when buying apples, but the more the respondents value that the apple was produced in an environmentally friendly way, the higher the propensity to purchase organic food, and this correlation is very strong. Respondents with a higher education level, female respondents and more urbanized respondents have a higher propensity to buy organic products. The frequency of buying organic food decreases with age but is not significantly affected by income or having children. The results indicate that buying organic food, to some extent, is a choice made by those who do not trust that public authorities and farmers are taking their responsibility seriously enough regarding environmental protection and who believe that consumers have a responsibility for environmental problems, such as the disappearance of bees.
Discussion
Farmers, consumers and public authorities all have an influence on bee disappearance caused by pesticide use, and therefore, all these groups hold a responsibility in some form. The results of this study show that both consumers and farmers feel responsible, but farmers feel much more responsible than consumers. Furthermore, both consumers and farmers ascribe strong responsibility to public authorities, and it is difficult to deduce from our study which of the two, i.e., farmers or public authorities, is regarded as the most responsible for protecting bees. However, clearly, consumers are ascribed very limited responsibility.
In her study of responsibility for safe pesticide use in developing countries, Karlsson (2007) found that there are two opposing assumptions. The first assumption is that all pesticides pose a risk, and thus, the responsibility lies with public authorities and pesticide-producing companies. The second opposing view is that all pesticides are safe if they are used as prescribed, and thus, the responsibility is on farmers. One reason why our survey results do not identify one main responsible actor could be that the respondents have different perceptions regarding the risks related to pesticides and the extent to which they are safe when used as prescribed. The fact that most farmers in our survey noted their own responsibility in the first, open question indicates that they adhere to the second view, i.e., pesticides are safe when used as prescribed. However, when public authorities were mentioned specifically, many farmers seemed to be reminded that this stakeholder also has an important responsibility.
Different perceived risks regarding pesticides can partly explain the results of the OLS regression analysis, which showed that the self-reported frequency of purchasing organic food is higher among consumers with a low trust in farmers and public authorities. This result indicates that for some consumers, the risks posed by pesticides are high, and they do not trust that farmers and public authorities are doing enough to prevent damage. These consumers ascribe responsibility to themselves as individuals and take action accordingly by purchasing food free from chemical synthetic pesticides. These consumers can be considered to be acting according to the position described by Neuteleers (2019), i.e., when the required level of justice is not realised by means of institutions, we have a duty to fill the gap through “actions in our personal lifestyle”. Our results confirm that only a small proportion of Norwegian consumers purchase organic food on a regular basis. This finding could indicate that consumer willingness to take responsibility is low, which contrasts with the neoliberal doctrine of leaving the fate of bees in the hands of market forces. The results could also be a sign that many Norwegian consumers do not assume this responsibility because their trust in institutions is high. It could be that they do not think that it is necessary to purchase organic food because the health and environmental standard of conventional food is “good enough”, which is in line with Kvakkestad et al. (2018). This finding is confirmed by the results showing that most respondents trust that Norwegian apple farmers and public authorities safeguard environmental and health outcomes.
It is beyond the scope of this study to judge whether enough is indeed being done to prevent harm to bees from the pesticides that are being used in Norway. Several studies from other countries and at the international level find that the pesticide regulation systems are flawed and that pesticides that can cause harm to humans, animals and the environment have been authorised (Drivdal & van der Sluijs, 2021; Hu, 2020; Milner & Boyd, 2017; Robinson et al., 2020). However, there are also many examples of how public authorities, at different levels, have taken measures to safeguard bees against pesticides. Pesticides that are dangerous to bees have on several occasions been banned entirely, such as the recent bans of neonicotinoids. Furthermore, the EU Directive on IPM has made it compulsory for all farmers to think more critically about their pesticide use and prioritize preventive measures. Public authorities have the power and, therefore, a responsibility to take such measures, as well as softer ones, such as providing information to farmers regarding the adverse consequences of pesticide use and alternative, less harmful methods for pest control, such as biological control measures (Kvakkestad et al., 2020).
The public authorities’ ability to act and influence explains why they are considered highly responsible for bees, but they share this responsibility with farmers. Farmers apply pesticides to their plants and make individual decisions regarding what, when, how and how much to apply. The control that public authorities can exert over farmers is limited, and farmers are entrusted to consider their own responsibility and act accordingly. Therefore, it is reassuring to find that the apple farmers who answered this survey almost unanimously identified themselves as the main party responsible for safe pesticide use in the first, open question. It remains to be determined whether the same attitudes are found among producers in other sectors and other countries.
In conclusion, ensuring that pesticides do not harm bees is primarily a duty shared by public authorities, who are responsible for ensuring safe pesticide regulations and information, and farmers, who are responsible for following labels and advice. Allowing consumers to carry full responsibility for bees will never be efficient, but some consumers may still take responsibility by purchasing pesticide-free food. One could also say that if other institutions are failing to fulfil their responsibility for bees, individuals have a moral obligation to fill the gap. As citizens, according to Rawls (1999), consumers have a duty to obey just institutions and to promote just institutions not yet established. Promoting such institutions can be achieved by voting for responsible politicians and forming part of a public opinion that signals that protecting bees is a priority, even if it implies extra costs for society. Purchasing organic food can signal the importance of pesticide reduction to producers and public authorities, but this should not take focus away from the importance of acting responsibly as citizens taking informed collective action through public authorities.
Data Availability
The data material from the consumer survey can be made available upon request. The data from the producer survey are confidential.
Hu, Z. P. (2020). What socio-economic and political factors lead to global pesticide dependence? A critical review from a social science perspective. International Journal of Environmental Research and Public Health, 17(21), 8119. https://doi.org/10.3390/ijerph17218119
| NFSA and wholesalers, preventing deviations from the regulation. Furthermore, some consumers were uncertain whether the control system really detected “rotten eggs” and what type of sanctions a farmer would face if caught misusing pesticides. Although they believed that farmers would keep their pesticide use within the rules and regulations, some consumers still thought farmers would maximise profits at the expense of the environment if they could within the regulations.
I trust that the farmer does what he should within the regulations and laws. So, if he can spray with something and it is efficient for him, I think he will do it even if it may be harmful to the environment. Male consumer (63).
One participant wondered to what extent farmers were forced to make short cuts because of time constraints; others wondered how easy it was for farmers to be well informed about all different types of pesticides and how they should be used.
All farmers interviewed expressed that they would only apply insecticides when it was strictly necessary. One of the main reasons was the fear of killing beneficial insects, which could lead to a build-up of populations of other harmful insects that cause yield damage in later years. This finding reveals an economic motivation for reduced pesticide use and one that requires knowledge regarding the effects of pesticides on insect fauna and a longer-term perspective.
When you are spraying, you think about not killing insects, sparing the bees and other beneficial insects; so, you do it late or early as a night spraying; you don’t do it in the middle of the day (…). You know that the toxin doesn’t separate between the different insects. It’s the harmful insects that you want to do something about. Farmer (male), Hardanger.
Questions regarding trust in farmers were also raised in the consumer and farmer surveys. The question asked was “To what extent do you think the following statements are correct?”, followed by three different statements. The statements and results are shown in Fig. 1.
The results show that most consumers trust that Norwegian apple farmers follow the regulations concerning pesticide use, attempt to minimize the use of chemical pesticides, and have good knowledge of pesticides. | yes |
Mammalogy | Are rats responsible for spreading bubonic plague? | yes_statement | "rats" are "responsible" for "spreading" "bubonic" "plague".. "bubonic" "plague" is "spread" by "rats". | https://www.history.com/topics/middle-ages/black-death | Black Death - Causes, Symptoms & Impact | HISTORY | The Black Death was a devastating global epidemic of bubonic plague that struck Europe and Asia in the mid-1300s. The plague arrived in Europe in October 1347, when 12 ships from the Black Sea docked at the Sicilian port of Messina. People gathered on the docks were met with a horrifying surprise: Most sailors aboard the ships were dead, and those still alive were gravely ill and covered in black boils that oozed blood and pus. Sicilian authorities hastily ordered the fleet of “death ships” out of the harbor, but it was too late: Over the next five years, the Black Death would kill more than 20 million people in Europe—almost one-third of the continent’s population.
How Did the Black Plague Start?
Even before the “death ships” pulled into port at Messina, many Europeans had heard rumors about a “Great Pestilence” that was carving a deadly path across the trade routes of the Near and Far East. Indeed, in the early 1340s, the disease had struck China, India, Persia, Syria and Egypt.
The plague is thought to have originated in Asia over 2,000 years ago and was likely spread by trading ships, though recent research has indicated the pathogen responsible for the Black Death may have existed in Europe as early as 3000 B.C.
Symptoms of the Black Plague
Europeans were scarcely equipped for the horrible reality of the Black Death. “In men and women alike,” the Italian poet Giovanni Boccaccio wrote, “at the beginning of the malady, certain swellings, either on the groin or under the armpits…waxed to the bigness of a common apple, others to the size of an egg, some more and some less, and these the vulgar named plague-boils.”
Blood and pus seeped out of these strange swellings, which were followed by a host of other unpleasant symptoms—fever, chills, vomiting, diarrhea, terrible aches and pains—and then, in short order, death.
The Bubonic Plague attacks the lymphatic system, causing swelling in the lymph nodes. If untreated, the infection can spread to the blood or lungs.
How Did the Black Death Spread?
The Black Death was terrifyingly, indiscriminately contagious: “the mere touching of the clothes,” wrote Boccaccio, “appeared to itself to communicate the malady to the toucher.” The disease was also terrifyingly efficient. People who were perfectly healthy when they went to bed at night could be dead by morning.
Did you know? Many scholars think that the nursery rhyme “Ring around the Rosy” was written about the symptoms of the Black Death.
Understanding the Black Death
Today, scientists understand that the Black Death, now known as the plague, is spread by a bacillus called Yersinia pestis. (The French biologist Alexandre Yersin discovered this germ at the end of the 19th century.)
They know that the bacillus travels from person to person through the air, as well as through the bite of infected fleas and rats. Both of these pests could be found almost everywhere in medieval Europe, but they were particularly at home aboard ships of all kinds—which is how the deadly plague made its way through one European port city after another.
Not long after it struck Messina, the Black Death spread to the port of Marseilles in France and the port of Tunis in North Africa. Then it reached Rome and Florence, two cities at the center of an elaborate web of trade routes. By the middle of 1348, the Black Death had struck Paris, Bordeaux, Lyon and London.
Today, this grim sequence of events is terrifying but comprehensible. In the middle of the 14th century, however, there seemed to be no rational explanation for it.
No one knew exactly how the Black Death was transmitted from one patient to another, and no one knew how to prevent or treat it. According to one doctor, for example, “instantaneous death occurs when the aerial spirit escaping from the eyes of the sick man strikes the healthy person standing near and looking at the sick.”
How Do You Treat the Black Death?
Physicians relied on crude and unsophisticated techniques such as bloodletting and boil-lancing (practices that were dangerous as well as unsanitary) and superstitious practices such as burning aromatic herbs and bathing in rosewater or vinegar.
Meanwhile, in a panic, healthy people did all they could to avoid the sick. Doctors refused to see patients; priests refused to administer last rites; and shopkeepers closed their stores. Many people fled the cities for the countryside, but even there they could not escape the disease: It affected cows, sheep, goats, pigs and chickens as well as people.
In fact, so many sheep died that one of the consequences of the Black Death was a European wool shortage. And many people, desperate to save themselves, even abandoned their sick and dying loved ones. “Thus doing,” Boccaccio wrote, “each thought to secure immunity for himself.”
Black Plague: God’s Punishment?
Because they did not understand the biology of the disease, many people believed that the Black Death was a kind of divine punishment—retribution for sins against God such as greed, blasphemy, heresy, fornication and worldliness.
By this logic, the only way to overcome the plague was to win God’s forgiveness. Some people believed that the way to do this was to purge their communities of heretics and other troublemakers—so, for example, many thousands of Jews were massacred in 1348 and 1349. (Thousands more fled to the sparsely populated regions of Eastern Europe, where they could be relatively safe from the rampaging mobs in the cities.)
Some people coped with the terror and uncertainty of the Black Death epidemic by lashing out at their neighbors; others coped by turning inward and fretting about the condition of their own souls.
Flagellants
Some upper-class men joined processions of flagellants that traveled from town to town and engaged in public displays of penance and punishment: They would beat themselves and one another with heavy leather straps studded with sharp pieces of metal while the townspeople looked on. For 33 1/2 days, the flagellants repeated this ritual three times a day. Then they would move on to the next town and begin the process over again.
Though the flagellant movement did provide some comfort to people who felt powerless in the face of inexplicable tragedy, it soon began to worry the Pope, whose authority the flagellants had begun to usurp. In the face of this papal resistance, the movement disintegrated.
How Did the Black Death End?
The plague never really ended, and it returned with a vengeance years later. But officials in the port city of Ragusa were able to slow its spread by keeping arriving sailors in isolation until it was clear they were not carrying the disease—an early form of social distancing that relied on isolation.
The sailors were initially held on their ships for 30 days (a trentino), a period that was later increased to 40 days, or a quarantine—the origin of the term “quarantine” and a practice still used today.
Does the Black Plague Still Exist?
The Black Death epidemic had run its course by the early 1350s, but the plague reappeared every few generations for centuries. Modern sanitation and public-health practices have greatly mitigated the impact of the disease but have not eliminated it. While antibiotics are available to treat the Black Death, according to The World Health Organization, there are still 1,000 to 3,000 cases of plague every year.
Gallery: Pandemics That Changed History
Though it had been around for ages, leprosy grew into a pandemic in Europe in the Middle Ages. A slow-developing bacterial disease that causes sores and deformities, leprosy was believed to be a punishment from God that ran in families.
The Black Death haunts the world as the worst-case scenario for the speed of a disease's spread. It was the second pandemic caused by the bubonic plague, and it ravaged Earth's population. Called the Great Mortality as it caused its devastation, it became known as the Black Death in the late 17th century.
In another devastating appearance, the bubonic plague led to the deaths of 20 percent of London's population. The worst of the outbreak tapered off in the fall of 1666, around the same time as another destructive event—the Great Fire of London.
The first of seven cholera pandemics over the next 150 years, this wave of the small intestine infection originated in Russia, where one million people died. Spreading through feces-infected water and food, the bacterium was passed along to British soldiers who brought it to India, where millions more died.
The first significant flu pandemic started in Siberia and Kazakhstan, traveled to Moscow, and made its way into Finland and then Poland, where it moved into the rest of Europe. By the end of 1890, 360,000 had died.
The avian-borne flu that resulted in 50 million deaths worldwide, the 1918 flu was first observed in Europe, the United States and parts of Asia before spreading around the world. At the time, there were no effective drugs or vaccines to treat this killer flu strain.
Starting in Hong Kong and spreading throughout China and then into the United States, the Asian flu became widespread in England where, over six months, 14,000 people died. A second wave followed in early 1958, causing about 1.1 million deaths globally, with 116,000 deaths in the United States alone.
First identified in 1981, AIDS destroys a person's immune system, resulting in eventual death by diseases that the body would usually fight off. AIDS was first observed in American gay communities but is believed to have developed from a chimpanzee virus from West Africa in the 1920s. Treatments have been developed to slow the progress of the disease, but 35 million people have died of AIDS since its discovery.
First identified in 2003, Severe Acute Respiratory Syndrome is believed to have started with bats, spread to cats and then to humans in China, followed by 26 other countries, infecting 8,096 people, with 774 deaths.
COVID-19 is caused by a novel coronavirus, the family of viruses that includes the common cold and SARS. The first reported case in China appeared in November 2019, in the Hubei Province. Without a vaccine available, the virus has spread to more than 163 countries. By March 27, 2020, nearly 24,000 people had died.
| The plague is thought to have originated in Asia over 2,000 years ago and was likely spread by trading ships, though recent research has indicated the pathogen responsible for the Black Death may have existed in Europe as early as 3000 B.C.
Symptoms of the Black Plague
Europeans were scarcely equipped for the horrible reality of the Black Death. “In men and women alike,” the Italian poet Giovanni Boccaccio wrote, “at the beginning of the malady, certain swellings, either on the groin or under the armpits…waxed to the bigness of a common apple, others to the size of an egg, some more and some less, and these the vulgar named plague-boils.”
Blood and pus seeped out of these strange swellings, which were followed by a host of other unpleasant symptoms—fever, chills, vomiting, diarrhea, terrible aches and pains—and then, in short order, death.
The Bubonic Plague attacks the lymphatic system, causing swelling in the lymph nodes. If untreated, the infection can spread to the blood or lungs.
How Did the Black Death Spread?
The Black Death was terrifyingly, indiscriminately contagious: “the mere touching of the clothes,” wrote Boccaccio, “appeared to itself to communicate the malady to the toucher.” The disease was also terrifyingly efficient. People who were perfectly healthy when they went to bed at night could be dead by morning.
Did you know? Many scholars think that the nursery rhyme “Ring around the Rosy” was written about the symptoms of the Black Death.
Understanding the Black Death
Today, scientists understand that the Black Death, now known as the plague, is spread by a bacillus called Yersinia pestis. (The French biologist Alexandre Yersin discovered this germ at the end of the 19th century.)
They know that the bacillus travels from person to person through the air, as well as through the bite of infected fleas and rats. | yes |
Mammalogy | Are rats responsible for spreading bubonic plague? | yes_statement | "rats" are "responsible" for "spreading" "bubonic" "plague".. "bubonic" "plague" is "spread" by "rats". | https://www.nationalgeographic.com/science/article/the-plague | Plague (Black Death) bacterial infection information and facts | Hell on Earth, the nightmare depicted by Flemish painter Pieter Bruegel in his mid-16th-century "The Triumph of Death" reflects the social upheaval and terror that followed the plague that devastated medieval Europe. Thought by most to be a scourge of the past, the bacteria of the plague still appears from time to time and has even been researched as a biological weapon by some countries.
Pieter Bruegel's "The Triumph of Death"
Plague was one of history’s deadliest diseases—then we found a cure
Known as the Black Death, the much feared disease spread quickly for centuries, killing millions. The bacterial infection still occurs but can be treated with antibiotics.
By Jenny Howard
Published July 6, 2020
Plague is one of the deadliest diseases in human history, second only to smallpox. A bacterial infection found mainly in rodents and associated fleas, plague readily leaps to humans in close contact. Plague outbreaks are the most notorious epidemics in history, inciting fears of plague’s use as a biological weapon.
Today, plague cases still pop up sporadically around the world—including in the United States and China, where a suspected case was recently reported in the Inner Mongolia region. But the disease is no longer as deadly, because it can be treated with antibiotics when they are available.
Here’s what you need to know about the plague, including how it spreads, the difference between bubonic and pneumonic plague, the most infamous plague pandemics in history, and why it’s not all that unusual to see modern cases of the disease.
Stages of plague
Plague 101
What is plague? How many people died from the Black Death and the other plague pandemics? Learn about the bacterium behind the plague disease, how factors like trade and urbanization caused it to spread to every continent except Antarctica, and how three devastating pandemics helped shape modern medicine.
For hundreds of years, what caused plague outbreaks remained mysterious, and shrouded in superstitions. But keen observations and advances in microscopes eventually helped unveil the true culprit. In 1894, Alexandre Yersin discovered the bacterium responsible for causing plague: Yersinia pestis.
Y. pestis is an extraordinarily virulent, rod-shaped bacterium. Y. pestis disables the immune system of its host by injecting toxins into defense cells, such as macrophages, that are tasked with detecting bacterial infections. Once these cells are knocked out, the bacteria can multiply unhindered.
Many small mammals act as hosts to the bacteria, including rats, mice, chipmunks, prairie dogs, rabbits, and squirrels. During an enzootic cycle, Y. pestis can circulate at low rates within populations of rodents, mostly undetected because it doesn’t produce an outbreak. When the bacteria pass to other species, during an epizootic cycle, humans face a greater risk for becoming infected with plague bacteria.
Rats have long been thought to be the main vector of plague outbreaks, because of their intimate connection with humans in urban areas. Scientists have more recently discovered that a flea that lives on rats, Xenopsylla cheopis, primarily causes human cases of plague. When rodents die from the plague, fleas jump to a new host, biting them and transmitting Y. pestis. Transmission also occurs by handling tissue or blood from a plague-infected animal, or inhalation of infected droplets.
Bubonic plague, the disease's most common form, refers to telltale buboes—painfully swollen lymph nodes—that appear around the groin, armpit, or neck. The skin sores become black, which gave the disease its nickname during pandemics: the “Black Death.” Initial symptoms of this early stage include vomiting, nausea, and fever.
Pneumonic plague, the most infectious type, is an advanced stage of plague that moves into the lungs. During this stage, the disease is passed directly, person to person, through airborne particles coughed from an infected person’s lungs.
Infamous plagues
Three particularly well-known pandemics occurred before the cause of plague was discovered. The first well-documented crisis was the Plague of Justinian, which began in 542 A.D. Named after the Byzantine emperor Justinian I, the pandemic killed up to 10,000 people a day in Constantinople (modern-day Istanbul, Turkey), according to ancient historians. Modern estimates indicate half of Europe's population—almost 100 million deaths—was wiped out before the plague subsided in the 700s.
Arguably the most infamous plague outbreak was the so-called Black Death, a multi-century pandemic that swept through Asia and Europe. It is believed to have started in China in 1334, spreading along trade routes and reaching Europe via Sicilian ports in the late 1340s. The plague killed an estimated 25 million people, almost a third of the continent's population. The Black Death lingered on for centuries, particularly in cities. Outbreaks included the Great Plague of London (1665-66), in which 70,000 residents died.
The cause of plague wasn't discovered until the most recent global outbreak, which started in China in 1860 and didn't officially end until 1959. The pandemic caused roughly 10 million deaths. The plague was brought to North America in the early 1900s by ships, and thereafter spread to small mammals throughout the United States.
The United States, China, India, Vietnam, and Mongolia are among the other countries that have had confirmed human plague cases in recent years. Within the U.S., on average seven human cases of plague appear each year, emerging primarily in California and the Southwest.
Today, most people survive plague with rapid diagnosis and antibiotic treatment. Good sanitation practices and pest control minimize contact with infected fleas and rodents to help prevent plague pandemics.
Plague is classified as a Category A pathogen, because it readily passes between people and could result in high mortality rates if untreated. This classification has helped stoke fears that Y. pestis could be used as a biological weapon if distributed in aerosol form. As a small airborne particle it would cause pneumonic plague, the most lethal and contagious form. | Stages of plague
4:01
Plague 101
What is plague? How many people died from the Black Death and the other plague pandemics? Learn about the bacterium behind the plague disease, how factors like trade and urbanization caused it to spread to every continent except Antarctica, and how three devastating pandemics helped shape modern medicine.
For hundreds of years, what caused plague outbreaks remained mysterious, and shrouded in superstitions. But keen observations and advances in microscopes eventually helped unveil the true culprit. In 1894, Alexandre Yersin discovered the bacterium responsible for causing plague: Yersinia pestis.
Y. pestis is an extraordinarily virulent, rod-shaped bacterium. Y. pestis disables the immune system of its host by injecting toxins into defense cells, such as macrophages, that are tasked with detecting bacterial infections. Once these cells are knocked out, the bacteria can multiply unhindered.
Many small mammals act as hosts to the bacteria, including rats, mice, chipmunks, prairie dogs, rabbits, and squirrels. During an enzootic cycle, Y. pestis can circulate at low rates within populations of rodents, mostly undetected because it doesn’t produce an outbreak. When the bacteria pass to other species, during an epizootic cycle, humans face a greater risk for becoming infected with plague bacteria.
Rats have long been thought to be the main vector of plague outbreaks, because of their intimate connection with humans in urban areas. Scientists have more recently discovered that a flea that lives on rats, Xenopsylla cheopis,primarily causes human cases of plague. When rodents die from the plague, fleas jump to a new host, biting them and transmitting Y. pestis. Transmission also occurs by handling tissue or blood from a plague-infected animal, or inhalation of infected droplets.
| yes |
Mammalogy | Are rats responsible for spreading bubonic plague? | yes_statement | "rats" are "responsible" for "spreading" "bubonic" "plague".. "bubonic" "plague" is "spread" by "rats". | https://www.theguardian.com/world/shortcuts/2015/feb/24/dirty-rat-giant-gerbils-responsible-black-death | You dirty rat! Turns out giant gerbils were responsible for the Black ... | You dirty rat! Turns out giant gerbils were responsible for the Black Death
For years, black rats have been blamed for spreading bubonic plague, but now scientists in Norway believe it was giant gerbils
Tue 24 Feb 2015 08.00 EST. Last modified on Tue 19 Jun 2018 07.20 EDT
Name: Variable.
Age: Immaterial. Just keep replacing them until your child is old enough to contemplate pet-death with equanimity.
Appearance: Happily, interchangeable.
Oh, gerbils! Cute little snuffly things, racing around their wheels and digging through their sawdust and lapping at their little bottles! So sweet, and so much less of an infinite reproach to existence than goldfish! Yes, until they kill you.
I’m sorry, what? Until they kill you.
I think you might be a bit confused. Are you thinking of lions? If you are, don’t worry. Gerbils are a lot smaller and a lot less fierce. You can generally placate them with a sunflower seed. Or is that just hamsters? No, they’re vectors for disease. Including the Black Death.
Oh, I see. You’re thinking of rats. Black rats spread the Black Death. Well, their fleas did. I read it in Horrible Histories. Wrong, they reckon.
Who reckons? Scientists in Oslo who have just published a study in Proceedings of the National Academy of Sciences purporting to show that while there is no historical correlation between good breeding conditions for rats and the occurrences of plague in the Middle East and Europe, there is between the kind of weather that makes for frisky giant gerbils and millions of bubo-strewn patients all along the Silk Road routes travelled shortly thereafter.
Not my darling Nibbles! He’s no harbinger of doom! Wait – did you say giant gerbils? Yes, Rhombomys opimus, the great gerbil, found throughout the arid, sandy landscapes of central Asia and a known carrier of the plague pathogen Yersinia pestis.
And how great are these great gerbils? Not great at all for us – did you not hear what I just said about the plague pathogen Yersinia pestis?
No, I mean, how big are they? Oh. Up to 8in – or double that if you include the tail.
Wow. What was attractive at one size becomes not at all at another. I know. Like penises.
Mmm. So, are we all safe? As long as one’s charming childhood and/or classroom pet doesn’t look like something from a James Herbert novel? Yes. Nibbles remains a Meriones unguiculatus; the greatest threat is him escaping and gnawing through every electrical wire in the house.
Do say: “Time for your exercise wheel, little one!”
Don’t say: “Could I have a packet of sunflower seeds and some aspirin? I’m not feeling so hot.” | You dirty rat! Turns out giant gerbils were responsible for the Black Death
For years, black rats have been blamed for spreading bubonic plague, but now scientists in Norway believe it was giant gerbils
Tue 24 Feb 2015 08.00 ESTLast modified on Tue 19 Jun 2018 07.20 EDT
Name: Variable.
Age: Immaterial. Just keep replacing them until your child is old enough to contemplate pet-death with equanimity.
Appearance: Happily, interchangeable.
Oh, gerbils! Cute little snuffly things, racing around their wheels and digging through their sawdust and lapping at their little bottles! So sweet, and so much less of an infinite reproach to existence than goldfish! Yes, until they kill you.
I’m sorry, what? Until they kill you.
I think you might be a bit confused. Are you thinking of lions? If you are, don’t worry. Gerbils are a lot smaller and a lot less fierce. You can generally placate them with a sunflower seed. Or is that just hamsters? No, they’re vectors for disease. Including the Black Death.
Oh, I see. You’re thinking of rats. Black rats spread the Black Death. Well, their fleas did. I read it in Horrible Histories. Wrong, they reckon.
Who reckons? Scientists in Oslo who have just published a study in Proceedings of the National Academy of Sciences purporting to show that while there is no historical correlation between good breeding conditions for rats and the occurrences of plague in the Middle East and Europe, there is between the kind of weather that makes for frisky giant gerbils and millions of bubo-strewn patients all along the Silk Road routes travelled shortly thereafter.
Not my darling Nibbles! He’s no harbinger of doom! Wait – did you say giant gerbils? | no |
Mammalogy | Are rats responsible for spreading bubonic plague? | yes_statement | "rats" are "responsible" for "spreading" "bubonic" "plague".. "bubonic" "plague" is "spread" by "rats". | https://en.wikipedia.org/wiki/Bubonic_plague | Bubonic plague - Wikipedia | The three types of plague are the result of the route of infection: bubonic plague, septicemic plague, and pneumonic plague.[1] Bubonic plague is mainly spread by infected fleas from small animals.[1] It may also result from exposure to the body fluids from a dead plague-infected animal.[6] Mammals such as rabbits, hares, and some cat species are susceptible to bubonic plague, and typically die upon contraction.[7] In the bubonic form of plague, the bacteria enter through the skin through a flea bite and travel via the lymphatic vessels to a lymph node, causing it to swell.[1] Diagnosis is made by finding the bacteria in the blood, sputum, or fluid from lymph nodes.[1]
Without treatment, plague results in the death of 30% to 90% of those infected.[1][4] Death, if it occurs, is typically within 10 days.[9] With treatment, the risk of death is around 10%.[4] Globally between 2010 and 2015 there were 3,248 documented cases, which resulted in 584 deaths.[1] The countries with the greatest number of cases are the Democratic Republic of the Congo, Madagascar, and Peru.[1]
Bubonic plague is an infection of the lymphatic system, usually resulting from the bite of an infected flea, Xenopsylla cheopis (the Oriental rat flea).[14] Several flea species carried the bubonic plague, such as Pulex irritans (the human flea), Xenopsylla cheopis, and Ceratophyllus fasciatus.[14]Xenopsylla cheopis was the most effective flea species for transmittal.[14] In very rare circumstances, as in septicemic plague, the disease can be transmitted by direct contact with infected tissue or exposure to the cough of another human.
The flea is parasitic on house and field rats and seeks out other prey when its rodent host dies. Rats were an amplifying factor to bubonic plague due to their common association with humans as well as the nature of their blood.[15] The rat's blood allows the rat to withstand a major concentration of the plague.[15] The bacteria form aggregates in the gut of infected fleas, and this results in the flea regurgitating ingested blood, which is now infected, into the bite site of a rodent or human host. Once established, the bacteria rapidly spread to the lymph nodes of the host and multiply. The fleas that transmit the disease only directly infect humans when the rat population in the area is wiped out from a mass infection.[16] Furthermore, in areas with a large population of rats, the animals can harbor low levels of the plague infection without causing human outbreaks.[15] With no new rat inputs being added to the population from other areas, the infection only spread to humans in very rare cases of overcrowding.[15]
Signs and symptoms
Necrosis of the nose, the lips, and the fingers and residual bruising over both forearms in a person recovering from bubonic plague that disseminated to the blood and the lungs. At one time, the person's entire body was bruised.
After being transmitted via the bite of an infected flea, the Y. pestis bacteria become localized in an inflamed lymph node, where they begin to colonize and reproduce. Infected lymph nodes develop hemorrhages, which result in the death of tissue.[17] Y. pestis bacilli can resist phagocytosis and even reproduce inside phagocytes and kill them. As the disease progresses, the lymph nodes can hemorrhage and become swollen and necrotic. Bubonic plague can progress to lethal septicemic plague in some cases. The plague is also known to spread to the lungs and become the disease known as the pneumonic plague. Symptoms appear 2–7 days after getting bitten and they include:[14]
Smooth, painful lymph gland swelling called a bubo, commonly found in the groin, but may occur in the armpits or neck, most often near the site of the initial infection (bite or scratch)
Pain may occur in the area before the swelling appears
Gangrene of the extremities such as toes, fingers, lips, and tip of the nose.[18]
The best-known symptom of bubonic plague is one or more infected, enlarged, and painful lymph nodes, known as buboes. Buboes associated with the bubonic plague are commonly found in the armpits, upper femoral, groin, and neck region. Symptoms include heavy breathing, continuous vomiting of blood (hematemesis), aching limbs, coughing, and extreme pain caused by the decay or decomposition of the skin while the person is still alive. Additional symptoms include extreme fatigue, gastrointestinal problems, spleen inflammation, lenticulae (black dots scattered throughout the body), delirium, coma, organ failure, and death.[19] Organ failure is a result of the bacteria infecting organs through the bloodstream.[14] Other forms of the disease include septicemic plague and pneumonic plague in which the bacterium reproduces in the person's blood and lungs respectively.[20]
Diagnosis
Gram-negative Yersinia pestis bacteria. The culture was grown over a 72-hour time period
Laboratory testing is required in order to diagnose and confirm plague. Ideally, confirmation is through the identification of Y. pestis culture from a patient sample. Confirmation of infection can be done by examining serum taken during the early and late stages of infection. To quickly screen for the Y. pestis antigen in patients, rapid dipstick tests have been developed for field use.[21]
Buboes: Swollen lymph nodes (buboes) characteristic of bubonic plague, a fluid sample can be taken from them with a needle.
Blood
Lungs
Prevention
Bubonic plague outbreaks are controlled by pest control and modern sanitation techniques. This disease uses fleas commonly found on rats as a vector to jump from animals to humans. The mortality rate hits its peak during the hot and humid months of June, July, and August.[23] Furthermore, the plague most affected the poor, owing to greater exposure, poorer sanitation and immune systems weakened by poor diet.[23] The successful control of rat populations in dense urban areas is essential to outbreak prevention. One example is the use of a machine called the Sulfurozador, used to deliver sulphur dioxide to eradicate the pest that spread the bubonic plague, in Buenos Aires, Argentina during the early 20th century.[24] Targeted chemoprophylaxis, sanitation, and vector control also played a role in controlling the 2003 Oran outbreak of the bubonic plague.[25] Another means of prevention in large European cities was a city-wide quarantine, not only to limit interaction with people who were infected but also to limit interaction with infected rats.[26]
People potentially infected with the plague need immediate treatment and should be given antibiotics within 24 hours of the first symptoms to prevent death. Other treatments include oxygen, intravenous fluids, and respiratory support. People who have had contact with anyone infected by pneumonic plague are given prophylactic antibiotics.[28] Using the broad-based antibiotic streptomycin has proven to be dramatically successful against the bubonic plague within 12 hours of infection.[29]
For over a decade since 2001, Zambia, India, Malawi, Algeria, China, Peru, and the Democratic Republic of the Congo had the most plague cases with over 1,100 cases in the Democratic Republic of the Congo alone. From 1,000 to 2,000 cases are conservatively reported per year to the WHO.[30] From 2012 to 2017, reflecting political unrest and poor hygienic conditions, Madagascar began to host regular epidemics.[30]
Between 1900 and 2015, the United States had 1,036 human plague cases with an average of 9 cases per year. In 2015, 16 people in the Western United States developed plague, including 2 cases in Yosemite National Park.[31] These US cases usually occur in rural northern New Mexico, northern Arizona, southern Colorado, California, southern Oregon, and far western Nevada.[32]
In November 2017, the Madagascar Ministry of Health reported an outbreak to the WHO (World Health Organization) with more cases and deaths than any recent outbreak in the country. Unusually, most of the cases were pneumonic rather than bubonic.[33]
In June 2018, a child was confirmed to be the first person in Idaho to be infected by bubonic plague in nearly 30 years.[34]
A couple died in May 2019, in Mongolia, while hunting marmots.[35] Another two people in the province of Inner Mongolia, China were treated in November 2019 for the disease.[36]
Spread of bubonic plague through time in Europe (2nd pandemic)
In July 2020, in Bayannur, Inner Mongolia of China, a human case of bubonic plague was reported. Officials responded by activating a city-wide plague-prevention system for the remainder of the year.[37] Also in July 2020, in Mongolia, a teenager died from bubonic plague after consuming infected marmot meat.[38]
History
Yersinia pestis has been discovered in archaeological finds from the Late Bronze Age (~3800 BP).[39] The bacteria is identified by ancient DNA in human teeth from Asia and Europe dating from 2,800 to 5,000 years ago.[40] Some authors have suggested that the plague was responsible for the Neolithic decline.[41]
First pandemic
The first recorded epidemic affected the Sasanian Empire and their arch-rivals, the Eastern Roman Empire (Byzantine Empire) and was named the Plague of Justinian (541–549 AD) after emperor Justinian I, who was infected but survived through extensive treatment.[42][43] The pandemic resulted in the deaths of an estimated 25 million (6th century outbreak) to 50 million people (two centuries of recurrence).[44][45] The historian Procopius wrote, in Volume II of History of the Wars, of his personal encounter with the plague and the effect it had on the rising empire.
In the spring of 542, the plague arrived in Constantinople, working its way from port city to port city and spreading around the Mediterranean Sea, later migrating inland eastward into Asia Minor and west into Greece and Italy. The Plague of Justinian is said to have been "completed" in the middle of the 8th century.[15] Because the infectious disease spread inland by the transferring of merchandise through Justinian's efforts in acquiring luxurious goods of the time and exporting supplies, his capital became the leading exporter of the bubonic plague. Procopius, in his work Secret History, declared that Justinian was a demon of an emperor who either created the plague himself or was being punished for his sinfulness.[45]
Second pandemic
Citizens of Tournai bury plague victims. Miniature from The Chronicles of Gilles Li Muisis (1272–1352). Bibliothèque royale de Belgique, MS 13076–77, f. 24v.
People who died of bubonic plague in a mass grave from 1720 to 1721 in Martigues, France
In the Late Middle Ages Europe experienced the deadliest disease outbreak in history when the Black Death, the infamous pandemic of bubonic plague, hit in 1347, killing one-third of the European human population. Some historians believe that society subsequently became more violent as the mass mortality rate cheapened life and thus increased warfare, crime, popular revolt, waves of flagellants, and persecution.[46] The Black Death originated in Central Asia and spread from Italy and then throughout other European countries. Arab historians Ibn Al-Wardni and Almaqrizi believed the Black Death originated in Mongolia. Chinese records also show a huge outbreak in Mongolia in the early 1330s.[47]
In 2022, researchers presented evidence that the plague originated near Lake Issyk-Kul in Kyrgyzstan.[48] The Mongols had cut the trade route (the Silk Road) between China and Europe, which halted the spread of the Black Death from eastern Russia to Western Europe. The European epidemic may have begun with the siege of Caffa, an attack that Mongols launched on the Italian merchants' last trading station in the region, Caffa in the Crimea.[29]
In late 1346, plague broke out among the besiegers and from them penetrated the town. The Mongol forces catapulted plague-infested corpses into Caffa as a form of attack, one of the first known instances of biological warfare.[49] When spring arrived, the Italian merchants fled on their ships, unknowingly carrying the Black Death. Carried by the fleas on rats, the plague initially spread to humans near the Black Sea and then outwards to the rest of Europe as a result of people fleeing from one area to another. Rats migrated with humans, traveling among grain bags, clothing, ships, wagons, and grain husks.[19] Continued research indicates that black rats, those that primarily transmitted the disease, prefer grain as a primary meal.[15] Due to this, the major bulk grain fleets that transported major city's food shipments from Africa and Alexandria to heavily populated areas, and were then unloaded by hand, played a role in increasing the transmission effectiveness of the plague.[15]
Third pandemic
The plague resurfaced for a third time in the mid-19th century; this is also known as "the modern pandemic". Like the two previous outbreaks, this one also originated in Eastern Asia, most likely in Yunnan, a province of China, where there are several natural plague foci.[50] The initial outbreaks occurred in the second half of the 18th century.[51][52] The disease remained localized in Southwest China for several years before spreading. In the city of Canton, beginning in January 1894, the disease had killed 80,000 people by June. Daily water-traffic with the nearby city of Hong Kong rapidly spread the plague there, killing over 2,400 within two months during the 1894 Hong Kong plague.[53]
The third pandemic spread the disease to port cities throughout the world in the second half of the 19th century and the early 20th century via shipping routes.[54] The plague infected people in Chinatown in San Francisco from 1900 to 1904,[55] and in the nearby locales of Oakland and the East Bay again from 1907 to 1909.[56] During the former outbreak, in 1902, authorities made permanent the Chinese Exclusion Act, a law originally signed into existence by President Chester A. Arthur in 1882. The Act was supposed to last for 10 years, but was renewed in 1892 with the Geary Act, then followed by the 1902 decision. The last major outbreak in the United States occurred in Los Angeles in 1924,[57] though the disease is still present in wild rodents and can be passed to humans that come in contact with them.[32] According to the World Health Organization, the pandemic was considered active until 1959, when worldwide casualties dropped to 200 per year. In 1994, a plague outbreak in five Indian states caused an estimated 700 infections (including 52 deaths) and triggered a large migration of Indians within India as they tried to avoid the disease.[citation needed]
It was during the 1894 Hong Kong plague outbreak that Alexandre Yersin isolated the bacterium responsible (Yersinia pestis),[58] a few days after Japanese bacteriologist Kitasato Shibasaburō had isolated it.[59][60] However, the latter's description was imprecise and also expressed doubts of its relation to the disease, and thus the bacterium is today only named after Yersin.[60][61]
The scale of death and social upheaval associated with plague outbreaks has made the topic prominent in many historical and fictional accounts since the disease was first recognized. The Black Death in particular is described and referenced in numerous contemporary sources, some of which, including works by Chaucer, Boccaccio, and Petrarch, are considered part of the Western canon. The Decameron, by Boccaccio, is notable for its use of a frame story involving individuals who have fled Florence for a secluded villa to escape the Black Death. First-person, sometimes sensationalized or fictionalized, accounts of living through plague years have also been popular across centuries and cultures. For example, Samuel Pepys's diary makes several references to his first-hand experiences of the Great Plague of London in 1665–6.[62]
Later works, such as Albert Camus's novel The Plague or Ingmar Bergman's film The Seventh Seal have used bubonic plague in settings, such as quarantined cities in either medieval or modern times, as a backdrop to explore a variety of concepts. Common themes include the breakdown of society, institutions, and individuals during the plague, the cultural and psychological existential confrontation with mortality, and the allegorical use of the plague about contemporary moral or spiritual questions.[citation needed]
Biological warfare
Some of the earliest instances of biological warfare were said to have been products of the plague, as armies of the 14th century were recorded catapulting diseased corpses over the walls of towns and villages to spread the pestilence. This was done by Jani Beg when he attacked the city of Kaffa in 1343.[63]
Continued research
Substantial research has been done regarding the origin of the plague and how it traveled through the continent.[15] Mitochondrial DNA of modern rats in Western Europe indicated that these rats came from two different areas, one being Africa and the other unclear.[15] The research regarding this pandemic has greatly increased with technology.[15] Through archaeo-molecular investigation, researchers have discovered the DNA of plague bacillus in the dental core of those that fell ill to the plague.[15] Analysis of teeth of the deceased allows researchers to further understand both the demographics and mortuary patterns of the disease. For example, in 2013 in England, archeologists uncovered a burial mound to reveal 17 bodies, mainly children, who had died of the Bubonic plague. They analyzed these burial remains using radiocarbon dating to determine they were from the 1530s, and dental core analysis revealed the presence of Yersinia pestis.[65]
Other rat-related evidence still being researched includes gnaw marks on bones, predator pellets and rat remains preserved in situ.[15] Such evidence allows researchers to trace early rat remains, track the paths they traveled and, in turn, connect the impact of the bubonic plague to specific types of rats.[15] Burial sites, known as plague pits, offer archaeologists an opportunity to study the remains of people who died from the plague.[66]
Another research study indicates that these separate pandemics were all interconnected.[16] A current computer model indicates that the disease did not go away between these pandemics.[16] Rather, it lurked within the rat population for years without causing human epidemics.[16]
^Little LK (2007). "Life and Afterlife of the First Plague Pandemic.". In Little LK (ed.). Plague and the End of Antiquity: The Pandemic of 541–750. Cambridge University Press. pp. 8–15. ISBN978-0-521-84639-4.
^McCormick M (2007). "Toward a Molecular History of the Justinian Pandemic". In Little LK (ed.). Plague and the End of Antiquity: The Pandemic of 541–750. Cambridge University Press. pp. 290–312. ISBN978-0-521-84639-4. | Globally between 2010 and 2015 there were 3,248 documented cases, which resulted in 584 deaths.[1] The countries with the greatest number of cases are the Democratic Republic of the Congo, Madagascar, and Peru.[1]
Bubonic plague is an infection of the lymphatic system, usually resulting from the bite of an infected flea, Xenopsylla cheopis (the Oriental rat flea).[14] Several flea species carried the bubonic plague, such as Pulex irritans (the human flea), Xenopsylla cheopis, and Ceratophyllus fasciatus.[14]Xenopsylla cheopis was the most effective flea species for transmittal.[14] In very rare circumstances, as in septicemic plague, the disease can be transmitted by direct contact with infected tissue or exposure to the cough of another human.
The flea is parasitic on house and field rats and seeks out other prey when its rodent host dies. Rats were an amplifying factor to bubonic plague due to their common association with humans as well as the nature of their blood.[15] The rat's blood allows the rat to withstand a major concentration of the plague.[15] The bacteria form aggregates in the gut of infected fleas, and this results in the flea regurgitating ingested blood, which is now infected, into the bite site of a rodent or human host. Once established, the bacteria rapidly spread to the lymph nodes of the host and multiply. The fleas that transmit the disease only directly infect humans when the rat population in the area is wiped out from a mass infection.[16] Furthermore, in areas with a large population of rats, the animals can harbor low levels of the plague infection without causing human outbreaks.[15] | yes |
Mammalogy | Are rats responsible for spreading bubonic plague? | yes_statement | "rats" are "responsible" for "spreading" "bubonic" "plague".. "bubonic" "plague" is "spread" by "rats". | https://www.snexplores.org/article/dont-blame-rats-spreading-black-death | Don't blame the rats for spreading the Black Death | Don’t blame the rats for spreading the Black Death
People — not rodents — may have spread the most famous plague in history
Towns suffering heavily from the Black Death in the 1300s often hired a plague doctor, illustrated here, to deal with their legions of sick and dying people. Such physicians often wore a beak-like mask filled with scented materials to cope with the smell of death all around.
The Black Death was one of the worst disease outbreaks in human history. This bacterial disease swept across Europe from 1346 to 1353, killing millions. For hundreds of years afterward, this plague returned. Each time, it risked wiping out families and towns. Many people thought rats were to blame. After all, their fleas can harbor the plague microbes. But a new study suggests researchers have given those rats too much blame. Human fleas, not rat fleas, may be most to blame for the Black Death.
The Black Death was an especially extreme outbreak of bubonic plague.
Bacteria known as Yersinia pestis cause this disease. When these bacteria are not infecting people, they hang out in rodents, such as rats, prairie dogs and ground squirrels. Many rodents can become infected, explains Katharine Dean. She studies ecology — or how organisms relate to one another — at the University of Oslo in Norway.
The plague bacterium “persists mostly because the rodents don’t get sick,” she explains. These animals can then form a reservoir for the plague. They serve as hosts in which these germs can survive.
Later, when fleas bite those rodents, they slurp up the germs. These fleas then spread those bacteria when they bite the next critter on their menu. Often, that next entrée is another rodent. But sometimes, it’s a person. “Plague is not picky,” notes Dean. “It’s amazing that it can live with so many hosts and in different places.”
People can become infected with the plague in three different ways. They can be bitten by a rat flea that’s carrying plague. They can be bitten by a human flea carrying the plague. Or they can catch it from another person. (Plague can spread from person to person through an infected individual’s cough or vomit.) Scientists have been trying to figure out, though, which route was most responsible for the Black Death.
Flea vs. flea
The human flea Pulex irritans (top) prefers to bite people and thrives where they don’t bathe or wash their clothes. The rat flea Xenopsylla cheopis (bottom) prefers to bite rats but will dine on human blood if people are around. Both species can carry plague.Katja ZAM/Wikimedia Commons, CDC
The plague may not be a picky disease, but fleas can be picky eaters. Different species of these parasites are adapted to coexist with different animal hosts. People have their own flea: Pulex irritans. It’s an ectoparasite, meaning that it lives outside its host. People often have to deal with another ectoparasite, as well, a species of louse.
The black rats that lived in Europe during the Middle Ages have their own species of flea. It’s called Xenopsylla cheopis. (Another flea species targets the brown rat, which now dominates in Europe.) All these fleas and the louse can carry plague.
Rat fleas prefer to bite rats. But they won’t turn down a human meal if it’s closer. Ever since scientists proved that rat fleas could transmit plague, they assumed those fleas were behind the Black Death. Rat fleas bit people, and people got the plague.
Except that there has been growing evidence that black rats don’t spread plague fast enough to account for how many people died in the Black Death. For one, the fleas found on European black rats don’t like to bite people much.
If scientists needed another explanation, Dean and her colleagues had a candidate: human parasites.
Ancient manuscripts and modern computers
Dean’s team went digging for death records. “We were at the library a lot,” she says. The researchers looked through old books for records of how many people died of plague per day or per week. The records often were quite old and hard to read. “A lot of the records are in Spanish or Italian or Norwegian or Swedish,” Dean notes. “We were so lucky. Our group has so many people that speak so many different languages.”
The team calculated plague death rates from the 1300s to the 1800s for nine cities in Europe and Russia. They graphed the death rates in each city over time. Then the scientists created computer models of the three ways plague can spread: person to person via human fleas and lice, rat to person via rat fleas, or person to person through the air (coughing). Each model predicted what the deaths from that method of spread would look like. Airborne person-to-person spread might trigger a very quick spike in deaths that then falls off quickly. Rat flea-based plague might lead to fewer deaths, but those deaths might occur over a long time. Death rates from human flea- and louse-based plague would fall somewhere in between.
These skeletons were found in a mass grave in France. They come from an outbreak of plague between 1720 and 1721.S. Tzortzis/Wikimedia Commons
Dean and her colleagues compared their model results to the patterns of real deaths. The model that assumed the disease was spread by human fleas and lice was the winner. It most closely matched the patterns seen in the recorded death rates. The scientists published their findings January 16 in the Proceedings of the National Academy of Sciences.
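The comparison step can be pictured with a small toy calculation. The sketch below is not the researchers' code, and every number in it is invented; it only illustrates how predicted mortality curves from competing transmission models can be scored against an observed curve, with the closest match "winning" (the published analysis was far more rigorous than this).

```python
import numpy as np

# Hypothetical weekly deaths predicted by each candidate transmission model
predictions = {
    "human fleas and lice": np.array([2, 8, 25, 60, 95, 110, 90, 60, 35, 18, 8, 3]),
    "rat fleas":            np.array([1, 3, 8, 15, 25, 35, 40, 42, 40, 36, 30, 25]),
    "airborne (coughing)":  np.array([5, 40, 130, 160, 90, 30, 10, 3, 1, 0, 0, 0]),
}

# Hypothetical "observed" weekly burial records for one city
observed = np.array([3, 10, 28, 55, 90, 105, 95, 65, 30, 15, 9, 4])

# Score each model by its summed squared error; the smallest score is the closest match
scores = {name: float(np.sum((pred - observed) ** 2)) for name, pred in predictions.items()}
best_fit = min(scores, key=scores.get)
print(scores)
print("Best-matching model:", best_fit)
```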
This study doesn't exonerate rats. Plague is still around, hiding out in rodents. It probably spread from rats to human fleas and lice. From there, it sometimes prompted human outbreaks. Bubonic plague still emerges. In 1994, for example, rats and their fleas spread plague through India, sickening almost 700 people.
Rats still spread a lot of plague, Dean explains. “Just probably not the Black Death. I feel more like a champion for the human ectoparasites,” she says. “They did a good job.”
Not a total surprise
Scientists have suspected that rat fleas might not have played a big role in the Black Death, says Michael Antolin. He is a biologist at Colorado State University in Fort Collins. “It’s nice to see a model that shows [it could happen].”
Studying illnesses of the past is important for the future, Antolin notes. Those long-ago outbreaks can teach a lot about how modern diseases might spread and kill. “What we’re looking for are the conditions that allow epidemics or pandemics to occur,” he says. “What can we learn? Can we predict the next big outbreak?”
Even if rats played a role in the Black Death, they wouldn’t have been the biggest factor, Antolin explains. Instead, environmental conditions that allowed rats, fleas and lice to spend so much time around people would have played a larger role.
Until modern times, he notes, people were gross. They didn’t wash often and there were no modern sewers. Not only that, rats and mice could thrive in the straw that many people used in their buildings for roofing and a floor covering. Hard roofs and clean floors mean fewer places for ratty roommates — and the diseases they might pass on to human fleas and lice.
Power Words
bacteria (singular: bacterium) Single-celled organisms. These dwell nearly everywhere on Earth, from the bottom of the sea to inside other living organisms (such as plants and animals).
biology The study of living things. The scientists who study them are known as biologists.
bubonic plague A disease caused by the bacterium Yersinia pestis. It’s transmitted by the bite of a flea that had previously bitten some rodent (or other mammal) infected with the germ. This form of plague causes fever, vomiting and diarrhea. It also inflames the lymph nodes, causing them to swell. Those swollen tissues, called buboes, give this form of the disease its name. Known as the Black Death, bubonic plague killed millions of people in Europe during a series of outbreaks during the Middle Ages.
colleague Someone who works with another; a co-worker or team member.
computer model A program that runs on a computer that creates a model, or simulation, of a real-world feature, phenomenon or event.
death rates The share of people in a particular, defined group that die per year. Those rates can change if the group is affected by disease or other deadly conditions (such as accidents, natural disasters, extreme heat or war and other sources of violence).
ecology A branch of biology that deals with the relations of organisms to one another and to their physical surroundings. A scientist who works in this field is called an ecologist.
ectoparasite A parasite such as a flea or louse, which lives outside of its host.
epidemic A widespread outbreak of an infectious disease that sickens many people (or other organisms) in a community at the same time. The term also may be applied to non-infectious diseases or conditions that have spread in a similar way.
gland A cell, a group of cells or an organ that produces and discharges a substance (or “secretion”) for use elsewhere in the body or in a body cavity, or for elimination from the body.
host (in biology and medicine) The organism (or environment) in which some other thing resides. Humans may be a temporary host for food-poisoning germs or other infective agents.
immune (adj.) Having to do with immunity. (v.) Able to ward off a particular infection. Alternatively, this term can be used to mean an organism shows no impacts from exposure to a particular poison or process. More generally, the term may signal that something cannot be hurt by a particular drug, disease or chemical.
immune system The collection of cells and their responses that help the body fight off infections and deal with foreign substances that may provoke allergies.
infect To spread a disease from one organism to another. This usually involves introducing some sort of disease-causing germ to an individual.
lymph A colorless fluid produced by lymph glands. This secretion, which contains white blood cells, bathes the tissues and eventually drains into the bloodstream.
lymph glands (or lymph nodes) Small nodules located in the armpits, groin and stomach, these organs are part of the lymph system. They secrete lymph and also serve as a storage place for some cells in the immune system.
model A simulation of a real-world event (usually using a computer) that has been developed to predict one or more likely outcomes. Or an individual that is meant to display how something would work in or look on others.
organ (in biology) Various parts of an organism that perform one or more particular functions. For instance, an ovary is an organ that makes eggs, the brain is an organ that makes sense of nerve signals and a plant’s roots are organs that take in nutrients and moisture.
organism Any living thing, from elephants and plants to bacteria and other types of single-celled life.
outbreak The sudden emergence of disease in a population of people or animals. The term may also be applied to the sudden emergence of devastating natural phenomena, such as earthquakes or tornadoes.
pandemic An epidemic that affects a large proportion of the population across a country or the world.
parasite An organism that gets benefits from another species, called a host, but doesn’t provide that host any benefits. Classic examples of parasites include ticks, fleas and tapeworms.
plague A term for any horrific infection that spreads easily and kills many people, usually quickly. Best known are the infections caused by the bacterium Yersinia pestis. Indeed, they are commonly referred to simply as the plague. In one form, people pick up the germ from the bite of infected fleas. This inflames the lymph nodes, causing them to swell. Those swollen tissues, called buboes, give this form of the disease its name: bubonic plague. When the disease is instead transmitted by inhaling the bacteria, people develop what’s known as pneumonic plague. This form of the disease can be spread when sick people cough. Pneumonic plague is the most deadly form, often killing its victims within 24 hours.
Proceedings of the National Academy of Sciences A prestigious journal publishing original scientific research, begun in 1914. The journal's content spans the biological, physical, and social sciences. Each of the more than 3,000 papers it now publishes each year is not only peer reviewed but also approved by a member of the U.S. National Academy of Sciences.
reservoir A large store of something. Lakes are reservoirs that hold water. People who study infections refer to the environment in which germs can survive safely (such as the bodies of birds or pigs) as living reservoirs.
rodent A mammal of the order Rodentia, a group that includes mice, rats, squirrels, guinea pigs, hamsters and porcupines.
sanitation The protection of human health by preventing human contact with our own bodily wastes, through hand washing, the use of toilets or latrines, keeping waste disposal separate from drinking-water sources, cleaning water to rid it of disease-causing agents, and disinfecting foods and materials that may be ingested or otherwise enter the body.
sewer A system of water pipes, usually running underground, to move sewage (primarily urine and feces) and stormwater for collection — and often treatment — elsewhere.
species A group of similar organisms capable of producing offspring that can survive and reproduce.
transmit (n. transmission) To send or pass along.
Yersinia pestis The bacterium that causes plague, both the bubonic and pneumonic forms.
Bethany Brookshire was a longtime staff writer at Science News Explores and is the author of the book Pests: How Humans Create Animal Villains. She has a Ph.D. in physiology and pharmacology and likes to write about neuroscience, biology, climate and more. She thinks Porgs are an invasive species.
Founded in 2003, Science News Explores is a free, award-winning online publication dedicated to providing age-appropriate science news to learners, parents and educators. The publication, as well as Science News magazine, are published by the Society for Science, a nonprofit 501(c)(3) membership organization dedicated to public engagement in scientific research and education. | He is a biologist at Colorado State University in Fort Collins. “It’s nice to see a model that shows [it could happen].”
Studying illnesses of the past is important for the future, Antolin notes. Those long-ago outbreaks can teach a lot about how modern diseases might spread and kill. “What we’re looking for are the conditions that allow epidemics or pandemics to occur,” he says. “What can we learn? Can we predict the next big outbreak?”
Even if rats played a role in the Black Death, they wouldn’t have been the biggest factor, Antolin explains. Instead, environmental conditions that allowed rats, fleas and lice to spend so much time around people would have played a larger role.
Until modern times, he notes, people were gross. They didn’t wash often and there were no modern sewers. Not only that, rats and mice could thrive in the straw that many people used in their buildings for roofing and a floor covering. Hard roofs and clean floors mean fewer places for ratty roommates — and the diseases they might pass on to human fleas and lice.
Power Words
bacteria (singular: bacterium) Single-celled organisms. These dwell nearly everywhere on Earth, from the bottom of the sea to inside other living organisms (such as plants and animals).
biology The study of living things. The scientists who study them are known as biologists.
bubonic plague A disease caused by the bacterium Yersinia pestis. It’s transmitted by the bite of a flea that had previously bitten some rodent (or other mammal) infected with the germ. This form of plague causes fever, vomiting and diarrhea. It also inflames the lymph nodes, causing them to swell. Those swollen tissues, called buboes, give this form of the disease its name. Known as the Black Death, bubonic plague killed millions of people in Europe during a series of outbreaks during the Middle Ages.
| no |
Mammalogy | Are rats responsible for spreading bubonic plague? | yes_statement | "rats" are "responsible" for "spreading" "bubonic" "plague".. "bubonic" "plague" is "spread" by "rats". | https://bpca.org.uk/news-and-blog/black-death-it-wasnt-me | Black Death? It wasn't me | Latest News from BPCA
Black Death? It wasn’t me
A new study suggests that rats might not be responsible for spreading the Black Death and subsequent epidemics of bubonic plague that rampaged across Europe, Asia and Africa for over 600 years.
For many people, the first thoughts about pest control came from a primary school history lesson where you learned about the Black Death. Your teacher may have told you how an estimated third of Europe's population (25 million people) was wiped out between 1347 and 1351 because rats spread fleas, which in turn spread bubonic plague.
A particularly good teacher may have even told you that’s why we keep rats away from people – so we stop the spread of deadly diseases and learn the lessons history taught us. The general public is generally pretty ignorant about our sector – but most people will tell you: rats equal plague.
A new study in the Proceedings of the National Academy of Sciences suggests that, while people commonly assume that rats and their fleas spread the plague, "human ectoparasites, like body lice and human fleas, might be more likely than rats to have caused the rapidly developing epidemics in pre-industrial Europe". In other words, humans spread the plague. A win for the rattus rattus PR team if ever there was one.
In modern instances of plague, such as the outbreak in Madagascar in 2017, rats and other rodents helped spread the disease. If Y. pestis bacterium infects rats, they can pass it to their fleas as they drink the rodents’ blood. When a plague-infected rat dies, its parasites abandon the corpse and can go on to bite humans – often with the help of domestic animals. This was thought to be how medieval plagues spread. However, the new study presents an alternative argument.
Prof Nils Stenseth, from the University of Oslo, told the BBC that, “we have good mortality data from outbreaks in nine cities in Europe, so we could construct models of the disease dynamics.” Using sophisticated computer simulations, the study tested three models for spreading disease outbreaks in each of these cities. They were by rats; airborne transmission; and lice and fleas living on humans and their clothes.
In seven of the nine cities simulated, the 'human parasite model' produced the best match for the pattern of the outbreak.
The study's authors believe that historical plagues spread far too quickly for rats to have been the primary transmitters of the disease; instead, human lice and fleas are to blame.
Before we give rats a free-pass on history’s most infamous epidemics, it’s worth saying that the study has plenty of room to improve its simulation model, and many scholars are still firmly pointing the finger at rodents.
However historical plagues spread, it’s a fact that from 2010 to 2015 there were 3,248 cases of bubonic plague reported worldwide, including 584 deaths, according to the World Health Organisation (WHO) – and rats and their fleas have been a significant transmitter. So, we’re not letting rats off the hook quite yet.
Plague facts:
3,248 cases of bubonic plague were reported worldwide in 2010-15.
Without treatment, bubonic plague results in the death, in around ten days, of up to 90% of those infected.
‘Black Death’ is a relatively new term. During the event itself it was often called ‘the Pestilence’.
In 2001, a US study tried to map the plague genome using a bacterium that had come from a dead vet. The vet died after a plague-infested cat sneezed on him as he tried to rescue it. | Latest News from BPCA
Black Death? It wasn’t me
A new study suggests that rats might not be responsible for spreading the Black Death and subsequent epidemics of bubonic plague that rampaged across Europe, Asia and Africa for over 600 years.
For many people, the first thoughts about pest control came from a primary school history lesson where you learned about the Black Death. Your teacher may have told you how an estimated third of Europe's population (25 million people) was wiped out between 1347 and 1351 because rats spread fleas, which in turn spread bubonic plague.
A particularly good teacher may have even told you that’s why we keep rats away from people – so we stop the spread of deadly diseases and learn the lessons history taught us. The general public is generally pretty ignorant about our sector – but most people will tell you: rats equal plague.
A new study in the Proceedings of the National Academy of Science suggests, while people commonly assume that rats and their fleas spread the plague, “human ectoparasites, like body lice and human fleas, might be more likely than rats to have caused the rapidly developing epidemics in pre-industrial Europe”. In other words, humans spread the plague. A win for the rattus rattus PR team if ever there was one.
In modern instances of plague, such as the outbreak in Madagascar in 2017, rats and other rodents helped spread the disease. If Y. pestis bacterium infects rats, they can pass it to their fleas as they drink the rodents’ blood. When a plague-infected rat dies, its parasites abandon the corpse and can go on to bite humans – often with the help of domestic animals. This was thought to be how medieval plagues spread. However, the new study presents an alternative argument.
Prof Nils Stenseth, from the University of Oslo, told the BBC that, “we have good mortality data from outbreaks in nine cities in Europe, so we could construct models of the disease dynamics.” | no |
Mammalogy | Are rats responsible for spreading bubonic plague? | yes_statement | "rats" are "responsible" for "spreading" "bubonic" "plague".. "bubonic" "plague" is "spread" by "rats". | http://www.greensborowildlife.com/rat-about.html | About rats | About rats
There are two main species of Greensboro rat that humans come into contact with: the black rat, which has a lifespan of around 12 months, and the brown rat, which has a lifespan of about two years (these figures are for wild rats). Rats vary in size, but most adults are 10 to 16 inches from nose to tail and can weigh up to a pound.
Rats have incredibly strong teeth; they've been known to chew through glass, cinder blocks, aluminium and lead. They have generally poor eyesight, but their senses of smell, taste, touch and hearing are all excellent.
I know history paints Greensboro rats as pretty nasty creatures, as they were credited with spreading bubonic plague, usually called the Black Death in the history books. The rats themselves did not spread bubonic plague; the fleas the North Carolina rats carried into human dwellings did. So even if the rats brought the fleas in, they were not directly responsible for spreading the plague. One disease that is linked with rats, and whose spread they do help along, is foot and mouth disease.
As far back as the late 19th century, people started breeding brown Greensboro rats as pets. Depending on how many generations they've been bred as pets, these rats now behave nothing like their wild counterparts, and they pose no more of a health risk than a cat or a dog. They are quite intelligent: they learn their names and can pick up a repertoire of entertaining tricks.
In the last 20 years, at least one species of rat has become very helpful to humans. The giant African rat has been taught to find and mark land mines in the ground; because of their light weight, these rats do not set off the mines while searching for them.
One little-known fact about rats that I found amazing is that they can swim for three days before they drown. When rats are happy and playing, they make a sound very similar to human laughter. I know most people think of rats as dirty animals, but the fact is they are one of the cleanest animals in nature, considerably cleaner than the average North Carolina house cat.
To look at a Greensboro rat, you might not think that they are among the best climbers in nature; their climbing ability mainly comes from the balance their tail gives them. The main habit that makes them great pets is that they sleep for over three quarters of all
daylight hours. | About rats
There are two main species of Greensboro rat that humans come into contact with, the first is the black rat which has
a lifespan of around 12 months the second is the Brown rat which is a lifespan of two years, these are for the
wild rats. Rats vary in size but most adults are 10 to 16 inches nose to tail and they can weigh up to a pound.
Rats have incredibly strong teeth, they've been known to chew through glass, cinderblocks, aluminium and lead.
They have generally poor eyesight but their sense of smell, taste, touch and hearing are all excellent.
I know history paints Greensboro rats as pretty nasty creatures as they were credited with spreading the bubonic plague, usually
called the Black Death in the history books. The rats did not spread bubonic plague, the fleas the North Carolina rats carried into
human dwellings spread bubonic plague, so even if the rats did bring them in they were not directly responsible for
spreading the plague. One disease that is linked with rats and they are responsible for helping its spread is foot
and mouth disease.
As far back as the late 19th century people started breeding brown Greensboro rats as pets, these rats depending on how many generations
they've been bred as pets now behave absolutely nothing like their wild counterparts and they also pose no more of a health risk
than a cat or a dog. They are quite intelligent, they learned their name and they can also learn a repertoire of entertaining tricks.
In the last 20 years at least one species of rat has become very helpful to humans, this is the giant African rat and they have taught
these North Carolina rats find and mark land mines in the ground, because of their light weight they do not set off the mines when searching for them.
One little-known fact about rats that I found amazing is that they can swim for three days before they drown. When rats are happy and
playing they make off a sound very similar to human laughter. | no |
Mammalogy | Are rats responsible for spreading bubonic plague? | yes_statement | "rats" are "responsible" for "spreading" "bubonic" "plague".. "bubonic" "plague" is "spread" by "rats". | https://www.darlington.gov.uk/environmental-health/pest-control/rats/ | Rats - Darlington BC | Cost of treatment and how to book
Payment must be made in advance. An officer will contact you to arrange a visit.
This service charges £10 (this charge does not apply for council tenants as pest control for mice and rats is included in the rent).
Are rats a danger to humans?
Yes. Rats are a serious hazard to public health. Aside from contaminating food with their droppings and urine, fleas from rats were responsible for spreading the bubonic plague. Today, such diseases as salmonella bacteria (food poisoning), leptospira (jaundice), and typhus are commonly spread by rats. Because of their unsanitary habits, secondary infections from rat bites can be serious and sometimes fatal. An infestation of rats must not be tolerated.
When are rats most common?
Rats are year-round pests. Under certain conditions, rats can survive outdoors during the winter. However, activity and indoor migration increase as the weather gets cooler and outdoor food and water sources decrease.
When am I most likely to see rats?
Rats are most active during the evening and remain so until the middle of the night. If food and water are scarce, or in the case of large infestations, rats become active during daylight hours.
Where do rats build nests?
Rats nest in any safe location near food and water. Outdoors, rats burrow into the ground. Indoors, nesting occurs in double walls, between ceilings and floors, in closed-in areas around worktops and anywhere rubbish is allowed to accumulate.
What are their breeding habits?
The average lifespan of a rat is 18 months. Young rats are born about 22 days after mating and will mature rapidly. Single females may have as many as 6 litters a year, averaging 6 to 14 young each. By 3 months of age, the young are independent and capable of reproduction. If not controlled, an infestation of rats will rapidly increase in numbers.
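To see why numbers like these add up so quickly, here is a rough, hypothetical back-of-the-envelope simulation. The litter size, survival and breeding assumptions in it are illustrative only and are not taken from this page.

```python
# Rough, hypothetical illustration of how quickly an unchecked rat infestation can grow.
# Assumptions (illustrative, not from the source): each mature female rears 6 litters per
# year of 8 surviving young (half female), and young females start breeding at 3 months.
litters_per_year = 6
young_per_litter = 8
months_to_maturity = 3

# Track females only, by age in months, starting from one mature female and one male.
females_by_age = {months_to_maturity: 1.0}
total_rats = 2.0

for month in range(1, 13):
    mature_females = sum(n for age, n in females_by_age.items() if age >= months_to_maturity)
    # On average, 6 litters per year means one litter every other month per mature female.
    newborns = mature_females * (litters_per_year / 12) * young_per_litter
    total_rats += newborns
    # Age every cohort by one month and add the new female pups (half of the newborns).
    females_by_age = {age + 1: n for age, n in females_by_age.items()}
    females_by_age[0] = newborns / 2

print(f"Approximate rats after one year: {total_rats:.0f}")
```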
How can I tell if I have an infestation of rats?
Rat droppings near available food sources are the most common sign of an infestation. Evidence of gnawing, rub marks, tracks, burrows, nests and damage to stored products are indications of the extent of an infestation.
What can I do to prevent an infestation of rats?
Rats will invade almost any business premises. However it is the presence of unsanitary conditions that encourages their activity. All goods must be stored in properly sealed containers and waste should be prevented from accumulating, or kept in containers with tight-fitting lids. Seal all openings to the outside, including wood around doors and windows; repair masonry and seal openings for utility lines, conduits and drains.
Simple actions to help prevent problems with rats:
Rats love compost bins. They are warm and full of food. Place wire mesh (1cm x 1cm) under the base of your compost bin or even better put it on a concrete base to stop rats getting in under the bin.
Keep rubbish in sealed bins with well fitting tops, and keep long grass to a minimum to reduce places for them to live. | Cost of treatment and how to book
Payment must be made in advance. An officer will contact you to arrange a visit.
This service charges £10 (this charge does not apply for council tenants as pest control for mice and rats is included in the rent).
Are rats a danger to humans?
Yes. Rats are a serious hazard to public health. Aside from contaminating food with their droppings and urine, fleas from rats were responsible for spreading the bubonic plague. Today, such diseases as salmonella bacteria (food poisoning), leptospira (jaundice), and typhus are commonly spread by rats. Because of their unsanitary habits, secondary infections from rat bites can be serious and sometimes fatal. An infestation of rats must not be tolerated.
When are rats most common?
Rats are year-round pests. Under certain conditions, rats can survive outdoors during the winter. However activity and indoor migration increases as weather gets cooler and outdoor food and water sources decrease.
When am I most likely to see rats?
Rats are most active during the evening and remain so until the middle of the night. If food and water are scarce, or in the case of large infestations, rats become active during daylight hours.
Where do rats build nests?
Rats nest in any safe location near food and water. Outdoors, rats burrow into the ground. Indoors, nesting occurs in double walls, between ceilings and floors, in closed-in areas around worktops and anywhere rubbish is allowed to accumulate.
What are their breeding habits?
The average lifespan of a rat is 18 months. Young rats are born about 22 days after mating and will mature rapidly. Single females may have as many as 6 litters a year, averaging 6 to 14 young each. By 3 months of age, the young are independent and capable of reproduction. If not controlled, an infestation of rats will rapidly increase in numbers.
How can I tell if I have an infestation of rats?
| yes |
Mammalogy | Are rats responsible for spreading bubonic plague? | yes_statement | "rats" are "responsible" for "spreading" "bubonic" "plague".. "bubonic" "plague" is "spread" by "rats". | https://www.kykopestprevention.com/blog/warning-signs-of-roof-rats | Here are the 5 warning signs of roof rats in the Phoenix metro | Here are the 5 warning signs of roof rats in the Phoenix metro
If there’s a “trinity” of pests here in the Phoenix metro, most experts would agree that bark scorpions and subterranean termites take the two spots. The third? Roof rats. Every Valley homeowner needs to know the warning signs of roof rats and when and how to take action.
In this article, we’ll introduce you to these pests and review how infestations take place. We’ll then discuss the common signs of roof rats, as well as how you should go about having an infestation removed.
What are roof rats?
Typically, roof rats are the black rat. Infamous for once spreading the bubonic plague to Europe, black rats are one of the world’s most successful and widespread species, living anywhere humans do. Roof rats, as the name implies, make their nests in Phoenix area attics. They’re a specialized rat that has adapted to the modern lifestyle of humans here in the desert.
Why are they so common in the Phoenix metro?
Gilbert and Tempe are both in the top-5 for roof rats nationwide. There are many reasons for this, but one of the primary ones is that the citrus trees in many backyards in the Valley provide a consistent food source for rats, while attics are generally undisturbed by local homeowners.
How do they infest new homes?
Roof rats can travel in many different ways to new homes. According to researchers at the University of Arizona, rats move 200-300 feet at night and are most active in the cooler months of the year. Like most rodents, roof rats are most active at dawn and dusk, and try to avoid movement during the light and heat of the day.
Most homeowners see roof rats in the evening as they head out from their shelter in search of food.
True to their name, these rats are experts at traveling high up. They’ll move along power lines and can climb up brick and stucco. When they reach their destination, they can enter homes through any nickel-sized opening or larger. As we’ll discuss below, this makes denying shelter to roof rats complicated.
Are roof rats dangerous?
Potentially, yes. Roof rats are infamous disease carriers. Black rats are responsible for some of the deadliest and most infamous disease outbreaks in human history: in medieval Europe and Asia, ship-borne black rats carried fleas infected with bubonic plague to new cities, killing millions of people and changing the course of history.
While it’s highly unlikely roof rats here in suburban Phoenix have come into contact with plague—cases of which do happen here in Arizona in wilderness areas—they can carry several other serious diseases. This can include typhus, jaundice, salmonellosis, and rat-bite fever. These diseases can be spread to humans through exposure to droppings, urine, carried fleas, or—as you may have inferred—bites.
If you do come into contact with a roof rat or its nest, leave it alone and instead call in one of our rodent professionals. Only a pest professional, wearing proper protective clothing, should clean out or deal with a rodent colony.
Preventing a roof rat infestation
Like all pests, roof rats need regular access to shelter, food, and water. If denied these things, they’ll typically move on from your home to an easier target. Homeowners who take proactive steps to keep their yard clean, pick up food waste, and seal off entry points are typically less likely to have a roof rat infestation.
Denying shelter
True to their name, roof rats prefer to build their colonies in attics and roofs, where they are less likely to be disturbed by people and potential predators, such as feral neighborhood cats or barn owls. Once established, this colony can grow quickly. However, these rats are far from picky. They can also create their nests beneath wood piles, in storage boxes, under shrubs, and in garages and storage sheds. An unkempt yard with abandoned vehicles, multiple storage sheds, and overgrown bushes is just about paradise for a rat.
To prevent an infestation, start by cleaning up your yard. Keep bushes and trees trimmed down, taking special care to remove overhanging tree branches near your roof. Clean up any boxes or waste, and consider storing wood in an elevated and sealed-off place. If you have a garage or storage shed, keep it clean and orderly. Avoid storing cardboard moving boxes on the floor.
Denying sustenance
Roof rats are hardy and adaptable omnivores. During the twilight hours of late dusk and early morning, they’ll leave their shelter to find food. One of the reasons why roof rats have flourished here in the Valley is because of our abundance of backyard citrus trees. Fallen oranges, grapefruit, and lemons are the perfect nearby food source for rats. However, they’ll also eat seeds, nuts, snails, roaches, crickets, and any type of human food waste, such as dropped bread crumbs.
If you want to cut them off from the buffet line, you’ll need to be vigilant about picking up dropped citrus in your backyard and regularly harvesting ripe citrus fruit from your trees. If you feed your pets outdoors, pick up their food as soon as they’re done. Close any outdoor garbage bins. This is a long-term strategy: like their distant pack rat cousins, roof rats are hoarders who will build up a “pantry” in their shelter to sustain them through lean times.
Denying water
Here in Phoenix, this might be the hardest element to control. Our homes have introduced incredible amounts of water to the natural landscape, from our sprinkler heads to our AC condensate drip lines. However, cutting down on standing water can help. Fix irrigation line leaks to prevent water from pooling around plants and trees, and pick up pet water bowls when they are not in use.
What are the common signs of roof rats?
In most cases, a homeowner finds a rat—either dead or alive—inside or outside their home. Roof rats are social creatures who live in colonies, so never assume that one rat is just a random occurrence. You’ll want to take action.
Even if you don’t see a rat, there might be other prominent signs of roof rats around. Here are just a few things to look out for:
Fallen, half-eaten fruit
If you have citrus trees on your property, look at fallen fruit. If the fruit is eaten or hollowed out, that probably means rats have been feasting on it.
Strange noises in the attic
You’ll often hear roof rats making noises in their nests in your attic. In the still of night, listen carefully. If you hear squeaking or scratching coming from your attic, it might be roof rats above you.
Odd pet behavior
If you have cats or dogs that are acting strangely—especially toward the ceiling—that might warrant a closer inspection. Our pets can typically pick up on the scent or sounds of roof rats long before we can.
How do you check for roof rats?
If you suspect you have roof rats, do not personally inspect your attic. Roof rat droppings, urine, and nests can contain dangerous bacteria that can make you and your family very sick. While rare, roof rats can also act aggressively when cornered or confronted in their colony. If you have seen a rat or have seen evidence of their presence, you’re at the point where you should bring in a pest professional for an inspection. Here in the Valley, our team at KY-KO Pest Prevention offers free roof rat inspections.
Getting rid of a roof rat infestation
This requires the expertise of a licensed and experienced pest professional—someone who has worked with a roof rat infestation before. Removing an infestation isn’t just about laying down some traps and calling it a day. It requires a strategic approach that focuses on removing the current infestation and denying food, water, and shelter to roof rats. In other words, one of our pest control experts can help you figure out how to kick your current roof rats out and then keep them out.
In our experience, a strategic combination of traps, home sealing, and habitat denial tends to work. On their own, approaches like the use of poison or “rat-catcher” cats do not end an infestation, and tend to introduce new sets of problems and issues. We recommend you talk to a rodent professional and avoid any do-it-yourself shortcuts. They just tend to not work. | Are roof rats dangerous?
Potentially, yes. Roof rats are infamous disease carriers. Black rats are responsible for some of the deadliest and most infamous disease outbreaks in human history: in medieval Europe and Asia, ship-borne black rats carried fleas infected with bubonic plague to new cities, killing millions of people and changing the course of history.
While it’s highly unlikely roof rats here in suburban Phoenix have come into contact with plague—cases of which do happen here in Arizona in wilderness areas—they can carry several other serious diseases. This can include typhus, jaundice, salmonellosis, and rat-bite fever. These diseases can be spread to humans through exposure to droppings, urine, carried fleas, or—as you may have inferred—bites.
If you do come into contact with a roof rat or its nest, leave it alone and instead call in one of our rodent professionals. Only a pest professional, wearing proper protective clothing, should clean out or deal with a rodent colony.
Preventing a roof rat infestation
Like all pests, roof rats need regular access to shelter, food, and water. If denied these things, they’ll typically move on from your home to an easier target. Homeowners who take proactive steps to keep their yard clean, pick up food waste, and seal off entry points are typically less likely to have a roof rat infestation.
Denying shelter
True to their name, roof rats prefer to build their colonies in attics and roofs, where they are less likely to be disturbed by people and potential predators, such as feral neighborhood cats or barn owls. Once established, this colony can grow quickly. However, these rats are far from picky. They can also create their nests beneath wood piles, in storage boxes, under shrubs, and in garages and storage sheds. An unkempt yard with abandoned vehicles, multiple storage sheds, and overgrown bushes is just about paradise for a rat.
To prevent an infestation, start by cleaning up your yard. | yes |
Mammalogy | Are rats responsible for spreading bubonic plague? | yes_statement | "rats" are "responsible" for "spreading" "bubonic" "plague".. "bubonic" "plague" is "spread" by "rats". | https://www.prestigepestcontrol.com/blog/post/savannah-s-rat-control-problem-and-how-to-combat-it | Blog - Savannah's Rat Control Problem and How To Combat It | Savannah's Rat Control Problem and How To Combat It
As a Savannah home or business owner, you have the potential to experience a long list of problems. Roof leaks, burglary, and broken appliances are only some of the issues you might be ready to endure. But are you ready for a rat infestation? Savannah has a rat problem, and the pests could come to your house next.
What Rats Are In Savannah?
Although there are several types of rats in Savannah, the only one making news headlines is the roof rat. According to a survey, Savannah has more complaints about roof rats than any other city in the United States; every year, residents in the city make more complaints to pest control professionals than anywhere else.
Roof rats are agile black rats that tend to invade homes by climbing on trees and roofs. In addition to being good at invading homes, these rats have another strike against them. They are notorious for spreading diseases and may be responsible for spreading the Bubonic Plague.
While roof rats aren't native to Savannah, they certainly have made themselves at home. Originally from Asia, roof rats love the warm and humid climate of Savannah. They have plenty of food, thanks to humans leaving their trash out and pantries accessible.
What's The Big Deal About Rats?
You might be wondering why rats are a problem. After all, they don't go out of their way to bite humans, and they mostly keep to themselves. Unfortunately, rats are much more dangerous than they seem.
First, rats have the ability to spread diseases. When they defecate and urinate in your home, rats have the potential to make you very sick. They also carry parasites that spread diseases, which further increases the risk of you getting sick.
Secondly, rats cause property damage. There's no doubt about it; if you have rats in your home, you will experience property damage. As part of their need to survive, rats chew on things. This keeps their teeth from getting too long, but it's bad news for your home. With the ability to chew through most building materials, rats can cause enough damage to leave you with hefty repair bills. If rats chew on your electrical wiring, you could be in danger of a fire.
What You Can Do About Rats
To avoid the frustrations and dangers that come with rat infestations, you need to take action. There are a few small things you can do to deter these Savannah pests.
Seal Up Openings: If a rat sees an opening in your home, it can chew at it to make an entrance. You should try to make your home harder to access by sealing it up well. Use caulk and foam to seal up potential access points.
Store Trash In Cans With Lids: Your open cans of garbage are a feast for rats. If you don't want to be overrun by these critters, you should store your garbage in cans with lids.
Keep Trees From Touching Your Home: If you have trees touching your home, roof rats have an easy way of getting inside. Keep your tree branches as far away from your home as possible.
Working With A Team Of Professionals
There's only one effective way of keeping rats away or evicting them from your home. For the best result, trust us at Prestige Pest Control. With years of industry experience, we know how to protect your home or business from rats. There's no need to let your home be the next location of a rat infestation. You can trust us for ongoing rat control, as well as other pest control needs. Call now to learn more. | Savannah's Rat Control Problem and How To Combat It
As a Savannah home or business owner, you have the potential to experience a long list of problems. Roof leaks, burglary, and broken appliances are only some of the issues you might be ready to endure. But are you ready for a rat infestation? Savannah has a rat problem, and the pests could come to your house next.
What Rats Are In Savannah?
Although there are several types of rats in Savannah, the only one that's making news headlines - roof rats; according to a survey, Savannah has more complaints about roof rats than any other city in the United States. Every year, residents in the city make more complaints to pest control professionals than anywhere else.
Roof rats are agile black rats that tend to invade homes by climbing on trees and roofs. In addition to being good at invading homes, these rats have another strike against them. They are notorious for spreading diseases and may be responsible for spreading the Bubonic Plague.
While roof rats aren't native to Savannah, they certainly have made themselves at home. Originally from Asia, roof rats love the warm and humid climate of Savannah. They have plenty of food, thanks to humans leaving their trash our and pantries accessible.
What's The Big Deal About Rats?
You might be wondering why rats are a problem. After all, they don't go out of their way to bite humans, and they mostly keep to themselves. Unfortunately, rats are much more dangerous than they seem.
First, rats have the ability to spread diseases. When they defecate and urinate in your home, rats have the potential to make you very sick. They also carry parasites that spread diseases, which further increases the risk of you getting sick.
Secondly, rats cause property damage. There's no doubt about it; if you have rats in your home, you will experience property damage. As part of their need to survive, rats chew on things. This keeps their teeth from getting too long, but it's bad news for your home. | yes |
Mammalogy | Are rats responsible for spreading bubonic plague? | no_statement | "rats" are not "responsible" for "spreading" "bubonic" "plague".. "bubonic" "plague" is not "spread" by "rats". | https://www.zmescience.com/research/studies/black-plague-18082011/ | Rats not responsible for black plague | Rats not responsible for black plague
A recent study has shown that the plague spread so quickly that the carriers couldn't have been rats, as is commonly believed.
The black plague, or Black Death as it is sometimes called, was a disease outbreak so horrible that it killed some 30-60% of the population of Europe. Even to this day, few diseases have been even nearly as deadly, and most of the blame fell on rats, which were believed to be the carriers. However, a study conducted in London by archaeologist Barney Sloane showed that the disease spread so fast that the carriers couldn't have been rats, and that there is only one possible explanation: the carriers were humans.
"The evidence just isn't there to support it," said Barney Sloane, author of The Black Death in London. "We ought to be finding great heaps of dead rats in all the waterfront sites but they just aren't there. And all the evidence I've looked at suggests the plague spread too fast for the traditional explanation of transmission by rats and fleas. It has to be person to person – there just isn't time for the rats to be spreading it."
He even raised some questions about the disease itself, casting some doubt on whether it was in fact bubonic plague or not.
"It was certainly the Black Death but it is by no means certain what that disease was, whether in fact it was bubonic plague."
The study is still a work in progress, and probably more light will be cast on the matter soon enough. | Rats not responsible for black plague
A recent study has shown that the plague spread so quickly that the carriers couldn't have been rats, as is commonly believed.
The black plague, or black death as it is sometimes referred to was a disease outburst so horrible that it killed some 30-60% of the population of Europe. Even to this day, few diseases have been even nearly as deadly; and most of the blame was taken by rats, who were believed to be the carriers. However, a study conducted in London by Barney Sloane, archaeologist, showed that the disease spread so fast that the carriers couldn't be rats, and that there is only one possible explanation - the carriers were humans.
"The evidence just isn't there to support it," said Barney Sloane, author of The Black Death in London. "We ought to be finding great heaps of dead rats in all the waterfront sites but they just aren't there. And all the evidence I've looked at suggests the plague spread too fast for the traditional explanation of transmission by rats and fleas. It has to be person to person – there just isn't time for the rats to be spreading it. "
He even raised some questions about the disease itself, casting some doubt on whether it was in fact bubonic plague or not.
"It was certainly the Black Death but it is by no means certain what that disease was, whether in fact it was bubonic plague. "
The study is still a work in progress, and probably more light will be cast on the matter soon enough. | no |
Mammalogy | Are rats responsible for spreading bubonic plague? | no_statement | "rats" are not "responsible" for "spreading" "bubonic" "plague".. "bubonic" "plague" is not "spread" by "rats". | https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003039 | Host Resistance, Population Structure and the Long-Term ... | Figures
Abstract
Although bubonic plague is an endemic zoonosis in many countries around the world, the factors responsible for the persistence of this highly virulent disease remain poorly known. Classically, the endemic persistence of plague is suspected to be due to the coexistence of plague resistant and plague susceptible rodents in natural foci, and/or to a metapopulation structure of reservoirs. Here, we test separately the effect of each of these factors on the long-term persistence of plague. We analyse the dynamics and equilibria of a model of plague propagation, consistent with plague ecology in Madagascar, a major focus where this disease is endemic since the 1920s in central highlands. By combining deterministic and stochastic analyses of this model, and including sensitivity analyses, we show that (i) endemicity is favoured by intermediate host population sizes, (ii) in large host populations, the presence of resistant rats is sufficient to explain long-term persistence of plague, and (iii) the metapopulation structure of susceptible host populations alone can also account for plague endemicity, thanks to both subdivision and the subsequent reduction in the size of subpopulations, and extinction-recolonization dynamics of the disease. In the light of these results, we suggest scenarios to explain the localized presence of plague in Madagascar.
Author Summary
Bubonic plague, known to have marked human history by three deadly pandemics, is an infectious disease which mainly circulates in wild rodent populations and is transmitted by fleas. Although this disease can be quickly lethal to its host, it has persisted on long-term in many rodent populations around the world. The reasons for this persistence remain poorly known. Two mechanisms have been invoked, but not yet explicitly and independently tested: first, the spatial structure of rodent populations (subdivision into several subpopulations) and secondly, the presence of, not only plague-susceptible rodents, but also plague-resistant ones. To gain insight into the role of the above two factors in plague persistence, we analysed a mathematical model of plague propagation. We applied our analyses to the case of Madagascar, where plague has persisted on central highlands since the 1920s and is responsible for about 30% of the human cases worldwide. We found that the long-term persistence of plague can be explained by the presence of any of the above two factors. These results allowed us to propose scenarios to explain the localized presence of plague in the Malagasy highlands, and help understand the persistence of plague in many wild foci.
Funding: This work was funded by an ATI-type III program of the Institut de Recherche pour le Développement (CB) and by a NSF grant DMS 0540392 (FD). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Although bubonic plague has marked human history by three pandemics (Justinian plague in the 6th–8th centuries, Medieval plague from the 14th century onwards and Asiatic plague since 1894 [1]), this zoonosis caused by the coccobacillus Yersinia pestis is primarily a rodent disease. Its persistent circulation in wild reservoirs is responsible for occasional epidemics in human populations [2], [3]. Each plague focus has distinct characteristics, but all have mammal hosts as reservoirs and fleas as vectors.
Two main factors are suspected to explain the endemic persistence of plague despite its high virulence. The first one is the coexistence of plague resistant and plague susceptible rodents in many wild foci of the world. Susceptible hosts are assumed to allow plague transmission by developing the high septicemia needed for the disease to spread, while resistant hosts would help maintain the host and flea populations and would lower the effective rate of encounter between infectious fleas and susceptible hosts [4]–[6]. The second factor which could explain plague endemism is the host metapopulation structure [7]–[10], which may allow for extinction-recolonization dynamics of plague in local foci, between which the disease spreads slowly [11], [12]. Theoretical models have shown that these extinction-recolonization dynamics are involved in the persistence of various infectious diseases, e.g. measles [8], [13], [14]. A few other mechanisms are thought to favour the persistence of plague, such as the presence of multiple hosts [5], [15], the possible persistence of Y. pestis in soils [16], [17], the direct transmission between rats inside burrows [18], [19] or the heterogeneity in the phenology of the host reproduction [15]. These alternative explanations will be discussed at the end of the article.
Explaining the endemism of plague has been the objective of several theoretical studies [15], [20]–[22]. However, the roles of resistance and of metapopulation structure of rodent reservoirs for plague persistence have rarely been explored separately. For instance, the model developed by Keeling & Gilligan [21] showed that metapopulation structure can explain the long-term persistence of plague via extinction-recolonization dynamics of the disease, but the theoretical population that they modelled included some resistant individuals, so that the roles of both factors cannot be disentangled. This is also the case for theoretical studies on the Kazakh focus [22], [23], where the hosts are modelled as partly resistant. In populations of susceptible hosts, such as prairie dogs (Cynomys spp.) in the United States, the link between spatial structure and plague persistence has been empirically observed [24]–[26] and theoretically confirmed, at least on short time scales (Salkeld et al. found that plague transmission between adjacent coteries of a structured susceptible host population lead to enzootic phases which last for more than 1 year in about 25% of model runs [27]). Also, several studies pointed out the need to test for the different mechanisms involved in plague endemism [5], [28].
Madagascar is one of the major plague foci in the world, accounting for 31% of the 50,000 reported human cases worldwide between 1987 and 2009 [29]. Bubonic plague was introduced in Madagascar in 1898 [30] and spread to central highlands in the 1920s [31]. Since that time, the disease persists in this region at the landscape level. In coastal areas and regions below 800 m of altitude, only sporadic urban epidemics occurred, due to human-mediated translocation of infected rodents from central highlands [30], [32]. In Madagascar, the main host of plague is the black rat, Rattus rattus[30], that is widespread throughout the island [30], while two species of fleas are involved as vectors [31]: Xenopsylla cheopis, the oriental rat flea, which has a cosmopolitan distribution, and Synopsyllus fonquerniei, an endemic flea from Madagascar whose distribution is restricted to central highlands. Compared to many other natural plague foci, only a few species are involved in the transmission of the disease [3]. Nevertheless, and despite its importance regarding public health, the causes of plague persistence in Madagascar have never been explored. Preliminary population genetic studies suggested that rat populations from the highlands are more geographically structured than those of the coastal areas, probably because of the more rugged physical landscape that limits migration [33]. Also, consistent with the hypothesis of a causal relationship between plague persistence and host resistance, at least 50% of the rats caught in Malagasy highlands are plague-resistant, whereas they are all susceptible in low altitude plague-free areas [34], [35]. However, as highlands were colonised by rats from coastal areas several centuries ago [34], the evolution of plague resistance in Malagasy black rats may be recent and posterior to the spread of the disease. It might thus be a consequence rather than a primary cause of plague persistence in rural areas of central highlands.
Building on the model of Keeling & Gilligan [20], [21], we developed a theoretical approach to evaluate independently the roles of host population structure and of host resistance in the long-term persistence of plague. The model was parameterized using data on plague ecology in Madagascar when available, and data from the literature otherwise. The sensitivity of the model to a range of parameters was tested. We evaluated the consistency of the hypothesis of a recent evolution of plague resistance in Madagascar, and identified the parameters that need to be measured in order to test it.
Materials and Methods
Model
Our model of plague epidemiological dynamics is built on the framework developed by Keeling and Gilligan [20], [21] and is parameterized using data from studies on plague ecology in Madagascar when available [18], [34]–[36]. The system of differential equations (1) tracks the numbers of individuals and the epidemiological status of the rat (host) and flea (vector) populations. The rodent host population is composed of three phenotypes: healthy, plague-susceptible rats; healthy, plague-resistant rats; and infectious rats. Two categories of vectors are taken into account: the mean number of fleas living on a rat (the pulicidian index) and the number of free infectious fleas.
The birth rate of rats is assumed to be density dependent [37] and is modelled by a logistic equation, defined by a maximal birth rate and a carrying capacity of the rat population. Rats are assumed to die naturally at a constant rate. We assume no direct cost of resistance; however, only a proportion of the offspring of resistant rats are resistant (this proportion is the heritability of resistance), the other offspring all being susceptible to plague [36]. In contrast, all the offspring of susceptible rats are born susceptible to plague [36]. Susceptible rats can contract the disease and become infectious, while resistant rats always remain uninfected, which is a realistic assumption in the context of the Malagasy plague [36]. Infection happens when free infectious fleas land on susceptible rats and transmit the bacillus according to a transmission parameter. Free infectious fleas come randomly into contact with rats with a probability of encounter [38] that depends on the search efficiency of fleas. Following [39], the infection of rats by fleas is modelled as a frequency-dependent process, which defines the force of infection. Infectious rats quickly die from septicemia, which results in an additional mortality term, also called the virulence of the bacillus. The death of each of these rats releases its fleas into the environment, increasing the number of free infectious fleas. Free infectious fleas die at a constant rate. Fleas on the rats are assumed to have density-dependent growth, with a maximal growth rate and a carrying capacity per rat. All these assumptions result in the system of differential equations (1a)–(1e).
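To fix ideas, a minimal numerical sketch of such a rat–flea system is given below in R, the language used for our numerical analyses. All variable and parameter names (S, I, R_res, N, Ffree, r, K_R, d, p, beta, a, m, r_F, K_F, d_F) are illustrative placeholders chosen for readability, not the notation of system (1), and infectious rats are assumed not to reproduce in this simplified sketch.

library(deSolve)

# Minimal sketch of a rat-flea plague system in the spirit of system (1).
# All names are illustrative placeholders, not the notation of the paper.
plague_rhs <- function(t, y, parms) {
  with(as.list(c(y, parms)), {
    T_tot  <- S + I + R_res                                        # total number of rats
    lambda <- beta * Ffree * (1 - exp(-a * T_tot)) / max(T_tot, 1e-9)  # frequency-dependent force of infection
    dS <- r * (S + (1 - p) * R_res) * (1 - T_tot / K_R) - d * S - lambda * S
    dI <- lambda * S - (d + m) * I                                 # m: disease-induced mortality (virulence)
    dR <- r * p * R_res * (1 - T_tot / K_R) - d * R_res
    dN <- r_F * N * (1 - N / K_F)                                  # fleas living on rats (index per rat)
    dF <- (d + m) * I * N - d_F * Ffree                            # fleas released by dying infectious rats; free fleas die at rate d_F
    list(c(dS, dI, dR, dN, dF))
  })
}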
Our model includes several modifications compared to the model of Keeling & Gilligan [20], [21], in order to better depict wild plague foci, and specifically that of Madagascar. In our model, (i) infectious rats do not recover, as frequently observed [4], [6], [36], (ii) free infectious fleas either find a host or quickly die from starvation, a more explicit modelling of two events that were not distinguished in [21], and (iii) the resistant offspring of resistant rats also have a density-dependent birth rate, whereas they grew exponentially in [20]. Nevertheless, these changes do not alter the main characteristics of the outputs of the model (compare Figures 1 and 2 with Supporting Figures S1 and S2).
Figure 1. Equilibrium states for a susceptible population, as functions of the rats' maximal birth rate and the transmission rate, for two carrying capacities of the rat population ((a) and (b)).
Values for the other parameters follow those presented in Table 1. The three stable equilibrium states are shown in black, dark grey and light grey. The dynamics for four couples of parameter values are given in Supporting Figure S4.
Figure 2. Equilibrium states for a rat population including resistant rats, as functions of the maximal birth rate of rats and the transmission rate.
Parameter values are given in Table 1. The four stable equilibrium states are shown in black, dark grey, light grey and white. The dynamics for four couples of parameter values are given in Supporting Figure S8.
In the first steps of this study, model (1) is also analysed without the class of resistant rats (no resistant rats initially in the system; see system (S1.1) in the Supporting Text S1), in order to investigate their role in plague persistence.
Parameter values
The parameter values that we use come preferentially from experiments or field observations done in the context of the Malagasy plague focus. When relevant data are lacking, parameters are derived from values found in the plague literature (see Table 1).
Study of the equilibria
The basic reproductive number of a disease, R0, is the expected number of secondary cases caused by one infected individual introduced into a susceptible population [40]. A disease is expected to spread only if R0 is greater than unity. We calculated R0 in our model using the Next Generation Approach [40], [41]. Details of the calculations are presented in the Supporting Text S2.
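As an illustration of the Next Generation Approach, the sketch below computes a numerical R0 as the spectral radius of F V^-1 for the two infected compartments of the sketch model above (infectious rats and free infectious fleas), evaluated at the disease-free equilibrium. The matrix entries and parameter values are illustrative placeholders, not the analytical expressions derived in the Supporting Text S2.

# Numerical Next Generation sketch with placeholder entries and values.
parms_ngm <- c(K_R = 1000, d = 0.2, beta = 4, a = 0.004, m = 20, K_F = 6, d_F = 10)

R0_numeric <- with(as.list(parms_ngm), {
  F_mat <- matrix(c(0,             beta * (1 - exp(-a * K_R)),  # new infectious rats caused by one free flea
                    (d + m) * K_F, 0),                          # free fleas released per dying infectious rat
                  nrow = 2, byrow = TRUE)
  V_mat <- diag(c(d + m, d_F))                                  # rates of leaving each infected compartment
  max(Mod(eigen(F_mat %*% solve(V_mat))$values))                # spectral radius of F V^-1
})
R0_numeric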
The system of differential equations (1) is then solved numerically, using the deSolve package [42] in R [43]. Deterministic simulations are run for long enough to ensure that equilibrium states are reached. When analytical solutions for the equilibria can be found (for example with system (S1.2) without fleas, in the Supporting Text S1), we can check the accuracy of the numerical integrations of the model. There are four qualitative types of equilibrium states for the rat populations: (i) extinction of the whole population; (ii) persistence of susceptible rats only; (iii) persistence of susceptible and infected rats but extinction of resistant rats (in systems initially containing resistant rats); and finally (iv), in the model with resistant rats, coexistence of susceptible, infected and resistant rats. We consider a class to be extinct when the number of individuals drops at least once below a small threshold during the final years of the numerical integration (to avoid any influence of the initial state of the system on the extinction criterion).
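With the sketch function defined above, such an integration and a crude extinction check could look as follows; the initial conditions, parameter values and extinction threshold are arbitrary illustrations, not the values of Table 1.

# Illustrative integration of the sketch model; values are placeholders, not those of Table 1.
parms <- c(r = 5, K_R = 1000, d = 0.2, p = 0.9, beta = 4,
           a = 0.004, m = 20, r_F = 20, K_F = 6, d_F = 10)
y0    <- c(S = 990, I = 10, R_res = 0, N = 6, Ffree = 0)
times <- seq(0, 200, by = 0.1)                            # time in years

out <- ode(y = y0, times = times, func = plague_rhs, parms = parms)

# Crude classification of the state reached: a class is considered extinct
# if it drops below a small threshold during the last part of the run.
tail_out <- out[out[, "time"] > 150, ]
extinct  <- apply(tail_out[, c("S", "I", "R_res")], 2, function(x) any(x < 1))
print(extinct)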
Spatial structure: modelling metapopulations
To study the effect of spatial structure on disease persistence, a metapopulation of susceptible hosts only (i.e. without resistant hosts), with a fixed total carrying capacity, is modelled as a set of subpopulations of equal size. We neglect the effect of distance by assuming that all subpopulations are equidistant (the island model of population genetics [44]). We consider (i) no spatial structure, (ii) a weak spatial structure (i.e. a low degree of population subdivision) and (iii) a higher degree of population subdivision. The fraction of infections that occur between subpopulations is given by a coupling parameter. Although rats in Madagascar may visit other subpopulations, thereby spreading the disease, capture-recapture studies have shown that these movements are only temporary [18], [45]. We therefore model a migrating force of infection, instead of the migration of the rats themselves [46]. The value of this coupling parameter was estimated to be around 1% [18], [21].
The force of infection in system (1) is thus modified for each subpopulation, which contains its own rats and fleas, so as to account for the fraction of infections caused by infectious fleas from the other subpopulations (equation (2)).
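One possible way (among others) of writing such a coupled force of infection, for the sketch model introduced above, is shown below; epsilon denotes the fraction of infections occurring between subpopulations, and all names remain illustrative placeholders.

# Coupled force of infection for n equidistant subpopulations (illustrative sketch).
# Ffree and T_tot are vectors of length n (one entry per subpopulation).
coupled_force <- function(Ffree, T_tot, beta, a, epsilon) {
  n <- length(Ffree)
  foreign <- (sum(Ffree) - Ffree) / (n - 1)                 # mean free infectious fleas in the other subpopulations
  eff_F   <- (1 - epsilon) * Ffree + epsilon * foreign      # effective number of free infectious fleas "seen" locally
  beta * eff_F * (1 - exp(-a * T_tot)) / pmax(T_tot, 1e-9)  # frequency dependent, as in the sketch above
}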
Stochastic analyses
To assess the effect of population structure on the persistence of the disease, we use a stochastic version of our model without resistant rats (system (S1.1) in the Supporting Text S1), based on the Gillespie algorithm [47] as implemented in the GillespieSSA R package [48]. It simulates a Markov stochastic process in continuous time and with discrete state values. We ran simulations both with and without metapopulation structure, in order to compare the persistence of plague (one hundred replicates for each set of parameters). The number of simulations in which susceptible rats or infectious rats persist was recorded over time, to obtain an estimate of the probability of extinction of each class through time.
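To give a concrete idea of the procedure, a stripped-down stochastic sketch (susceptible and infectious rats only, with no explicit flea compartment) could be simulated with the GillespieSSA package as follows; the reactions and rates are purely illustrative and much simpler than the full model.

library(GillespieSSA)

# Stripped-down stochastic sketch; reactions and rates are placeholders.
parms_si <- c(r = 5, K_R = 1000, d = 0.2, beta = 10, m = 20)
x0       <- c(S = 990, I = 10)

# Propensity functions, written as character strings evaluated by ssa():
a <- c("max(r * S * (1 - (S + I) / K_R), 0)",   # birth of a susceptible rat
       "d * S",                                 # natural death of a susceptible rat
       "beta * S * I / max(S + I, 1)",          # infection of a susceptible rat (frequency dependent)
       "(d + m) * I")                           # death of an infectious rat

# State-change matrix: one row per state variable, one column per reaction.
nu <- matrix(c(+1, -1, -1,  0,    # change in S
                0,  0, +1, -1),   # change in I
             nrow = 2, byrow = TRUE)

set.seed(1)
sim <- ssa(x0 = x0, a = a, nu = nu, parms = parms_si, tf = 50)

# sim$data is a matrix whose first column is time, followed by the state variables;
# the minimum of the last column gives the lowest number of infectious rats reached.
min_I <- min(sim$data[, ncol(sim$data)])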
Results
Long-term plague persistence without resistance and without spatial structure
We first investigated the different outcomes of the model without resistance and without population structure, depending on the values of the transmission parameter and of the maximal birth rate of rats.
When there are no resistant rats and no population structure (system (S1.1) in the Supporting Text S1), the rat population is viable if the maximal birth rate of rats is larger than their mortality rate (Figures 1 and 2). The disease propagation threshold, R0 = 1, sets the limit between the disease-free and the enzootic equilibria of the rat population. Using the Next Generation Method, we obtained the expression of R0 given in equation (3).
The propagation of plague is favoured by a high transmission rate from fleas to rats, which increases the number of infectious rats, and by a high flea carrying capacity per rat, which increases the number of free fleas, the vectors of the disease (equation (3)). It is also favoured by a high carrying capacity of the rat population and by a high search efficiency of fleas, through their direct effect on the probability that a flea finds a host. Finally, a high mortality rate of free infectious fleas disadvantages disease propagation by limiting the number of vectors. Note, however, that the fact that plague can initially spread in a susceptible rat population, although necessary, is not a sufficient condition for the long-term persistence of the disease.
Some parameters cannot currently be estimated from data collected in Madagascar, but the value of R0 is not sensitive to changes in their values (see the sensitivity analysis of R0 in Supporting Figure S3). Other parameters have more effect on the value of R0, but their range of possible values is better known [6], [18]. As we did not have a precise estimate of the transmission rate (it varies between fleas and depends on whether or not they are blocked; transmission efficiencies have been estimated for blocked X. cheopis [49], [50] and for unblocked X. cheopis [51]), and as it has a strong effect on the value of R0, we investigated the outcomes of the model over a range of transmission rates and calculated the critical transmission rate of the disease, defined as the value of the transmission rate for which R0 = 1 (see equation (4) below).
In the deterministic model, the disease initially spreads if and only if R0 > 1, which is equivalent to the condition on the transmission rate given in equation (4).
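Independently of the exact analytical form of equation (4), the critical transmission rate can also be obtained numerically by root-finding on R0 as a function of the transmission rate; the sketch below does this for the illustrative R0 of the Next Generation sketch above, with all values remaining placeholders.

# Numerical determination of the critical transmission rate, defined by R0 = 1,
# using the illustrative R0 of the earlier sketch (placeholder values).
R0_of_beta <- function(beta, K_R = 1000, a = 0.004, K_F = 6, d_F = 10) {
  sqrt(beta * (1 - exp(-a * K_R)) * K_F / d_F)
}

beta_c <- uniroot(function(b) R0_of_beta(b) - 1, interval = c(1e-6, 100))$root
beta_c   # transmission rate at which this placeholder R0 crosses unity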
The critical transmission rate is plotted as a horizontal black line in Figures 1, 2 and 3; its values match those obtained by numerical simulations (Figure 1). However, as already mentioned, the condition R0 > 1 does not imply long-term persistence: Figure 1 shows that the equilibrium state with disease persistence disappears when the transmission rate increases further above the critical threshold, especially for large host populations. For large host populations and transmission rates just above the threshold, strong oscillations of the number of rats occur in each class, with low values between the peaks (Supporting Figures S4 and S5). For higher transmission rates, no oscillations occur, but an epidemic wave decimates the host population (Supporting Figure S4): both the disease and the rat population therefore go extinct.
Without resistant rats, the disease thus cannot persist in the long run within large host populations, except over a narrow range of transmission values. For smaller population sizes, however, the amplitude of the dynamics decreases (Supporting Figure S5), which prevents the extinction of the rat population and allows for disease persistence (Figure 1(b)). We thus observe that, above the critical transmission rate, large host population sizes hinder the long-term persistence of plague.
It is worth noting that the same system without the flea compartment, with direct disease transmission instead (system (S1.2) in the Supporting Text S1), shows stable equilibria when the transmission rate is above the critical value (Supporting Figure S6): the vectors therefore play a major role in the high-amplitude dynamics leading to plague extinction, by prolonging the infection process after the rats' death (free fleas infected by Y. pestis survive long enough to spread the disease widely). This point highlights the importance of accounting for flea demography when studying the epidemiology of plague.
The above results do not depend strongly on the values of the other parameters linked to the behaviour of fleas and rats: changing these values may have a quantitative effect on the critical transmission rate, but it does not modify the qualitative behaviour of the system (see the sensitivity analysis of the equilibrium states in Supporting Figure S7).
When resistant rats are included in the system (system (1)), three stable equilibrium states exist (Figure 2). The threshold for disease propagation, i.e. R0 = 1, corresponds here to the limit between the equilibrium without the disease and the equilibrium with disease persistence. The expression of R0 remains the same as without resistant hosts (see equation (3)).
Contrasting with the system without resistant rats, changing the carrying capacity of the rat population does not influence the equilibrium states: when resistant rats are included in the model, plague persists as long as R0 > 1 (Figure 2 and Supporting Figure S8). The sensitivity analysis showed that this result is not sensitive to changes in parameter values (Supporting Figure S9). The pattern of short epidemics followed by disease extinction that we previously observed, and which was due to a lack of surviving susceptible rats, no longer occurs, because resistant rats allow the maintenance of not only resistant but also susceptible phenotypes in the population (through the partial heritability of resistance).
In order to study the effect of spatial structure alone, we here assumed that resistant rats were absent (see system (S1.1) in the Supporting Text S1). The deterministic analysis of the system shows that host population structure alone can allow for disease persistence (Figure 3). Indeed, when the metapopulation is subdivided into enough subpopulations, oscillations in the numbers of healthy and infectious rats occur in each subpopulation, but the numbers of individuals stay above unity. Host population structure thus allows plague persistence for parameter values where the disease would go extinct in non-structured populations. Fragmentation turns large host populations, which undergo high-amplitude cycling dynamics (Figures 1(a) and 3(a)), into small subpopulations, which undergo dynamics of decreased amplitude (Figures 1(b), 3(c) and Supporting Figure S5). However, if the total carrying capacity of the metapopulation is strongly decreased, the densities of rats in each subpopulation become too low to allow for disease persistence.
Stochastic analyses with parameter values such that the transmission rate lies above the critical threshold revealed that even a weak spatial structure increases the time to disease extinction by several decades (Figures 4(a) and 4(b)). The effect of population structure is twofold. First, population structure introduces extinction-recolonization dynamics of the disease between local foci (Supporting Figure S10), due to the asynchrony of the dynamics between subpopulations. Secondly, consistent with our deterministic results in non-structured populations of susceptible rats, as long as the transmission rate remains above the critical value the disease persists for longer in smaller populations (Figure 4(c)) than in larger ones (Figure 4(a)). Thus, the longer persistence of the disease in the four-subpopulation metapopulation (Figure 4(b)) is due both to the reduction in the size of each subpopulation and to the extinction-recolonization dynamics of the disease. However, if population subdivision is too high, or the subpopulations too isolated, the extinction time decreases again, as recolonization events become rare (Supporting Figure S11).
Figure 4. Estimated probability of persistence of susceptible rats and infectious rats through time, in (a) one large non-structured population, (b) a metapopulation of four subpopulations with the same total carrying capacity, and (c) one smaller non-structured population.
This probability was estimated from 100 simulations; parameter values are given in Table 1. Supporting Figure S10 illustrates, for one of the simulations in (b), the extinction-recolonization dynamics of the disease between the subpopulations.
Discussion
Host population size and disease persistence
In rat populations without resistant rats, our results show that the persistence of the disease is favoured by intermediate population sizes. This may seem surprising given that the propagation of many infectious diseases is known to be favoured by larger host population sizes [52], [53]. However, disease invasion and persistence are two very different phenomena [54], and the classically reported effect of population size on R0 [53] is a matter of invasion rather than persistence. Here, the presence of the vectors, the fleas, amplifies the spread of the disease (the fleas act as a very short-term external reservoir [55], [56]) and thus triggers, after an intense epidemic, the extinction of the disease in large populations. Accordingly, other empirical studies have reported that high host carrying capacities favour the invasion but not the persistence of plague. In Kazakhstan, for instance, plague epidemics have been shown to be preceded by an increase in gerbil abundance above a minimum abundance threshold [22], [27], but gerbil abundance predicts the probability of plague epidemics better than plague endemicity (i.e., long-term persistence) [23]. Heier et al. [23] suggested that although the initial spread of plague is faster when the population density of rodents is high, so is the extinction of the rodent population [23].
Roles of resistance and structure
In large rat populations, we find that the presence of resistant rats alone may explain plague persistence. This supports the hypothesis of a role for resistance in the endemism of plague [55]. Keeling & Gilligan [21] showed that if the initial proportion of resistant rats is below 20%, short epizootics are more likely to occur than disease persistence. Previous theoretical studies [28], [57] assumed that resistant rats could act as plague reservoirs, by carrying infectious fleas, or that infectious rats could recover [21], thereby replenishing the population of disease-susceptible rats. Our results show that these assumptions are not required to account for the persistence of this highly virulent disease: what matters most is that resistant rats provide a source of new susceptible rats (since resistance is not totally heritable).
Spatial structure alone may also account for plague persistence. The possible recovery of infectious rats and the presence of resistant hosts were included in the model of Keeling and Gilligan [21], but we show here that they are not necessary to induce the long-term persistence of plague. A weak structure is enough to explain decades of disease persistence, which confirms what was already suggested by Salkeld et al. [27]. The effect of spatial structure results from the combined effects of reduced subpopulation sizes and asynchrony between subpopulations. As in [58], [59], we indeed find that plague extinction takes longer for an intermediate force of coupling between subpopulations. Interestingly, the extinction-recolonization dynamics we observe have about the same tempo as the recurrent re-emergences that have been recorded in some plague foci, such as the Kazakh focus, where epizootics last two to five years and occur every two to eight years [60]. Our results on the role of spatial structure are supported by field observations on prairie dogs (Cynomys sp.): in the United States, prairie dog colonies are on average smaller and separated by larger distances in regions where plague has historically been endemic than in regions where plague is historically absent [24]. Mortality due to Y. pestis is close to 100% for prairie dogs: the theoretical model we developed for susceptible rats may thus be applied to this example.
Population subdivision alone and the presence of resistant rats alone may thus each contribute to the persistence of plague in natural foci. However, host population structure allows the persistence of the disease for a duration that depends on the degree of spatial structure and on the features of the host and flea populations, whereas the presence of resistant hosts may allow a stable, long-term persistence of plague.
Two hypotheses to explain the focal distribution of plague in Madagascar
In Madagascar, the plague focus is restricted to the central highlands. The focal persistence of plague may be explained by two different (non mutually exclusive) mechanisms, both of which will need to be validated through further field studies.
First, the differential persistence of plague may be due to different parameter values in the highlands and the lowlands, such that R0 is above unity in the highlands and below unity in the lowlands. The few comparative studies that exist have not yet shown any major difference in the values of rat-related biological parameters between lowlands and highlands [3], [31] (J.-M. Duplantier, unpublished data). However, most of the flea-related parameter values that we used have not been measured in Madagascar, and some of these parameters are among the ones that most influence the basic reproductive number of the disease. Climate differs between highlands and lowlands and may influence plague transmission by fleas [26], [61]. Moreover, the flea communities are different, as one of the flea species (S. fonquerniei) is found only in the central highlands. The two flea species may not have the same demographic and transmission characteristics, and S. fonquerniei could play a role in the endemism of plague by being responsible for a higher transmission: it has been shown to carry more Y. pestis bacilli during the plague season than X. cheopis [62]. Further studies would thus be needed to compare the two flea species experimentally.
Even if both regions had R0 above unity (whether equal or not), our results suggest that the persistence or extinction of the disease in each area may be explained by differences in the dynamics of the system, due to the presence or absence of resistant hosts and of population structure. In Madagascar, the highlands were colonized by rats from Malagasy coastal populations some 800 years ago [63], long before the introduction of plague to the island. As no resistant phenotype, even at low frequency, has been found in rats from coastal populations [34], it seems likely that resistance evolved secondarily, after the spread of plague in highland rat populations. Population genetic studies showed that Malagasy rat populations are more genetically structured in landscapes characterised by sharp topographical relief, such as those found in some regions of the highlands, than in flat areas (Brouat et al., in revision). Rat population structure may thus have been more favourable to the persistence of the disease in the highlands than in coastal areas, and for periods of time long enough to select resistance alleles. The evolution of resistance in the highlands may in turn have led to a more stable, long-term plague persistence in this area. Host population structure and host resistance could thus have acted synergistically to maintain plague in the Malagasy highlands. Testing this scenario would require more thorough theoretical studies based on the estimation of numerous biological parameters, especially in Malagasy flea populations (see above). It would also require examining the time needed for a resistance allele to invade a metapopulation as a function of its spatial structure. It is nonetheless interesting to note that a scenario of secondary evolution of host resistance within a restricted geographical range has already been identified for other diseases, such as malaria in Hawaii, for which resistance has been selected only at low altitude [64]. Also, an empirical analysis of the genetic structure of Y. pestis among prairie dogs in Arizona [25] suggests plague dispersal dynamics consistent with the above scenario. The latter study highlights two stages in plague propagation: first a phase of rapid expansion upon encountering a highly dense susceptible rodent population, then a phase of decline of the host population and extinction of the disease, unless slow and stable transmission cycles can arise through resistant hosts, or through spatially structured or low-density susceptible populations [25].
Conclusion
Using a simple model of plague propagation, we showed that both resistance and population subdivision may explain plague endemism. Madagascar may be a good illustration of how these two factors can act together, in synergy, to favour the long-term persistence of this highly virulent disease. However, further comparative field studies should aim to test our assumptions on plague establishment in Madagascar, by better assessing the parameter values in the lowlands and the highlands.
It is worth noting that several aspects of the cycle of plague transmission have been neglected in our study. Some of these could play an additional role in Madagascar, others should not have any impact, and all of them could be involved in plague persistence in other foci. The existence of multiple plague reservoirs [5], [15], [55] seems unlikely in Madagascar, as R. rattus is largely dominant in rural communities, representing at least 95% of the captures [3], [65]. Also, although plague persistence in soils may occur in very particular situations (e.g., [66]) or in steppic environments [16], [17], it has never been demonstrated in Madagascar [17]. Alternatively, heterogeneity in the phenology of host reproduction [15], the direct transmission of Y. pestis inside burrows (for example through the release of the bacillus as aerosols [18], [19]), and the possibilities that resistant rats might be infectious for a short period of time before recovering (not shown for rats but observed for mice [67]) or might release infectious fleas at their death could play additional roles in Madagascar and remain to be tested.
Supporting Information
Figure S1. Long-term plague persistence without resistant rats and without structure, based on Keeling & Gilligan's model. Equilibrium states for a susceptible population, as functions of the rats' maximal birth rate and the transmission rate, based on the model presented in [20], for two carrying capacities of the rat population ((a) and (b)). Values for the other parameters follow those presented in Table 1, and rats are assumed not to recover from plague infection (recovery set to zero in Keeling & Gilligan's model). Stable equilibrium states are shown in black, dark grey and light grey. The outcomes of the model developed by Keeling & Gilligan [20] are very similar to those obtained with our model run with the same parameter values (compare this figure with Figure 1 in the main text).
Figure S2. Long-term plague persistence with resistant rats and without structure, based on Keeling & Gilligan's model. Equilibrium states for a rat population including resistant rats, as functions of the maximal birth rate of rats and the transmission rate, based on the model presented in [20]. Parameter values are given in Table 1, and rats are assumed not to recover from plague infection (recovery set to zero in Keeling & Gilligan's model). Stable equilibrium states are shown in black, dark grey, light grey and white. The outcomes of the model developed by Keeling & Gilligan [20] are very similar to those obtained with our model run with the same parameter values (compare this figure with Figure 2 in the main text).
Figure S3. Sensitivity of the basic reproductive number of the disease, R0, to parameter values, for two carrying capacities of the rat population ((a) and (b)). The sensitivity was calculated by increasing each parameter value by 10%.
Figure S4. Dynamics of the deterministic system without resistant rats (system (S1.1) in the Supporting Text S1), for four couples of parameter values ((a)–(d)). Values for the other parameters follow those presented in Table 1. Time in years. The equilibrium reached is indicated for each panel.
Figure S5. Minimum and maximum values of the oscillations of the numbers of susceptible and infectious rats (system (S1.1), without resistant rats, in the Supporting Text S1) according to the carrying capacity, for two parameter combinations ((a) and (b)). Values for the other parameters follow those presented in Table 1. Only numbers of rats above a minimum threshold are shown.
Figure S6. Equilibrium states for susceptible rat populations in which the disease spreads without vectors, through direct transmission (system (S1.2) in the Supporting Text S1). Values for the other parameters follow those presented in Table 1. Stable equilibrium states are shown in black, dark grey and light grey.
Figure S7. Sensitivity of the equilibrium states of the system without resistant rats (system (S1.1) in the Supporting Text S1) to each parameter value, for two carrying capacities of the rat population (panels (a, c, e, g, i, k) and (b, d, f, h, j, l)), with one parameter varied per pair of panels. Each parameter value is increased by 50%, and the other parameter values follow those presented in Table 1. Stable equilibrium states are shown in black, dark grey and light grey. This figure is to be compared with Figure 1.
Figure S8. Dynamics of the deterministic system with resistant rats (system (1)), for two couples of parameter values ((a) and (b)). Values for the other parameters follow those presented in Table 1. Time in years. The equilibrium reached is indicated for each panel.
Figure S9. Sensitivity of the equilibrium states of the system with resistant rats (system (1)) to each parameter value (panels (a)–(g), one parameter per panel). Each parameter value is increased by 50%, and the other parameter values follow those in Table 1. Stable equilibrium states are shown in black, dark grey, light grey and white. This figure is to be compared with Figure 2.
Figure S10. Extinction-recolonization dynamics. Number of infectious rats through time in each subpopulation, for one of the stochastic simulations performed in Figure 4(b). In blue: stochastic results; in orange: deterministic results. Other parameter values are given in Table 1. The blue arrows indicate when plague recolonizes a subpopulation from which it had disappeared (rescue effect). The disease never goes totally extinct in the deterministic model (epidemics still occur even though the number of infectious rats passes through extremely low values between epidemics).
Figure S11. Effect of a decreased force of coupling between subpopulations. Estimated probability of persistence of susceptible rats and infectious rats through time (estimated from 60 simulations), for a metapopulation of four subpopulations with a reduced proportion of inter-subpopulation infections (force of coupling between subpopulations). Other parameter values are given in Table 1. This figure is to be compared with Figure 4(b).
Acknowledgments
We are grateful to the staff of the Plague Laboratory of the Institut Pasteur de Madagascar and to the Malagasy Health Ministry who largely contributed to the acquisition of the biological data that were used in the model, and to Alexandre Dehne-Garcia for his help with the computing cluster of the CBGP lab during preliminary analyses. The manuscript was significantly improved thanks to the comments of Charlotte Tollenaere and of two anonymous reviewers.
63. Tollenaere C, Brouat C, Duplantier JM, Rahalison L, Rahelinirina S, et al. (2010) Phylogeography of the introduced species Rattus rattus in the western Indian Ocean, with special emphasis on the colonization history of Madagascar. J Biogeogr 37: 398–410.