id | title | text | date_created | date_modified | templates | url
---|---|---|---|---|---|---
4,012 | Basel Convention | The Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, usually known as the Basel Convention, is an international treaty that was designed to reduce the movements of hazardous waste between nations, and specifically to prevent transfer of hazardous waste from developed to less developed countries. It does not, however, address the movement of radioactive waste. The convention is also intended to minimize the rate and toxicity of wastes generated, to ensure their environmentally sound management as closely as possible to the source of generation, and to assist developing countries in environmentally sound management of the hazardous and other wastes they generate.
The convention was opened for signature on 21 March 1989, and entered into force on 5 May 1992. As of June 2023, there are 191 parties to the convention. In addition, Haiti and the United States have signed the convention but not ratified it.
Following a petition urging action on the issue signed by more than a million people around the world, most of the world's countries, but not the United States, agreed in May 2019 to an amendment of the Basel Convention to include plastic waste as regulated material. Although the United States is not a party to the treaty, export shipments of plastic waste from the United States are now "criminal traffic as soon as the ships get on the high seas," according to the Basel Action Network (BAN), and carriers of such shipments may face liability, because the transportation of plastic waste is prohibited in just about every other country.
With the tightening of environmental laws (for example, RCRA) in developed nations in the 1970s, disposal costs for hazardous waste rose dramatically. At the same time, the globalization of shipping made cross-border movement of waste easier, and many less developed countries were desperate for foreign currency. Consequently, the trade in hazardous waste, particularly to poorer countries, grew rapidly. In 1990, OECD countries exported around 1.8 million tons of hazardous waste. Although most of this waste was shipped to other developed countries, a number of high-profile incidents of hazardous waste-dumping led to calls for regulation.
One of the incidents which led to the creation of the Basel Convention was the Khian Sea waste disposal incident, in which a ship carrying incinerator ash from the city of Philadelphia in the United States dumped half of its load on a beach in Haiti before being forced away. It sailed for many months, changing its name several times. Unable to unload the cargo in any port, the crew was believed to have dumped much of it at sea.
Another incident was a 1988 case in which five ships transported 8,000 barrels of hazardous waste from Italy to the small Nigerian town of Koko in exchange for $100 monthly rent which was paid to a Nigerian for the use of his farmland.
At its meeting held from 27 November to 1 December 2006, the parties to the Basel Convention focused on issues of electronic waste and the dismantling of ships.
Increased trade in recyclable materials has led to a growing market for used products such as computers, valued in the billions of dollars. At issue is determining when used computers stop being a "commodity" and become a "waste".
As of June 2023, there are 191 parties to the treaty: 188 UN member states, the Cook Islands, the European Union, and the State of Palestine. The five UN member states that are not party to the treaty are East Timor, Fiji, Haiti, South Sudan, and the United States.
Waste falls under the scope of the convention if it is within the category of wastes listed in Annex I of the convention and it exhibits one of the hazardous characteristics contained in Annex III. In other words, it must both be listed and possess a characteristic such as being explosive, flammable, toxic, or corrosive. The other way that a waste may fall under the scope of the convention is if it is defined as or considered to be a hazardous waste under the laws of the exporting country, the importing country, or any of the countries of transit.
Disposal is defined in Article 2, paragraph 4, simply by reference to Annex IV, which lists the operations understood as disposal or recovery. The examples of disposal are broad, including recovery and recycling.
Alternatively, to fall under the scope of the convention, it is sufficient for waste to be included in Annex II, which lists other wastes, such as household wastes and residue that comes from incinerating household waste.
Radioactive waste that is covered under other international control systems and wastes from the normal operation of ships are not covered.
Annex IX attempts to define wastes which are not considered hazardous wastes and which would be excluded from the scope of the Basel Convention. If, however, these wastes are contaminated with hazardous materials to an extent that causes them to exhibit an Annex III characteristic, they are not excluded.
In addition to conditions on the import and export of the above wastes, there are stringent requirements for notice, consent and tracking for movement of wastes across national boundaries. The convention places a general prohibition on the exportation or importation of wastes between parties and non-parties. The exception to this rule is where the waste is subject to another treaty that does not detract from the Basel Convention. The United States is a notable non-party to the convention and has a number of such agreements allowing the shipping of hazardous wastes to Basel Party countries.
The OECD Council also has its own control system that governs the transboundary movement of hazardous materials between OECD member countries. This allows, among other things, the OECD countries to continue trading in wastes with countries like the United States that have not ratified the Basel Convention.
Parties to the convention must honor import bans of other parties.
Article 4 of the Basel Convention calls for an overall reduction of waste generation. By encouraging countries to keep wastes within their boundaries and as close as possible to their source of generation, the internal pressures should provide incentives for waste reduction and pollution prevention. Parties are generally prohibited from exporting covered wastes to, or importing covered wastes from, non-parties to the convention.
The convention states that illegal hazardous waste traffic is criminal but contains no enforcement provisions.
According to Article 12, parties are directed to adopt a protocol that establishes liability rules and procedures that are appropriate for damage that comes from the movement of hazardous waste across borders.
The current consensus is that as space is not classed as a "country" under the specific definition, export of e-waste to non-terrestrial locations would not be covered.
After the initial adoption of the convention, some least developed countries and environmental organizations argued that it did not go far enough. Many nations and NGOs argued for a total ban on shipment of all hazardous waste to developing countries. In particular, the original convention did not prohibit waste exports to any location except Antarctica but merely required a notification and consent system known as "prior informed consent" or PIC. Further, many waste traders sought to exploit the good name of recycling and began to justify all exports as moving to recycling destinations. Many believed a full ban was needed, including on exports for recycling. These concerns led to several regional waste trade bans, including the Bamako Convention.
Lobbying at the 1995 Basel conference by developing countries, Greenpeace, and several European countries such as Denmark led to the adoption of an amendment to the convention in 1995, termed the Basel Ban Amendment. The amendment was accepted by 86 countries and the European Union but for many years did not enter into force, as that requires ratification by three-fourths of the parties to the convention. On 6 September 2019, Croatia became the 97th country to ratify the amendment, which entered into force 90 days later, on 5 December 2019. The amendment prohibits the export of hazardous waste from a list of developed (mostly OECD) countries to developing countries. The Basel Ban applies to export for any reason, including recycling. An area of special concern for advocates of the amendment was the sale of ships for salvage, shipbreaking. The Ban Amendment was strenuously opposed by a number of industry groups as well as nations including Australia and Canada. The number of ratifications required for entry into force of the Ban Amendment was long under debate: amendments to the convention enter into force after ratification by "three-fourths of the Parties who accepted them" [Art. 17.5], and the parties of the Basel Convention could not agree whether this meant three-fourths of the parties that were party to the Basel Convention when the ban was adopted, or three-fourths of the current parties of the convention [see Report of COP 9 of the Basel Convention]. The status of the amendment ratifications can be found on the Basel Secretariat's web page. The European Union fully implemented the Basel Ban in its Waste Shipment Regulation (EWSR), making it legally binding in all EU member states. Norway and Switzerland have similarly fully implemented the Basel Ban in their legislation.
In light of the blockage concerning the entry into force of the Ban Amendment, Switzerland and Indonesia launched a "Country-led Initiative" (CLI) to discuss, in an informal manner, a way forward to ensure that transboundary movements of hazardous wastes, especially to developing countries and countries with economies in transition, do not lead to unsound management of hazardous wastes. The discussion aims at identifying and finding solutions to the reasons why hazardous wastes are still brought to countries that are not able to treat them in a safe manner. It is hoped that the CLI will contribute to the realization of the objectives of the Ban Amendment. The Basel Convention's website reports on the progress of this initiative.
In the wake of popular outcry, in May 2019 most of the world's countries, but not the United States, agreed to amend the Basel Convention to include plastic waste as a regulated material. The world's oceans are estimated to contain 100 million metric tons of plastic, with up to 90% of this quantity originating from land-based sources. The United States, which produces 42 million metric tons of plastic waste annually, more than any other country in the world, opposed the amendment; since it is not a party to the treaty, it had no opportunity to vote on it to try to block it. Information about, and visual images of, wildlife such as seabirds ingesting plastic, along with scientific findings that nanoparticles penetrate the blood–brain barrier, were reported to have fueled public sentiment for coordinated, legally binding international action. Over a million people worldwide signed a petition demanding official action. Although the United States is not a party to the treaty, export shipments of plastic waste from the United States are now "criminal traffic as soon as the ships get on the high seas," according to the Basel Action Network (BAN), and carriers of such shipments may face liability, because the Basel Convention as amended in May 2019 prohibits the transportation of plastic waste to just about every other country.
The Basel Convention contains three main entries on plastic wastes, in Annexes II, VIII and IX of the convention. The Plastic Waste Amendments are now binding on 186 states. In addition to ensuring that the trade in plastic waste is more transparent and better regulated, the Basel Convention requires governments to take steps not only to ensure the environmentally sound management of plastic waste, but also to tackle plastic waste at its source.
The Basel Action Network (BAN) is a charitable civil society non-governmental organization that works as a consumer watchdog for implementation of the Basel Convention. BAN's principal aim is fighting the exportation of toxic waste, including plastic waste, from industrialized societies to developing countries. BAN is based in Seattle, Washington, United States, with a partner office in the Philippines. BAN works to curb trans-border trade in hazardous electronic waste, land dumping, incineration, and the use of prison labor.
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 IGO (license statement/permission). Text taken from Drowning in Plastics – Marine Litter and Plastic Waste Vital Graphics, United Nations Environment Programme. | 2001-08-04T05:37:49Z | 2023-11-22T05:14:46Z | [
"Template:Waste",
"Template:Infobox Treaty",
"Template:Cite web",
"Template:Cite book",
"Template:Free-content attribution",
"Template:ISBN",
"Template:Official website",
"Template:Pollution",
"Template:Authority control",
"Template:Short description",
"Template:Use dmy dates",
"Template:Pollution sidebar",
"Template:Reflist"
] | https://en.wikipedia.org/wiki/Basel_Convention |
4,013 | Bar Kokhba (album) | Bar Kokhba is a double album by John Zorn, recorded between 1994 and 1996. It features music from Zorn's Masada project, rearranged for small ensembles. It also features the original soundtrack from The Art of Remembrance – Simon Wiesenthal, a film by Hannah Heer and Werner Schmiedel (1994–95).
The AllMusic review by Marc Gilman noted: "While some compositions retain their original structure and sound, some are expanded and probed by Zorn's arrangements, and resemble avant-garde classical music more than jazz. But this is the beauty of the album; the ensembles provide a forum for Zorn to expand his compositions. The album consistently impresses."
All compositions by John Zorn | 2001-08-04T17:02:13Z | 2023-09-24T17:29:40Z | [
"Template:Album ratings",
"Template:Masada",
"Template:1990s-album-stub",
"Template:Use mdy dates",
"Template:Infobox album",
"Template:Reflist",
"Template:John Zorn",
"Template:Authority control",
"Template:About"
] | https://en.wikipedia.org/wiki/Bar_Kokhba_(album) |
4,015 | BASIC | BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages designed for ease of use. The original version was created by John G. Kemeny and Thomas E. Kurtz at Dartmouth College in 1963. They wanted to enable students in non-scientific fields to use computers. At the time, nearly all computers required writing custom software, which only scientists and mathematicians tended to learn.
In addition to the programming language, Kemeny and Kurtz developed the Dartmouth Time Sharing System (DTSS), which allowed multiple users to edit and run BASIC programs simultaneously on remote terminals. This general model became popular on minicomputer systems like the PDP-11 and Data General Nova in the late 1960s and early 1970s. Hewlett-Packard produced an entire computer line for this method of operation, introducing the HP2000 series in the late 1960s and continuing sales into the 1980s. Many early video games trace their history to one of these versions of BASIC.
The emergence of microcomputers in the mid-1970s led to the development of multiple BASIC dialects, including Microsoft BASIC in 1975. Due to the tiny main memory available on these machines, often 4 KB, a variety of Tiny BASIC dialects were also created. BASIC was available for almost any system of the era, and became the de facto programming language for home computer systems that emerged in the late 1970s. These PCs almost always had a BASIC interpreter installed by default, often in the machine's firmware or sometimes on a ROM cartridge.
BASIC declined in popularity in the 1990s, as more powerful microcomputers came to market and programming languages with advanced features (such as Pascal and C) became tenable on such computers. In 1991, Microsoft released Visual Basic, combining an updated version of BASIC with a visual forms builder. This reignited use of the language and "VB" remains a major programming language in the form of VB.NET, while a hobbyist scene for BASIC more broadly continues to exist.
John G. Kemeny was the chairman of the Dartmouth College Mathematics Department. Based largely on his reputation as an innovator in math teaching, in 1959 the College won an Alfred P. Sloan Foundation award for $500,000 to build a new department building. Thomas E. Kurtz had joined the department in 1956, and from the 1960s Kemeny and Kurtz agreed on the need for programming literacy among students outside the traditional STEM fields. Kemeny later noted that "Our vision was that every student on campus should have access to a computer, and any faculty member should be able to use a computer in the classroom whenever appropriate. It was as simple as that."
Kemeny and Kurtz had made two previous experiments with simplified languages, DARSIMCO (Dartmouth Simplified Code) and DOPE (Dartmouth Oversimplified Programming Experiment). These did not progress past a single freshman class. New experiments using Fortran and ALGOL followed, but Kurtz concluded these languages were too tricky for what they desired. As Kurtz noted, Fortran had numerous oddly-formed commands, notably an "almost impossible-to-memorize convention for specifying a loop: DO 100, I = 1, 10, 2. Is it '1, 10, 2' or '1, 2, 10', and is the comma after the line number required or not?"
Moreover, the lack of any sort of immediate feedback was a key problem; the machines of the era used batch processing and took a long time to complete a run of a program. While Kurtz was visiting MIT, John McCarthy suggested that time-sharing offered a solution; a single machine could divide up its processing time among many users, giving them the illusion of having a (slow) computer to themselves. Small programs would return results in a few seconds. This led to increasing interest in a system using time-sharing and a new language specifically for use by non-STEM students.
Kemeny wrote the first version of BASIC. The acronym BASIC comes from the name of an unpublished paper by Thomas Kurtz. The new language was heavily patterned on FORTRAN II; statements were one-to-a-line, numbers were used to indicate the target of loops and branches, and many of the commands were similar or identical to Fortran. However, the syntax was changed wherever it could be improved. For instance, the difficult-to-remember DO loop was replaced by the much easier-to-remember FOR I = 1 TO 10 STEP 2, and the line number used in the DO was instead indicated by the NEXT I. Likewise, the cryptic IF statement of Fortran, whose syntax matched a particular instruction of the machine on which it was originally written, became the simpler IF I=5 THEN GOTO 100. These changes made the language much less idiosyncratic while still having an overall structure and feel similar to the original FORTRAN.
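For illustration, the two constructs mentioned above look like this in early line-numbered BASIC (a minimal sketch, not a listing from the Dartmouth manual):

```basic
10 REM FOR...NEXT replaces Fortran's DO loop
20 FOR I = 1 TO 10 STEP 2
30 PRINT I
40 NEXT I
50 REM The simplified IF...THEN branch
60 IF I = 5 THEN GOTO 100
70 PRINT "FELL THROUGH"
80 END
100 PRINT "BRANCHED"
110 END
```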
The project received a $300,000 grant from the National Science Foundation, which was used to purchase a GE-225 computer for processing, and a Datanet-30 realtime processor to handle the Teletype Model 33 teleprinters used for input and output. A team of a dozen undergraduates worked on the project for about a year, writing both the DTSS system and the BASIC compiler. The first version of the BASIC language was released on 1 May 1964.
Initially, BASIC concentrated on supporting straightforward mathematical work, with matrix arithmetic support from its initial implementation as a batch language, and character string functionality being added by 1965. Usage in the university rapidly expanded, requiring the main CPU to be replaced by a GE-235, and still later by a GE-635. By the early 1970s there were hundreds of terminals connected to the machines at Dartmouth, some of them remotely.
Wanting use of the language to become widespread, its designers made the compiler available free of charge. In the 1960s, software became a chargeable commodity; until then, it was provided without charge as a service with expensive computers, usually available only to lease. They also made it available to high schools in the Hanover, New Hampshire, area and regionally throughout New England on Teletype Model 33 and Model 35 teleprinter terminals connected to Dartmouth via dial-up phone lines, and they put considerable effort into promoting the language. In the following years, as other dialects of BASIC appeared, Kemeny and Kurtz's original BASIC dialect became known as Dartmouth BASIC.
New Hampshire recognized the accomplishment in 2019 when it erected a highway historical marker in Hanover describing the creation of "the first user-friendly programming language".
The emergence of BASIC took place as part of a wider movement towards time-sharing systems. First conceptualized during the late 1950s, the idea became so dominant in the computer industry by the early 1960s that its proponents were speaking of a future in which users would "buy time on the computer much the same way that the average household buys power and water from utility companies".
General Electric, having worked on the Dartmouth project, wrote their own underlying operating system and launched an online time-sharing system known as Mark I. It featured BASIC as one of its primary selling points. Other companies in the emerging field quickly followed suit; Tymshare introduced SUPER BASIC in 1968, CompuServe had a version on the DEC-10 at their launch in 1969, and by the early 1970s BASIC was largely universal on general-purpose mainframe computers. Even IBM eventually joined the club with the introduction of VS-BASIC in 1973.
Although time-sharing services with BASIC were successful for a time, the widespread success predicted earlier was not to be. The emergence of minicomputers during the same period, and especially low-cost microcomputers in the mid-1970s, allowed anyone to purchase and run their own systems rather than buy online time which was typically billed at dollars per minute.
BASIC, by its very nature of being small, was naturally suited to porting to the minicomputer market, which was emerging at the same time as the time-sharing services. These machines had small main memory, perhaps as little as 4 KB in modern terminology, and lacked high-performance storage like hard drives that make compilers practical. On these systems, BASIC was normally implemented as an interpreter rather than a compiler due to its lower requirement for working memory.
A particularly important example was HP Time-Shared BASIC, which, like the original Dartmouth system, used two computers working together to implement a time-sharing system. The first, a low-end machine in the HP 2100 series, was used to control user input and save and load their programs to tape or disk. The other, a high-end version of the same underlying machine, ran the programs and generated output. For a cost of about $100,000, one could own a machine capable of running between 16 and 32 users at the same time. The system, bundled as the HP 2000, was the first mini platform to offer time-sharing and was an immediate runaway success, catapulting HP to become the third-largest vendor in the minicomputer space, behind DEC and Data General (DG).
DEC, the leader in the minicomputer space since the mid-1960s, had initially ignored BASIC. This was due to their work with RAND Corporation, who had purchased a PDP-6 to run their JOSS language, which was conceptually very similar to BASIC. This led DEC to introduce a smaller, cleaned up version of JOSS known as FOCAL, which they heavily promoted in the late 1960s. However, with timesharing systems widely offering BASIC, and all of their competition in the minicomputer space doing the same, DEC's customers were clamoring for BASIC. After management repeatedly ignored their pleas, David H. Ahl took it upon himself to buy a BASIC for the PDP-8, which was a major success in the education market. By the early 1970s, FOCAL and JOSS had been forgotten and BASIC had become almost universal in the minicomputer market. DEC would go on to introduce their updated version, BASIC-PLUS, for use on the RSTS/E time-sharing operating system.
During this period a number of simple text-based games were written in BASIC, most notably Mike Mayfield's Star Trek. David Ahl collected these, some ported from FOCAL, and published them in an educational newsletter he compiled. He later collected a number of these into book form, 101 BASIC Computer Games, published in 1973. During the same period, Ahl was involved in the creation of a small computer for education use, an early personal computer. When management refused to support the concept, Ahl left DEC in 1974 to found the seminal computer magazine, Creative Computing. The book remained popular, and was re-published on several occasions.
The introduction of the first microcomputers in the mid-1970s was the start of explosive growth for BASIC. It had the advantage that it was fairly well known to the young designers and computer hobbyists who took an interest in microcomputers, many of whom had seen BASIC on minis or mainframes. Despite Dijkstra's famous judgement in 1975, "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration", BASIC was one of the few languages that was both high-level enough to be usable by those without training and small enough to fit into the microcomputers of the day, making it the de facto standard programming language on early microcomputers.
The first microcomputer version of BASIC was co-written by Bill Gates, Paul Allen and Monte Davidoff for their newly formed company, Micro-Soft. This was released by MITS in punch tape format for the Altair 8800 shortly after the machine itself, immediately cementing BASIC as the primary language of early microcomputers. Members of the Homebrew Computer Club began circulating copies of the program, causing Gates to write his Open Letter to Hobbyists, complaining about this early example of software piracy.
Partially in response to Gates's letter, and partially to make an even smaller BASIC that would run usefully on 4 KB machines, Bob Albrecht urged Dennis Allison to write their own variation of the language. How to design and implement a stripped-down version of an interpreter for the BASIC language was covered in articles by Allison in the first three quarterly issues of the People's Computer Company newsletter published in 1975 and implementations with source code published in Dr. Dobb's Journal of Tiny BASIC Calisthenics & Orthodontia: Running Light Without Overbyte. This led to a wide variety of Tiny BASICs with added features or other improvements, with versions from Tom Pittman and Li-Chen Wang becoming particularly well known.
Micro-Soft, by this time Microsoft, ported their interpreter for the MOS 6502, which quickly became one of the most popular microprocessors of the 8-bit era. When new microcomputers began to appear, notably the "1977 trinity" of the TRS-80, Commodore PET and Apple II, they either included a version of the MS code, or quickly introduced new models with it. Ohio Scientific's personal computers also joined this trend at that time. By 1978, MS BASIC was a de facto standard and practically every home computer of the 1980s included it in ROM. Upon boot, a BASIC interpreter in direct mode was presented.
Commodore Business Machines included Commodore BASIC, based on Microsoft BASIC. The Apple II and TRS-80 each had two versions of BASIC, a smaller introductory version introduced with the initial releases of the machines and an MS-based version introduced as interest in the platforms increased. As new companies entered the field, additional versions were added that subtly changed the BASIC family. The Atari 8-bit family had its own Atari BASIC that was modified in order to fit on an 8 KB ROM cartridge. Sinclair BASIC was introduced in 1980 with the Sinclair ZX80, and was later extended for the Sinclair ZX81 and the Sinclair ZX Spectrum. The BBC published BBC BASIC, developed by Acorn Computers Ltd, incorporating many extra structured programming keywords and advanced floating-point operation features.
As the popularity of BASIC grew in this period, computer magazines published complete source code in BASIC for video games, utilities, and other programs. Given BASIC's straightforward nature, it was a simple matter to type in the code from the magazine and execute the program. Different magazines were published featuring programs for specific computers, though some BASIC programs were considered universal and could be used in machines running any variant of BASIC (sometimes with minor adaptations). Many books of type-in programs were also available, and in particular, Ahl published versions of the original 101 BASIC games converted into the Microsoft dialect and published it from Creative Computing as BASIC Computer Games. This book, and its sequels, provided hundreds of ready-to-go programs that could be easily converted to practically any BASIC-running platform. The book reached the stores in 1978, just as the home computer market was starting off, and it became the first million-selling computer book. Later packages, such as Learn to Program BASIC would also have gaming as an introductory focus. On the business-focused CP/M computers which soon became widespread in small business environments, Microsoft BASIC (MBASIC) was one of the leading applications.
In 1978, David Lien published the first edition of The BASIC Handbook: An Encyclopedia of the BASIC Computer Language, documenting keywords across over 78 different computers. By 1981, the second edition documented keywords from over 250 different computers, showcasing the explosive growth of the microcomputer era.
When IBM was designing the IBM PC, they followed the paradigm of existing home computers in having a built-in BASIC interpreter. They sourced this from Microsoft – IBM Cassette BASIC – but Microsoft also produced several other versions of BASIC for MS-DOS/PC DOS including IBM Disk BASIC (BASIC D), IBM BASICA (BASIC A), GW-BASIC (a BASICA-compatible version that did not need IBM's ROM) and QBasic, all typically bundled with the machine. In addition they produced the Microsoft BASIC Compiler aimed at professional programmers. Turbo Pascal-publisher Borland published Turbo Basic 1.0 in 1985 (successor versions are still being marketed under the name PowerBASIC). On Unix-like systems, specialized implementations were created such as XBasic and X11-Basic. XBasic was ported to Microsoft Windows as XBLite, and cross-platform variants such as SmallBasic, yabasic, Bywater BASIC, nuBasic, MyBasic, Logic Basic, Liberty BASIC, and wxBasic emerged. FutureBASIC and Chipmunk Basic meanwhile targeted the Apple Macintosh.
These later variations introduced many extensions, such as improved string manipulation and graphics support, access to the file system and additional data types. More important were the facilities for structured programming, including additional control structures and proper subroutines supporting local variables. However, by the latter half of the 1980s, users were increasingly using pre-made applications written by others rather than learning programming themselves; while professional programmers now had a wide range of more advanced languages available on small computers. C and later C++ became the languages of choice for professional "shrink wrap" application development.
A niche that BASIC continued to fill was for hobbyist video game development, as game creation systems and readily available game engines were still in their infancy. The Atari ST had STOS BASIC while the Amiga had AMOS BASIC for this purpose. Microsoft first exhibited BASIC for game development with DONKEY.BAS for GW-BASIC, and later GORILLA.BAS and NIBBLES.BAS for Quick Basic. QBasic maintained an active game development community, which helped later spawn the QB64 and FreeBASIC implementations. In 2013 a game written in QBasic and compiled with QB64 for modern computers entitled Black Annex was released on Steam. Blitz Basic, Dark Basic, SdlBasic, Super Game System Basic, RCBasic, PlayBASIC, CoolBasic, AllegroBASIC, ethosBASIC, NaaLaa, GLBasic and Basic4GL further filled this demand, right up to the modern AppGameKit, Monkey 2 and Cerberus-X.
In 1991, Microsoft introduced Visual Basic, an evolutionary development of QuickBASIC. It included constructs from that language such as block-structured control statements, parameterized subroutines and optional static typing, as well as object-oriented constructs from other languages such as "With" and "For Each". The language retained some compatibility with its predecessors, such as the Dim keyword for declarations, "Gosub"/Return statements, and optional line numbers which could be used to locate errors. An important driver for the development of Visual Basic was its role as the new macro language for Microsoft Excel, a spreadsheet program. To the surprise of many at Microsoft, who still initially marketed it as a language for hobbyists, the language came into widespread use for small custom business applications shortly after the release of VB version 3.0, which is widely considered the first relatively stable version. Microsoft also spun it off as Visual Basic for Applications and Embedded Visual Basic.
While many advanced programmers still scoffed at its use, VB met the needs of small businesses efficiently as by that time, computers running Windows 3.1 had become fast enough that many business-related processes could be completed "in the blink of an eye" even using a "slow" language, as long as large amounts of data were not involved. Many small business owners found they could create their own small, yet useful applications in a few evenings to meet their own specialized needs. Eventually, during the lengthy lifetime of VB3, knowledge of Visual Basic had become a marketable job skill. Microsoft also produced VBScript in 1996 and Visual Basic .NET in 2001. The latter has essentially the same power as C# and Java but with syntax that reflects the original Basic language, and also features some cross-platform capability through implementations such as Mono-Basic. The IDE, with its event-driven GUI builder, was also influential on other tools, most notably Borland Software's Delphi for Object Pascal and its own descendants such as Lazarus.
Mainstream support for the final version 6.0 of the original Visual Basic ended on March 31, 2005, followed by extended support until March 2008. Owing to its persistent popularity, third-party attempts to further support it, such as Rubberduck and ModernVB, exist. On February 2, 2017, Microsoft announced that development on VB.NET would no longer be in parallel with that of C#, and on March 11, 2020, it was announced that evolution of the VB.NET language had also concluded. Even so, the language was still supported, and the third-party Mercury extension has since been produced. Meanwhile, competitors exist such as B4X, RAD Basic, twinBASIC, VisualFBEditor, InForm, Xojo, and Gambas.
Many other BASIC dialects have also sprung up since 1990, including the open source QB64 and FreeBASIC, inspired by QBasic, and the Visual Basic-styled RapidQ, HBasic, Basic For Qt and Gambas. Modern commercial incarnations include PureBasic, PowerBASIC, Xojo, Monkey X and True BASIC (the direct successor to Dartmouth BASIC from a company controlled by Kurtz).
Several web-based simple BASIC interpreters also now exist, including Microsoft's Small Basic and Google's wwwBASIC. A number of compilers also exist that convert BASIC into JavaScript, such as JSBasic which re-implements Applesoft BASIC, Spider BASIC, and NS Basic.
Building from earlier efforts such as Mobile Basic and CellularBASIC, many dialects are now available for smartphones and tablets. Through the Apple App Store for iOS options include Hand BASIC, Learn BASIC, Smart Basic based on Minimal BASIC, Basic! by miSoft, and BASIC by Anastasia Kovba. The Google Play store for Android meanwhile has the touchscreen focused Touch Basic, B4A, the RFO BASIC! interpreter based on Dartmouth Basic, and adaptations of SmallBasic, BBC Basic, Tiny Basic, X11-Basic, and NS Basic.
On game consoles, an application for the Nintendo 3DS and Nintendo DSi called Petit Computer allows for programming in a slightly modified version of BASIC with DS button support. A version has also been released for the Nintendo Switch, which has also received a version of the Fuze Code System, a BASIC variant first implemented on a custom Raspberry Pi machine. Previously, BASIC was made available on consoles as Family BASIC (for the Nintendo Famicom) and PSX Chipmunk Basic (for the original PlayStation), while yabasic was ported to the PlayStation 2 and FreeBASIC to the original Xbox, with Dragon BASIC created for homebrew on the Game Boy Advance and Nintendo DS.
Variants of BASIC are available on graphing and otherwise programmable calculators made by Texas Instruments (TI-BASIC), HP (HP BASIC), Casio (Casio BASIC), and others.
QBasic, a version of Microsoft QuickBASIC without the linker to make EXE files, is present in the Windows NT and DOS-Windows 95 streams of operating systems and can be obtained for more recent releases like Windows 7, which do not include it. Prior to DOS 5, the BASIC interpreter was GW-BASIC. QuickBASIC is part of a series of three languages issued by Microsoft for the home and office power user and small-scale professional development; QuickC and QuickPascal are the other two. For Windows 95 and 98, which do not have QBasic installed by default, it can be copied from the installation disc, which has a set of directories for old and optional software; other missing commands, like Exe2Bin, are in these same directories.
The various Microsoft, Lotus, and Corel office suites and related products are programmable with Visual Basic in one form or another, including LotusScript, which is very similar to VBA 6. The Host Explorer terminal emulator uses WWB as a macro language; more recently, the program and the suite in which it is contained are programmable in an in-house Basic variant known as Hummingbird Basic. The VBScript variant is used for programming web content, Outlook 97, Internet Explorer, and the Windows Script Host. WSH also has a Visual Basic for Applications (VBA) engine installed as the third of the default engines along with VBScript, JScript, and the numerous proprietary or open-source engines which can be installed, like PerlScript, a couple of Rexx-based engines, Python, Ruby, Tcl, Delphi, XLNT, PHP, and others; meaning that the two versions of Basic can be used along with the other mentioned languages, as well as LotusScript, in a WSF file, through the component object model, and other WSH and VBA constructions. VBScript is one of the languages that can be accessed by the 4DOS, 4NT, and Take Command enhanced shells. SaxBasic and WWB are also very similar to the Visual Basic line of Basic implementations. The pre-Office 97 macro language for Microsoft Word is known as WordBASIC. Excel 4 and 5 use Visual Basic itself as a macro language. Chipmunk Basic, an old-school interpreter similar to BASICs of the 1970s, is available for Linux, Microsoft Windows, and macOS.
The ubiquity of BASIC interpreters on personal computers was such that textbooks once included simple "Try It In BASIC" exercises that encouraged students to experiment with mathematical and computational concepts on classroom or home computers. Popular computer magazines of the day typically included type-in programs.
Futurist and sci-fi writer David Brin mourned the loss of ubiquitous BASIC in a 2006 Salon article as have others who first used computers during this era. In turn, the article prompted Microsoft to develop and release Small Basic; it also inspired similar projects like Basic-256. Dartmouth held a 50th anniversary celebration for BASIC on 1 May 2014, as did other organisations; at least one organisation of VBA programmers organised a 35th anniversary observance in 1999.
Dartmouth College celebrated the 50th anniversary of the BASIC language with a day of events on April 30, 2014. A short documentary film was produced for the event.
Minimal versions of BASIC had only integer variables and one- or two-letter variable names, which minimized the demands on limited and expensive memory (RAM). More powerful versions had floating-point arithmetic, and variables could be labelled with names six or more characters long. There were some problems and restrictions in early implementations; for example, Applesoft BASIC allowed variable names to be several characters long, but only the first two were significant, so it was possible to inadvertently write a program with variables "LOSS" and "LOAN" that would be treated as the same; assigning a value to "LOAN" would silently overwrite the value intended as "LOSS". Keywords could not appear in variable names in many early BASICs; "SCORE" would be interpreted as "SC" OR "E", where OR was a keyword. In many microcomputer dialects, string variables are distinguished by a $ suffixed to their name as a sigil, and string values are often delimited by "double quotation marks". Arrays in BASIC could contain integers, floating-point, or string variables.
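On such an interpreter, the collision can be demonstrated in three lines (a minimal sketch of the two-significant-character behaviour):

10 LOSS = 100
20 LOAN = 5
30 PRINT LOSS

Because both names reduce to "LO", line 30 prints 5 rather than 100.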
Some dialects of BASIC supported matrices and matrix operations, which can be used to solve sets of simultaneous linear algebraic equations. These dialects would directly support matrix operations such as assignment, addition, multiplication (of compatible matrix types), and evaluation of a determinant. Many microcomputer BASICs did not support this data type; matrix operations were still possible, but had to be programmed explicitly on array elements.
New BASIC programmers on a home computer might start with a simple program, perhaps using the language's PRINT statement to display a message on the screen; a well-known and often-replicated example is Kernighan and Ritchie's "Hello, World!" program:
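10 PRINT "Hello, World!"
20 END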
An infinite loop could be used to fill the display with the message:
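10 PRINT "Hello, World!"
20 GOTO 10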
Note that the END statement is optional and has no action in most dialects of BASIC. It was not always included, as is the case in this example. This same program can be modified to print a fixed number of messages using the common FOR...NEXT statement:
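10 FOR N = 1 TO 10
20 PRINT "Hello, World!"
30 NEXT N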
Most home computer BASIC versions, such as MSX BASIC and GW-BASIC, supported simple data types, loop cycles, and arrays. The following example is written for GW-BASIC, but will work in most versions of BASIC with minimal changes:
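(One possible rendering; the original listing was not preserved here, so prompts, variable names, and line numbers are illustrative.)

10 INPUT "What is your name: ", U$
20 PRINT "Hello "; U$
30 INPUT "How many stars do you want: ", N
40 S$ = ""
50 FOR I = 1 TO N
60 S$ = S$ + "*"
70 NEXT I
80 PRINT S$
90 INPUT "Do you want more stars? ", A$
100 IF LEN(A$) = 0 THEN 90
110 A$ = LEFT$(A$, 1)
120 IF A$ = "Y" OR A$ = "y" THEN 30
130 PRINT "Goodbye "; U$
140 END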
The resulting dialog might resemble:
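(A sample run of the sketch above; exact prompts vary by dialect.)

What is your name: Mike
Hello Mike
How many stars do you want: 7
*******
Do you want more stars? yes
How many stars do you want: 3
***
Do you want more stars? no
Goodbye Mike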
The original Dartmouth Basic was unusual in having a matrix keyword, MAT. Although not implemented by most later microprocessor derivatives, it is used in this example from the 1968 manual which averages the numbers that are input:
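(Reconstructed along the lines of the manual's listing; details such as line numbers may differ.)

5 LET S = 0
10 MAT INPUT V
20 LET N = NUM
30 IF N = 0 THEN 99
40 FOR I = 1 TO N
45 LET S = S + V(I)
50 NEXT I
60 PRINT S/N
70 GO TO 5
99 END

Here MAT INPUT V reads a whole list of numbers typed on one line, and the NUM function returns how many were entered; an empty line ends the run.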
Second-generation BASICs (for example, VAX Basic, SuperBASIC, True BASIC, QuickBASIC, BBC BASIC, Pick BASIC, PowerBASIC, Liberty BASIC, QB64 and (arguably) COMAL) introduced a number of features into the language, primarily related to structured and procedure-oriented programming. Usually, line numbering is omitted from the language and replaced with labels (for GOTO) and procedures to encourage easier and more flexible design. In addition, keywords and structures to support repetition, selection, and procedures with local variables were introduced.
The following example is in Microsoft QuickBASIC:
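(A sketch of the star-printing program above in structured form; identifier names are illustrative.)

INPUT "What is your name: ", UserName$
PRINT "Hello "; UserName$
DO
    INPUT "How many stars do you want: ", NumStars
    Stars$ = STRING$(NumStars, "*")
    PRINT Stars$
    DO
        INPUT "Do you want more stars? ", Answer$
    LOOP UNTIL Answer$ <> ""
    Answer$ = LEFT$(Answer$, 1)
LOOP WHILE UCASE$(Answer$) = "Y"
PRINT "Goodbye "; UserName$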
Third-generation BASIC dialects such as Visual Basic, Xojo, Gambas, StarOffice Basic, BlitzMax and PureBasic introduced features to support object-oriented and event-driven programming paradigms. Most built-in procedures and functions are now represented as methods of standard objects rather than operators. Also, the operating system became increasingly accessible to the BASIC language.
The following example is in Visual Basic .NET: | [
{
"paragraph_id": 0,
"text": "BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages designed for ease of use. The original version was created by John G. Kemeny and Thomas E. Kurtz at Dartmouth College in 1963. They wanted to enable students in non-scientific fields to use computers. At the time, nearly all computers required writing custom software, which only scientists and mathematicians tended to learn.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In addition to the programming language, Kemeny and Kurtz developed the Dartmouth Time Sharing System (DTSS), which allowed multiple users to edit and run BASIC programs simultaneously on remote terminals. This general model became popular on minicomputer systems like the PDP-11 and Data General Nova in the late 1960s and early 1970s. Hewlett-Packard produced an entire computer line for this method of operation, introducing the HP2000 series in the late 1960s and continuing sales into the 1980s. Many early video games trace their history to one of these versions of BASIC.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The emergence of microcomputers in the mid-1970s led to the development of multiple BASIC dialects, including Microsoft BASIC in 1975. Due to the tiny main memory available on these machines, often 4 KB, a variety of Tiny BASIC dialects were also created. BASIC was available for almost any system of the era, and became the de facto programming language for home computer systems that emerged in the late 1970s. These PCs almost always had a BASIC interpreter installed by default, often in the machine's firmware or sometimes on a ROM cartridge.",
"title": ""
},
{
"paragraph_id": 3,
"text": "BASIC declined in popularity in the 1990s, as more powerful microcomputers came to market and programming languages with advanced features (such as Pascal and C) became tenable on such computers. In 1991, Microsoft released Visual Basic, combining an updated version of BASIC with a visual forms builder. This reignited use of the language and \"VB\" remains a major programming language in the form of VB.NET, while a hobbyist scene for BASIC more broadly continues to exist.",
"title": ""
},
{
"paragraph_id": 4,
"text": "John G. Kemeny was the chairman of the Dartmouth College Mathematics Department. Based largely on his reputation as an innovator in math teaching, in 1959 the College won an Alfred P. Sloan Foundation award for $500,000 to build a new department building. Thomas E. Kurtz had joined the department in 1956, and from the 1960s Kemeny and Kurtz agreed on the need for programming literacy among students outside the traditional STEM fields. Kemeny later noted that \"Our vision was that every student on campus should have access to a computer, and any faculty member should be able to use a computer in the classroom whenever appropriate. It was as simple as that.\"",
"title": "Origin"
},
{
"paragraph_id": 5,
"text": "Kemeny and Kurtz had made two previous experiments with simplified languages, DARSIMCO (Dartmouth Simplified Code) and DOPE (Dartmouth Oversimplified Programming Experiment). These did not progress past a single freshman class. New experiments using Fortran and ALGOL followed, but Kurtz concluded these languages were too tricky for what they desired. As Kurtz noted, Fortran had numerous oddly-formed commands, notably an \"almost impossible-to-memorize convention for specifying a loop: DO 100, I = 1, 10, 2. Is it '1, 10, 2' or '1, 2, 10', and is the comma after the line number required or not?\"",
"title": "Origin"
},
{
"paragraph_id": 6,
"text": "Moreover, the lack of any sort of immediate feedback was a key problem; the machines of the era used batch processing and took a long time to complete a run of a program. While Kurtz was visiting MIT, John McCarthy suggested that time-sharing offered a solution; a single machine could divide up its processing time among many users, giving them the illusion of having a (slow) computer to themselves. Small programs would return results in a few seconds. This led to increasing interest in a system using time-sharing and a new language specifically for use by non-STEM students.",
"title": "Origin"
},
{
"paragraph_id": 7,
"text": "Kemeny wrote the first version of BASIC. The acronym BASIC comes from the name of an unpublished paper by Thomas Kurtz. The new language was heavily patterned on FORTRAN II; statements were one-to-a-line, numbers were used to indicate the target of loops and branches, and many of the commands were similar or identical to Fortran. However, the syntax was changed wherever it could be improved. For instance, the difficult to remember DO loop was replaced by the much easier to remember FOR I = 1 TO 10 STEP 2, and the line number used in the DO was instead indicated by the NEXT I. Likewise, the cryptic IF statement of Fortran, whose syntax matched a particular instruction of the machine on which it was originally written, became the simpler IF I=5 THEN GOTO 100. These changes made the language much less idiosyncratic while still having an overall structure and feel similar to the original FORTRAN.",
"title": "Origin"
},
{
"paragraph_id": 8,
"text": "The project received a $300,000 grant from the National Science Foundation, which was used to purchase a GE-225 computer for processing, and a Datanet-30 realtime processor to handle the Teletype Model 33 teleprinters used for input and output. A team of a dozen undergraduates worked on the project for about a year, writing both the DTSS system and the BASIC compiler. The first version BASIC language was released on 1 May 1964.",
"title": "Origin"
},
{
"paragraph_id": 9,
"text": "Initially, BASIC concentrated on supporting straightforward mathematical work, with matrix arithmetic support from its initial implementation as a batch language, and character string functionality being added by 1965. Usage in the university rapidly expanded, requiring the main CPU to be replaced by a GE-235, and still later by a GE-635. By the early 1970s there were hundreds of terminals connected to the machines at Dartmouth, some of them remotely.",
"title": "Origin"
},
{
"paragraph_id": 10,
"text": "Wanting use of the language to become widespread, its designers made the compiler available free of charge. In the 1960s, software became a chargeable commodity; until then, it was provided without charge as a service with expensive computers, usually available only to lease. They also made it available to high schools in the Hanover, New Hampshire, area and regionally throughout New England on Teletype Model 33 and Model 35 teleprinter terminals connected to Dartmouth via dial-up phone lines, and they put considerable effort into promoting the language. In the following years, as other dialects of BASIC appeared, Kemeny and Kurtz's original BASIC dialect became known as Dartmouth BASIC.",
"title": "Origin"
},
{
"paragraph_id": 11,
"text": "New Hampshire recognized the accomplishment in 2019 when it erected a highway historical marker in Hanover describing the creation of \"the first user-friendly programming language\".",
"title": "Origin"
},
{
"paragraph_id": 12,
"text": "The emergence of BASIC took place as part of a wider movement towards time-sharing systems. First conceptualized during the late 1950s, the idea became so dominant in the computer industry by the early 1960s that its proponents were speaking of a future in which users would \"buy time on the computer much the same way that the average household buys power and water from utility companies\".",
"title": "Spread on time-sharing services"
},
{
"paragraph_id": 13,
"text": "General Electric, having worked on the Dartmouth project, wrote their own underlying operating system and launched an online time-sharing system known as Mark I. It featured BASIC as one of its primary selling points. Other companies in the emerging field quickly followed suit; Tymshare introduced SUPER BASIC in 1968, CompuServe had a version on the DEC-10 at their launch in 1969, and by the early 1970s BASIC was largely universal on general-purpose mainframe computers. Even IBM eventually joined the club with the introduction of VS-BASIC in 1973.",
"title": "Spread on time-sharing services"
},
{
"paragraph_id": 14,
"text": "Although time-sharing services with BASIC were successful for a time, the widespread success predicted earlier was not to be. The emergence of minicomputers during the same period, and especially low-cost microcomputers in the mid-1970s, allowed anyone to purchase and run their own systems rather than buy online time which was typically billed at dollars per minute.",
"title": "Spread on time-sharing services"
},
{
"paragraph_id": 15,
"text": "BASIC, by its very nature of being small, was naturally suited to porting to the minicomputer market, which was emerging at the same time as the time-sharing services. These machines had small main memory, perhaps as little as 4 KB in modern terminology, and lacked high-performance storage like hard drives that make compilers practical. On these systems, BASIC was normally implemented as an interpreter rather than a compiler due to its lower requirement for working memory.",
"title": "Spread on minicomputers"
},
{
"paragraph_id": 16,
"text": "A particularly important example was HP Time-Shared BASIC, which, like the original Dartmouth system, used two computers working together to implement a time-sharing system. The first, a low-end machine in the HP 2100 series, was used to control user input and save and load their programs to tape or disk. The other, a high-end version of the same underlying machine, ran the programs and generated output. For a cost of about $100,000, one could own a machine capable of running between 16 and 32 users at the same time. The system, bundled as the HP 2000, was the first mini platform to offer time-sharing and was an immediate runaway success, catapulting HP to become the third-largest vendor in the minicomputer space, behind DEC and Data General (DG).",
"title": "Spread on minicomputers"
},
{
"paragraph_id": 17,
"text": "DEC, the leader in the minicomputer space since the mid-1960s, had initially ignored BASIC. This was due to their work with RAND Corporation, who had purchased a PDP-6 to run their JOSS language, which was conceptually very similar to BASIC. This led DEC to introduce a smaller, cleaned up version of JOSS known as FOCAL, which they heavily promoted in the late 1960s. However, with timesharing systems widely offering BASIC, and all of their competition in the minicomputer space doing the same, DEC's customers were clamoring for BASIC. After management repeatedly ignored their pleas, David H. Ahl took it upon himself to buy a BASIC for the PDP-8, which was a major success in the education market. By the early 1970s, FOCAL and JOSS had been forgotten and BASIC had become almost universal in the minicomputer market. DEC would go on to introduce their updated version, BASIC-PLUS, for use on the RSTS/E time-sharing operating system.",
"title": "Spread on minicomputers"
},
{
"paragraph_id": 18,
"text": "During this period a number of simple text-based games were written in BASIC, most notably Mike Mayfield's Star Trek. David Ahl collected these, some ported from FOCAL, and published them in an educational newsletter he compiled. He later collected a number of these into book form, 101 BASIC Computer Games, published in 1973. During the same period, Ahl was involved in the creation of a small computer for education use, an early personal computer. When management refused to support the concept, Ahl left DEC in 1974 to found the seminal computer magazine, Creative Computing. The book remained popular, and was re-published on several occasions.",
"title": "Spread on minicomputers"
},
{
"paragraph_id": 19,
"text": "The introduction of the first microcomputers in the mid-1970s was the start of explosive growth for BASIC. It had the advantage that it was fairly well known to the young designers and computer hobbyists who took an interest in microcomputers, many of whom had seen BASIC on minis or mainframes. Despite Dijkstra's famous judgement in 1975, \"It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration\", BASIC was one of the few languages that was both high-level enough to be usable by those without training and small enough to fit into the microcomputers of the day, making it the de facto standard programming language on early microcomputers.",
"title": "Explosive growth: the home computer era"
},
{
"paragraph_id": 20,
"text": "The first microcomputer version of BASIC was co-written by Bill Gates, Paul Allen and Monte Davidoff for their newly formed company, Micro-Soft. This was released by MITS in punch tape format for the Altair 8800 shortly after the machine itself, immediately cementing BASIC as the primary language of early microcomputers. Members of the Homebrew Computer Club began circulating copies of the program, causing Gates to write his Open Letter to Hobbyists, complaining about this early example of software piracy.",
"title": "Explosive growth: the home computer era"
},
{
"paragraph_id": 21,
"text": "Partially in response to Gates's letter, and partially to make an even smaller BASIC that would run usefully on 4 KB machines, Bob Albrecht urged Dennis Allison to write their own variation of the language. How to design and implement a stripped-down version of an interpreter for the BASIC language was covered in articles by Allison in the first three quarterly issues of the People's Computer Company newsletter published in 1975 and implementations with source code published in Dr. Dobb's Journal of Tiny BASIC Calisthenics & Orthodontia: Running Light Without Overbyte. This led to a wide variety of Tiny BASICs with added features or other improvements, with versions from Tom Pittman and Li-Chen Wang becoming particularly well known.",
"title": "Explosive growth: the home computer era"
},
{
"paragraph_id": 22,
"text": "Micro-Soft, by this time Microsoft, ported their interpreter for the MOS 6502, which quickly become one of the most popular microprocessors of the 8-bit era. When new microcomputers began to appear, notably the \"1977 trinity\" of the TRS-80, Commodore PET and Apple II, they either included a version of the MS code, or quickly introduced new models with it. Ohio Scientific's personal computers also joined this trend at that time. By 1978, MS BASIC was a de facto standard and practically every home computer of the 1980s included it in ROM. Upon boot, a BASIC interpreter in direct mode was presented.",
"title": "Explosive growth: the home computer era"
},
{
"paragraph_id": 23,
"text": "Commodore Business Machines included Commodore BASIC, based on Microsoft BASIC. The Apple II and TRS-80 each had two versions of BASIC, a smaller introductory version introduced with the initial releases of the machines and an MS-based version introduced as interest in the platforms increased. As new companies entered the field, additional versions were added that subtly changed the BASIC family. The Atari 8-bit family had its own Atari BASIC that was modified in order to fit on an 8 KB ROM cartridge. Sinclair BASIC was introduced in 1980 with the Sinclair ZX80, and was later extended for the Sinclair ZX81 and the Sinclair ZX Spectrum. The BBC published BBC BASIC, developed by Acorn Computers Ltd, incorporating many extra structured programming keywords and advanced floating-point operation features.",
"title": "Explosive growth: the home computer era"
},
{
"paragraph_id": 24,
"text": "As the popularity of BASIC grew in this period, computer magazines published complete source code in BASIC for video games, utilities, and other programs. Given BASIC's straightforward nature, it was a simple matter to type in the code from the magazine and execute the program. Different magazines were published featuring programs for specific computers, though some BASIC programs were considered universal and could be used in machines running any variant of BASIC (sometimes with minor adaptations). Many books of type-in programs were also available, and in particular, Ahl published versions of the original 101 BASIC games converted into the Microsoft dialect and published it from Creative Computing as BASIC Computer Games. This book, and its sequels, provided hundreds of ready-to-go programs that could be easily converted to practically any BASIC-running platform. The book reached the stores in 1978, just as the home computer market was starting off, and it became the first million-selling computer book. Later packages, such as Learn to Program BASIC would also have gaming as an introductory focus. On the business-focused CP/M computers which soon became widespread in small business environments, Microsoft BASIC (MBASIC) was one of the leading applications.",
"title": "Explosive growth: the home computer era"
},
{
"paragraph_id": 25,
"text": "In 1978, David Lien published the first edition of The BASIC Handbook: An Encyclopedia of the BASIC Computer Language, documenting keywords across over 78 different computers. By 1981, the second edition documented keywords from over 250 different computers, showcasing the explosive growth of the microcomputer era.",
"title": "Explosive growth: the home computer era"
},
{
"paragraph_id": 26,
"text": "When IBM was designing the IBM PC, they followed the paradigm of existing home computers in having a built-in BASIC interpreter. They sourced this from Microsoft – IBM Cassette BASIC – but Microsoft also produced several other versions of BASIC for MS-DOS/PC DOS including IBM Disk BASIC (BASIC D), IBM BASICA (BASIC A), GW-BASIC (a BASICA-compatible version that did not need IBM's ROM) and QBasic, all typically bundled with the machine. In addition they produced the Microsoft BASIC Compiler aimed at professional programmers. Turbo Pascal-publisher Borland published Turbo Basic 1.0 in 1985 (successor versions are still being marketed under the name PowerBASIC). On Unix-like systems, specialized implementations were created such as XBasic and X11-Basic. XBasic was ported to Microsoft Windows as XBLite, and cross-platform variants such as SmallBasic, yabasic, Bywater BASIC, nuBasic, MyBasic, Logic Basic, Liberty BASIC, and wxBasic emerged. FutureBASIC and Chipmunk Basic meanwhile targeted the Apple Macintosh.",
"title": "IBM PC and compatibles"
},
{
"paragraph_id": 27,
"text": "These later variations introduced many extensions, such as improved string manipulation and graphics support, access to the file system and additional data types. More important were the facilities for structured programming, including additional control structures and proper subroutines supporting local variables. However, by the latter half of the 1980s, users were increasingly using pre-made applications written by others rather than learning programming themselves; while professional programmers now had a wide range of more advanced languages available on small computers. C and later C++ became the languages of choice for professional \"shrink wrap\" application development.",
"title": "IBM PC and compatibles"
},
{
"paragraph_id": 28,
"text": "A niche that BASIC continued to fill was for hobbyist video game development, as game creation systems and readily available game engines were still in their infancy. The Atari ST had STOS BASIC while the Amiga had AMOS BASIC for this purpose. Microsoft first exhibited BASIC for game development with DONKEY.BAS for GW-BASIC, and later GORILLA.BAS and NIBBLES.BAS for Quick Basic. QBasic maintained an active game development community, which helped later spawn the QB64 and FreeBASIC implementations. In 2013 a game written in QBasic and compiled with QB64 for modern computers entitled Black Annex was released on Steam. Blitz Basic, Dark Basic, SdlBasic, Super Game System Basic, RCBasic, PlayBASIC, CoolBasic, AllegroBASIC, ethosBASIC, NaaLaa, GLBasic and Basic4GL further filled this demand, right up to the modern AppGameKit, Monkey 2 and Cerberus-X.",
"title": "IBM PC and compatibles"
},
{
"paragraph_id": 29,
"text": "In 1991, Microsoft introduced Visual Basic, an evolutionary development of QuickBASIC. It included constructs from that language such as block-structured control statements, parameterized subroutines and optional static typing as well as object-oriented constructs from other languages such as \"With\" and \"For Each\". The language retained some compatibility with its predecessors, such as the Dim keyword for declarations, \"Gosub\"/Return statements and optional line numbers which could be used to locate errors. An important driver for the development of Visual Basic was as the new macro language for Microsoft Excel, a spreadsheet program. To the surprise of many at Microsoft who still initially marketed it as a language for hobbyists, the language came into widespread use for small custom business applications shortly after the release of VB version 3.0, which is widely considered the first relatively stable version. Microsoft also spun it off as Visual Basic for Applications and Embedded Visual Basic.",
"title": "Visual Basic"
},
{
"paragraph_id": 30,
"text": "While many advanced programmers still scoffed at its use, VB met the needs of small businesses efficiently as by that time, computers running Windows 3.1 had become fast enough that many business-related processes could be completed \"in the blink of an eye\" even using a \"slow\" language, as long as large amounts of data were not involved. Many small business owners found they could create their own small, yet useful applications in a few evenings to meet their own specialized needs. Eventually, during the lengthy lifetime of VB3, knowledge of Visual Basic had become a marketable job skill. Microsoft also produced VBScript in 1996 and Visual Basic .NET in 2001. The latter has essentially the same power as C# and Java but with syntax that reflects the original Basic language, and also features some cross-platform capability through implementations such as Mono-Basic. The IDE, with its event-driven GUI builder, was also influential on other tools, most notably Borland Software's Delphi for Object Pascal and its own descendants such as Lazarus.",
"title": "Visual Basic"
},
{
"paragraph_id": 31,
"text": "Mainstream support for the final version 6.0 of the original Visual Basic ended on March 31, 2005, followed by extended support in March 2008. Owing to its persistent remaining popularity, third-party attempts to further support it, such as Rubberduck and ModernVB, exist. On February 2, 2017 Microsoft announced that development on VB.NET would no longer be in parallel with that of C#, and on March 11, 2020 it was announced that evolution of the VB.NET language had also concluded. Even so, the language was still supported and the third-party Mercury extension has since been produced. Meanwhile, competitors exist such as B4X, RAD Basic, twinBASIC, VisualFBEditor, InForm, Xojo, and Gambas.",
"title": "Visual Basic"
},
{
"paragraph_id": 32,
"text": "Many other BASIC dialects have also sprung up since 1990, including the open source QB64 and FreeBASIC, inspired by QBasic, and the Visual Basic-styled RapidQ, HBasic, Basic For Qt and Gambas. Modern commercial incarnations include PureBasic, PowerBASIC, Xojo, Monkey X and True BASIC (the direct successor to Dartmouth BASIC from a company controlled by Kurtz).",
"title": "Post-1990 versions and dialects"
},
{
"paragraph_id": 33,
"text": "Several web-based simple BASIC interpreters also now exist, including Microsoft's Small Basic and Google's wwwBASIC. A number of compilers also exist that convert BASIC into JavaScript, such as JSBasic which re-implements Applesoft BASIC, Spider BASIC, and NS Basic.",
"title": "Post-1990 versions and dialects"
},
{
"paragraph_id": 34,
"text": "Building from earlier efforts such as Mobile Basic and CellularBASIC, many dialects are now available for smartphones and tablets. Through the Apple App Store for iOS options include Hand BASIC, Learn BASIC, Smart Basic based on Minimal BASIC, Basic! by miSoft, and BASIC by Anastasia Kovba. The Google Play store for Android meanwhile has the touchscreen focused Touch Basic, B4A, the RFO BASIC! interpreter based on Dartmouth Basic, and adaptations of SmallBasic, BBC Basic, Tiny Basic, X11-Basic, and NS Basic.",
"title": "Post-1990 versions and dialects"
},
{
"paragraph_id": 35,
"text": "On game consoles, an application for the Nintendo 3DS and Nintendo DSi called Petit Computer allows for programming in a slightly modified version of BASIC with DS button support. A version has also been released for Nintendo Switch, which has also been supplied a version of the Fuze Code System, a BASIC variant first implemented as a custom Raspberry Pi machine. Previously BASIC was made available on consoles as Family BASIC (for the Nintendo Famicom) and PSX Chipmunk Basic (for the original PlayStation), while yabasic was ported to the PlayStation 2 and FreeBASIC to the original Xbox, with Dragon BASIC created for homebrew on the Game Boy Advance and Nintendo DS.",
"title": "Post-1990 versions and dialects"
},
{
"paragraph_id": 36,
"text": "Variants of BASIC are available on graphing and otherwise programmable calculators made by Texas Instruments (TI-BASIC), HP (HP BASIC), Casio (Casio BASIC), and others.",
"title": "Calculators"
},
{
"paragraph_id": 37,
"text": "QBasic, a version of Microsoft QuickBASIC without the linker to make EXE files, is present in the Windows NT and DOS-Windows 95 streams of operating systems and can be obtained for more recent releases like Windows 7 which do not have them. Prior to DOS 5, the Basic interpreter was GW-Basic. QuickBasic is part of a series of three languages issued by Microsoft for the home and office power user and small-scale professional development; QuickC and QuickPascal are the other two. For Windows 95 and 98, which do not have QBasic installed by default, they can be copied from the installation disc, which will have a set of directories for old and optional software; other missing commands like Exe2Bin and others are in these same directories.",
"title": "Windows command-line"
},
{
"paragraph_id": 38,
"text": "The various Microsoft, Lotus, and Corel office suites and related products are programmable with Visual Basic in one form or another, including LotusScript, which is very similar to VBA 6. The Host Explorer terminal emulator uses WWB as a macro language; or more recently the programme and the suite in which it is contained is programmable in an in-house Basic variant known as Hummingbird Basic. The VBScript variant is used for programming web content, Outlook 97, Internet Explorer, and the Windows Script Host. WSH also has a Visual Basic for Applications (VBA) engine installed as the third of the default engines along with VBScript, JScript, and the numerous proprietary or open source engines which can be installed like PerlScript, a couple of Rexx-based engines, Python, Ruby, Tcl, Delphi, XLNT, PHP, and others; meaning that the two versions of Basic can be used along with the other mentioned languages, as well as LotusScript, in a WSF file, through the component object model, and other WSH and VBA constructions. VBScript is one of the languages that can be accessed by the 4Dos, 4NT, and Take Command enhanced shells. SaxBasic and WWB are also very similar to the Visual Basic line of Basic implementations. The pre-Office 97 macro language for Microsoft Word is known as WordBASIC. Excel 4 and 5 use Visual Basic itself as a macro language. Chipmunk Basic, an old-school interpreter similar to BASICs of the 1970s, is available for Linux, Microsoft Windows and macOS.",
"title": "Other"
},
{
"paragraph_id": 39,
"text": "The ubiquity of BASIC interpreters on personal computers was such that textbooks once included simple \"Try It In BASIC\" exercises that encouraged students to experiment with mathematical and computational concepts on classroom or home computers. Popular computer magazines of the day typically included type-in programs.",
"title": "Legacy"
},
{
"paragraph_id": 40,
"text": "Futurist and sci-fi writer David Brin mourned the loss of ubiquitous BASIC in a 2006 Salon article as have others who first used computers during this era. In turn, the article prompted Microsoft to develop and release Small Basic; it also inspired similar projects like Basic-256. Dartmouth held a 50th anniversary celebration for BASIC on 1 May 2014, as did other organisations; at least one organisation of VBA programmers organised a 35th anniversary observance in 1999.",
"title": "Legacy"
},
{
"paragraph_id": 41,
"text": "Dartmouth College celebrated the 50th anniversary of the BASIC language with a day of events on April 30, 2014. A short documentary film was produced for the event.",
"title": "Legacy"
},
{
"paragraph_id": 42,
"text": "Minimal versions of BASIC had only integer variables and one- or two-letter variable names, which minimized requirements of limited and expensive memory (RAM). More powerful versions had floating-point arithmetic, and variables could be labelled with names six or more characters long. There were some problems and restrictions in early implementations; for example, Applesoft BASIC allowed variable names to be several characters long, but only the first two were significant, thus it was possible to inadvertently write a program with variables \"LOSS\" and \"LOAN\", which would be treated as being the same; assigning a value to \"LOAN\" would silently overwrite the value intended as \"LOSS\". Keywords could not be used in variables in many early BASICs; \"SCORE\" would be interpreted as \"SC\" OR \"E\", where OR was a keyword. String variables are usually distinguished in many microcomputer dialects by having $ suffixed to their name as a sigil, and values are often identified as strings by being delimited by \"double quotation marks\". Arrays in BASIC could contain integers, floating point or string variables.",
"title": "Syntax"
},
{
"paragraph_id": 43,
"text": "Some dialects of BASIC supported matrices and matrix operations, which can be used to solve sets of simultaneous linear algebraic equations. These dialects would directly support matrix operations such as assignment, addition, multiplication (of compatible matrix types), and evaluation of a determinant. Many microcomputer BASICs did not support this data type; matrix operations were still possible, but had to be programmed explicitly on array elements.",
"title": "Syntax"
},
{
"paragraph_id": 44,
"text": "New BASIC programmers on a home computer might start with a simple program, perhaps using the language's PRINT statement to display a message on the screen; a well-known and often-replicated example is Kernighan and Ritchie's \"Hello, World!\" program:",
"title": "Syntax"
},
{
"paragraph_id": 45,
"text": "An infinite loop could be used to fill the display with the message:",
"title": "Syntax"
},
{
"paragraph_id": 46,
"text": "Note that the END statement is optional and has no action in most dialects of BASIC. It was not always included, as is the case in this example. This same program can be modified to print a fixed number of messages using the common FOR...NEXT statement:",
"title": "Syntax"
},
{
"paragraph_id": 47,
"text": "Most home computers BASIC versions, such as MSX BASIC and GW-BASIC, supported simple data types, loop cycles, and arrays. The following example is written for GW-BASIC, but will work in most versions of BASIC with minimal changes:",
"title": "Syntax"
},
{
"paragraph_id": 48,
"text": "The resulting dialog might resemble:",
"title": "Syntax"
},
{
"paragraph_id": 49,
"text": "The original Dartmouth Basic was unusual in having a matrix keyword, MAT. Although not implemented by most later microprocessor derivatives, it is used in this example from the 1968 manual which averages the numbers that are input:",
"title": "Syntax"
},
{
"paragraph_id": 50,
"text": "Second-generation BASICs (for example, VAX Basic, SuperBASIC, True BASIC, QuickBASIC, BBC BASIC, Pick BASIC, PowerBASIC, Liberty BASIC, QB64 and (arguably) COMAL) introduced a number of features into the language, primarily related to structured and procedure-oriented programming. Usually, line numbering is omitted from the language and replaced with labels (for GOTO) and procedures to encourage easier and more flexible design. In addition keywords and structures to support repetition, selection and procedures with local variables were introduced.",
"title": "Syntax"
},
{
"paragraph_id": 51,
"text": "The following example is in Microsoft QuickBASIC:",
"title": "Syntax"
},
{
"paragraph_id": 52,
"text": "Third-generation BASIC dialects such as Visual Basic, Xojo, Gambas, StarOffice Basic, BlitzMax and PureBasic introduced features to support object-oriented and event-driven programming paradigm. Most built-in procedures and functions are now represented as methods of standard objects rather than operators. Also, the operating system became increasingly accessible to the BASIC language.",
"title": "Syntax"
},
{
"paragraph_id": 53,
"text": "The following example is in Visual Basic .NET:",
"title": "Syntax"
},
{
"paragraph_id": 54,
"text": "",
"title": "Compilers and interpreters"
},
{
"paragraph_id": 55,
"text": "",
"title": "External links"
}
] | BASIC is a family of general-purpose, high-level programming languages designed for ease of use. The original version was created by John G. Kemeny and Thomas E. Kurtz at Dartmouth College in 1963. They wanted to enable students in non-scientific fields to use computers. At the time, nearly all computers required writing custom software, which only scientists and mathematicians tended to learn. In addition to the programming language, Kemeny and Kurtz developed the Dartmouth Time Sharing System (DTSS), which allowed multiple users to edit and run BASIC programs simultaneously on remote terminals. This general model became popular on minicomputer systems like the PDP-11 and Data General Nova in the late 1960s and early 1970s. Hewlett-Packard produced an entire computer line for this method of operation, introducing the HP2000 series in the late 1960s and continuing sales into the 1980s. Many early video games trace their history to one of these versions of BASIC. The emergence of microcomputers in the mid-1970s led to the development of multiple BASIC dialects, including Microsoft BASIC in 1975. Due to the tiny main memory available on these machines, often 4 KB, a variety of Tiny BASIC dialects were also created. BASIC was available for almost any system of the era, and became the de facto programming language for home computer systems that emerged in the late 1970s. These PCs almost always had a BASIC interpreter installed by default, often in the machine's firmware or sometimes on a ROM cartridge. BASIC declined in popularity in the 1990s, as more powerful microcomputers came to market and programming languages with advanced features became tenable on such computers. In 1991, Microsoft released Visual Basic, combining an updated version of BASIC with a visual forms builder. This reignited use of the language and "VB" remains a major programming language in the form of VB.NET, while a hobbyist scene for BASIC more broadly continues to exist. | 2001-09-28T18:34:24Z | 2023-11-26T02:46:43Z | [
"Template:Other uses",
"Template:See also",
"Template:Reflist",
"Template:YouTube",
"Template:Use mdy dates",
"Template:Cite news",
"Template:Programming languages",
"Template:BASIC",
"Template:Infobox programming language",
"Template:Sfn",
"Template:Main",
"Template:Cite encyclopedia",
"Template:Anchor",
"Template:Cite book",
"Template:Cite web",
"Template:Cite interview",
"Template:Excerpt",
"Template:Notelist",
"Template:Cite magazine",
"Template:Authority control",
"Template:Code",
"Template:Cite tech report",
"Template:Curlie",
"Template:Short description",
"Template:Efn",
"Template:Wikibooks",
"Template:HOPL-lang",
"Template:Citation",
"Template:Webarchive",
"Template:Refbegin",
"Template:Refend"
] | https://en.wikipedia.org/wiki/BASIC |
4,016 | List of Byzantine emperors | The foundation of Constantinople in 330 AD marks the conventional start of the Eastern Roman Empire, which fell to the Ottoman Empire in 1453 AD. Only the emperors who were recognized as legitimate rulers and exercised sovereign authority are included, to the exclusion of junior co-emperors (symbasileis) who never attained the status of sole or senior ruler, as well as of the various usurpers or rebels who claimed the imperial title.
The following list starts with Constantine the Great, the first Christian emperor, who rebuilt the city of Byzantium as an imperial capital, Constantinople, and who was regarded by the later emperors as the model ruler. Modern historians distinguish this later phase of the Roman Empire as Byzantine due to the imperial seat moving from Rome to Byzantium, the Empire's integration of Christianity, and the predominance of Greek instead of Latin.
The Byzantine Empire was the direct legal continuation of the eastern half of the Roman Empire following the division of the Roman Empire in 395. Emperors listed below up to Theodosius I in 395 were sole or joint rulers of the entire Roman Empire. The Western Roman Empire continued until 476. Byzantine emperors considered themselves to be Roman emperors in direct succession from Augustus; the term "Byzantine" became convention in Western historiography in the 19th century. The use of the title "Roman Emperor" by those ruling from Constantinople was not contested until after the papal coronation of the Frankish Charlemagne as Holy Roman emperor (25 December 800).
In practice, according to the Hellenistic political system, the Byzantine emperor had been given total power through God to shape the state and its subjects; he was the final authority and legislator of the empire, and all his work was in imitation of the sacred kingdom of God. In accordance with Christian principles, he was also the ultimate benefactor and protector of his people.
The title of all Emperors preceding Heraclius was officially "Augustus", although other titles such as Dominus were also used. Their names were preceded by Imperator Caesar and followed by Augustus. Following Heraclius, the title commonly became the Greek Basileus (Gr. Βασιλεύς), which had formerly meant sovereign, though Augustus continued to be used in a reduced capacity. Following the establishment of the rival Holy Roman Empire in Western Europe, the title "Autokrator" (Gr. Αὐτοκράτωρ) was increasingly used. In later centuries, the Emperor could be referred to by Western Christians as the "Emperor of the Greeks". Towards the end of the Empire, the standard imperial formula of the Byzantine ruler was "[Emperor's name] in Christ, Emperor and Autocrat of the Romans" (cf. Ῥωμαῖοι and Rûm).
Dynasties were a common tradition and structure for rulers and government systems in the medieval period. The principle of hereditary succession, however, was not a formal part of the Empire's governance; hereditary succession was a custom and tradition, carried on as habit, which benefitted from some sense of legitimacy, but it was not a "rule" or an inviolable, unchallengeable requirement for office at the time.
{
"paragraph_id": 0,
"text": "",
"title": "Constantinian dynasty (306–363)"
},
{
"paragraph_id": 1,
"text": "The foundation of Constantinople in 330 AD marks the conventional start of the Eastern Roman Empire, which to fell to the Ottoman Empire in 1453 AD. Only the emperors who were recognized as legitimate rulers and exercised sovereign authority are included, to the exclusion of junior co-emperors (symbasileis) who never attained the status of sole or senior ruler, as well as of the various usurpers or rebels who claimed the imperial title.",
"title": "Constantinian dynasty (306–363)"
},
{
"paragraph_id": 2,
"text": "The following list starts with Constantine the Great, the first Christian emperor, who rebuilt the city of Byzantium as an imperial capital, Constantinople, and who was regarded by the later emperors as the model ruler. Modern historians distinguish this later phase of the Roman Empire as Byzantine due to the imperial seat moving from Rome to Byzantium, the Empire's integration of Christianity, and the predominance of Greek instead of Latin.",
"title": "Constantinian dynasty (306–363)"
},
{
"paragraph_id": 3,
"text": "The Byzantine Empire was the direct legal continuation of the eastern half of the Roman Empire following the division of the Roman Empire in 395. Emperors listed below up to Theodosius I in 395 were sole or joint rulers of the entire Roman Empire. The Western Roman Empire continued until 476. Byzantine emperors considered themselves to be Roman emperors in direct succession from Augustus; the term \"Byzantine\" became convention in Western historiography in the 19th century. The use of the title \"Roman Emperor\" by those ruling from Constantinople was not contested until after the papal coronation of the Frankish Charlemagne as Holy Roman emperor (25 December 800).",
"title": "Constantinian dynasty (306–363)"
},
{
"paragraph_id": 4,
"text": "In practice, according to the Hellenistic political system, the Byzantine emperor had been given total power through God to shape the state and its subjects, he was the last authority and legislator of the empire and all his work was in imitation of the sacred kingdom of God, also according to the Christian principles, he was the ultimate benefecator and protector of his people.",
"title": "Constantinian dynasty (306–363)"
},
{
"paragraph_id": 5,
"text": "The title of all Emperors preceding Heraclius was officially \"Augustus\", although other titles such as Dominus were also used. Their names were preceded by Imperator Caesar and followed by Augustus. Following Heraclius, the title commonly became the Greek Basileus (Gr. Βασιλεύς), which had formerly meant sovereign, though Augustus continued to be used in a reduced capacity. Following the establishment of the rival Holy Roman Empire in Western Europe, the title \"Autokrator\" (Gr. Αὐτοκράτωρ) was increasingly used. In later centuries, the Emperor could be referred to by Western Christians as the \"Emperor of the Greeks\". Towards the end of the Empire, the standard imperial formula of the Byzantine ruler was \"[Emperor's name] in Christ, Emperor and Autocrat of the Romans\" (cf. Ῥωμαῖοι and Rûm).",
"title": "Constantinian dynasty (306–363)"
},
{
"paragraph_id": 6,
"text": "Dynasties were a common tradition and structure for rulers and government systems in the medieval period. The principle or formal requirements for hereditary succession, however, was not a formal part of the Empire's governance, hereditary succession was a custom and tradition, carried on as habit and benefitted from some sense of legitimacy, but not as a \"rule\" or inviolable or unchallengeable requirement of for office at the time.",
"title": "Constantinian dynasty (306–363)"
}
] | The foundation of Constantinople in 330 AD marks the conventional start of the Eastern Roman Empire, which fell to the Ottoman Empire in 1453 AD. Only the emperors who were recognized as legitimate rulers and exercised sovereign authority are included, to the exclusion of junior co-emperors (symbasileis) who never attained the status of sole or senior ruler, as well as of the various usurpers or rebels who claimed the imperial title. The following list starts with Constantine the Great, the first Christian emperor, who rebuilt the city of Byzantium as an imperial capital, Constantinople, and who was regarded by the later emperors as the model ruler. Modern historians distinguish this later phase of the Roman Empire as Byzantine due to the imperial seat moving from Rome to Byzantium, the Empire's integration of Christianity, and the predominance of Greek instead of Latin. The Byzantine Empire was the direct legal continuation of the eastern half of the Roman Empire following the division of the Roman Empire in 395. Emperors listed below up to Theodosius I in 395 were sole or joint rulers of the entire Roman Empire. The Western Roman Empire continued until 476. Byzantine emperors considered themselves to be Roman emperors in direct succession from Augustus; the term "Byzantine" became convention in Western historiography in the 19th century. The use of the title "Roman Emperor" by those ruling from Constantinople was not contested until after the papal coronation of the Frankish Charlemagne as Holy Roman emperor. In practice, according to the Hellenistic political system, the Byzantine emperor had been given total power through God to shape the state and its subjects; he was the final authority and legislator of the empire, and all his work was in imitation of the sacred kingdom of God. In accordance with Christian principles, he was also the ultimate benefactor and protector of his people. The title of all Emperors preceding Heraclius was officially "Augustus", although other titles such as Dominus were also used. Their names were preceded by Imperator Caesar and followed by Augustus. Following Heraclius, the title commonly became the Greek Basileus, which had formerly meant sovereign, though Augustus continued to be used in a reduced capacity. Following the establishment of the rival Holy Roman Empire in Western Europe, the title "Autokrator" was increasingly used. In later centuries, the Emperor could be referred to by Western Christians as the "Emperor of the Greeks". Towards the end of the Empire, the standard imperial formula of the Byzantine ruler was "[Emperor's name] in Christ, Emperor and Autocrat of the Romans". Dynasties were a common tradition and structure for rulers and government systems in the medieval period. The principle of hereditary succession, however, was not a formal part of the Empire's governance; hereditary succession was a custom and tradition, carried on as habit, which benefitted from some sense of legitimacy, but it was not a "rule" or an inviolable, unchallengeable requirement for office at the time. | 2001-10-02T10:39:41Z | 2023-11-21T07:59:37Z | [
"Template:Use dmy dates",
"Template:C.",
"Template:Lang",
"Template:Portal",
"Template:Cite web",
"Template:ODB",
"Template:Cite book",
"Template:Short description",
"Template:Byzantine Empire topics",
"Template:Infobox monarchy",
"Template:See also",
"Template:Small",
"Template:Notelist",
"Template:Reflist",
"Template:For",
"Template:Roman emperors",
"Template:Epochs of Roman Emperors",
"Template:Main",
"Template:Cite journal",
"Template:ISBN",
"Template:Circa"
] | https://en.wikipedia.org/wiki/List_of_Byzantine_emperors |
4,024 | Butterfly effect | In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.
The term is closely associated with the work of mathematician and meteorologist Edward Norton Lorenz. He noted that the butterfly effect is derived from the metaphorical example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as a distant butterfly flapping its wings several weeks earlier. Lorenz originally used a seagull causing a storm but was persuaded to make it more poetic with the use of a butterfly and tornado by 1972. He discovered the effect when he observed runs of his weather model with initial condition data that were rounded in a seemingly inconsequential manner. He noted that the weather model would fail to reproduce the results of runs with the unrounded initial condition data. A very small change in initial conditions had created a significantly different outcome.
The idea that small causes may have large effects in weather was earlier acknowledged by French mathematician and engineer Henri Poincaré. American mathematician and philosopher Norbert Wiener also contributed to this theory. Lorenz's work placed the concept of instability of the Earth's atmosphere onto a quantitative base and linked the concept of instability to the properties of large classes of dynamic systems which are undergoing nonlinear dynamics and deterministic chaos.
The butterfly effect concept has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences.
In The Vocation of Man (1800), Johann Gottlieb Fichte says "you could not remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole".
Chaos theory and sensitive dependence on initial conditions were described in numerous forms of literature, as evidenced by Poincaré's study of the three-body problem in 1890. He later proposed that such phenomena could be common, for example, in meteorology.
In 1898, Jacques Hadamard noted general divergence of trajectories in spaces of negative curvature. Pierre Duhem discussed the possible general significance of this in 1908.
In 1950, Alan Turing noted: "The displacement of a single electron by a billionth of a centimetre at one moment might make the difference between a man being killed by an avalanche a year later, or escaping."
The idea that the death of one butterfly could eventually have a far-reaching ripple effect on subsequent historical events made its earliest known appearance in "A Sound of Thunder", a 1952 short story by Ray Bradbury. "A Sound of Thunder" features time travel.
More precisely, though, almost the exact idea and phrasing (a tiny insect's wing affecting the entire atmosphere's winds) was published in a children's book which became extremely successful and well-known globally in 1962, the year before Lorenz published:
"...whatever we do affects everything and everyone else, if even in the tiniest way. Why, when a housefly flaps his wings, a breeze goes round the world."
-- The Princess of Pure Reason
In 1961, Lorenz was running a numerical computer model to redo a weather prediction from the middle of the previous run as a shortcut. He entered the initial condition 0.506 from the printout instead of entering the full-precision value 0.506127. The result was a completely different weather scenario.
Lorenz wrote:
At one point I decided to repeat some of the computations in order to examine what was happening in greater detail. I stopped the computer, typed in a line of numbers that it had printed out a while earlier, and set it running again. I went down the hall for a cup of coffee and returned after about an hour, during which time the computer had simulated about two months of weather. The numbers being printed were nothing like the old ones. I immediately suspected a weak vacuum tube or some other computer trouble, which was not uncommon, but before calling for service I decided to see just where the mistake had occurred, knowing that this could speed up the servicing process. Instead of a sudden break, I found that the new values at first repeated the old ones, but soon afterward differed by one and then several units in the last [decimal] place, and then began to differ in the next to the last place and then in the place before that. In fact, the differences more or less steadily doubled in size every four days or so, until all resemblance with the original output disappeared somewhere in the second month. This was enough to tell me what had happened: the numbers that I had typed in were not the exact original numbers, but were the rounded-off values that had appeared in the original printout. The initial round-off errors were the culprits; they were steadily amplifying until they dominated the solution.
In 1963, Lorenz published a theoretical study of this effect in a highly cited, seminal paper called Deterministic Nonperiodic Flow (the calculations were performed on a Royal McBee LGP-30 computer). Elsewhere he stated:
One meteorologist remarked that if the theory were correct, one flap of a sea gull's wings would be enough to alter the course of the weather forever. The controversy has not yet been settled, but the most recent evidence seems to favor the sea gulls.
Following proposals from colleagues, in later speeches and papers, Lorenz used the more poetic butterfly. According to Lorenz, when he failed to provide a title for a talk he was to present at the 139th meeting of the American Association for the Advancement of Science in 1972, Philip Merilees concocted Does the flap of a butterfly's wings in Brazil set off a tornado in Texas? as a title. Although a butterfly flapping its wings has remained constant in the expression of this concept, the location of the butterfly, the consequences, and the location of the consequences have varied widely.
The phrase refers to the idea that a butterfly's wings might create tiny changes in the atmosphere that may ultimately alter the path of a tornado or delay, accelerate, or even prevent the occurrence of a tornado in another location. The butterfly does not power or directly create the tornado, but the term is intended to imply that the flap of the butterfly's wings can cause the tornado: in the sense that the flap of the wings is a part of the initial conditions of an interconnected complex web; one set of conditions leads to a tornado, while the other set of conditions doesn't. The flapping wing represents a small change in the initial condition of the system, which cascades to large-scale alterations of events (compare: domino effect). Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different—but it's also equally possible that the set of conditions without the butterfly flapping its wings is the set that leads to a tornado.
The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to complete accuracy. This problem motivated the development of ensemble forecasting, in which a number of forecasts are made from perturbed initial conditions.
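The ensemble idea can be sketched with the same toy system. The snippet below assumes the lorenz() and integrate() helpers from the earlier sketch are in scope; the ensemble size and perturbation scale are illustrative choices, not values from any operational forecasting system.

```python
# Toy ensemble forecast: many runs from randomly perturbed initial states.
# Assumes lorenz() and integrate() from the earlier sketch are defined.
import numpy as np

rng = np.random.default_rng(seed=0)
base = np.array([1.0, 1.0, 1.0])

# 20 ensemble members, each starting from a slightly perturbed state.
members = np.stack(
    [integrate(base + rng.normal(scale=1e-4, size=3)) for _ in range(20)]
)

# Ensemble spread: standard deviation across members, averaged over variables.
spread = members.std(axis=0).mean(axis=1)
for i in (0, 1000, 2500, 5000):
    print(f"t = {i * 0.01:5.1f}  ensemble spread = {spread[i]:.6f}")
```

While the spread stays small, the forecast is still informative; once it saturates at the size of the attractor, the members are effectively independent weather scenarios and the deterministic forecast has lost its value.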
Some scientists have since argued that the weather system is not as sensitive to initial conditions as previously believed. David Orrell argues that the major contributor to weather forecast error is model error, with sensitivity to initial conditions playing a relatively small role. Stephen Wolfram also notes that the Lorenz equations are highly simplified and do not contain terms that represent viscous effects; he believes that these terms would tend to damp out small perturbations. Recent studies using generalized Lorenz models that included additional dissipative terms and nonlinearity suggested that a larger heating parameter is required for the onset of chaos.
While the "butterfly effect" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work he published in 1969 which took the idea a step further. Lorenz proposed a mathematical model for how tiny motions in the atmosphere scale up to affect larger systems. He found that the systems in that model could only be predicted up to a specific point in the future, and beyond that, reducing the error in the initial conditions would not increase the predictability (as long as the error is not zero). This demonstrated that a deterministic system could be "observationally indistinguishable" from a non-deterministic one in terms of predictability. Recent re-examinations of this paper suggest that it offered a significant challenge to the idea that our universe is deterministic, comparable to the challenges offered by quantum physics.
In his 1993 book The Essence of Chaos, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." This feature is the same as sensitive dependence of solutions on initial conditions (SDIC). In the same book, Lorenz used the activity of skiing to develop an idealized skiing model revealing the sensitivity of time-varying paths to initial positions; a predictability horizon can be determined before the onset of SDIC.
Recurrence, the approximate return of a system toward its initial conditions, together with sensitive dependence on initial conditions, are the two main ingredients for chaotic motion. They have the practical consequence of making complex systems, such as the weather, difficult to predict past a certain time range (approximately a week in the case of weather) since it is impossible to measure the starting atmospheric conditions completely accurately.
A dynamical system displays sensitive dependence on initial conditions if points arbitrarily close together separate over time at an exponential rate. The definition is not topological, but essentially metrical. Lorenz defined sensitive dependence as follows:
The property characterizing an orbit (i.e., a solution) if most other orbits that pass close to it at some point do not remain close to it as time advances.
If $M$ is the state space for the map $f^{t}$, then $f^{t}$ displays sensitive dependence on initial conditions if, for any $x$ in $M$ and any $\delta > 0$, there is $y$ in $M$, with distance $d(\cdot\,,\cdot)$, such that $0 < d(x, y) < \delta$ and such that

$$d(f^{\tau}(x), f^{\tau}(y)) > e^{a\tau}\, d(x, y)$$
for some positive parameter a. The definition does not require that all points from a neighborhood separate from the base point x, but it requires one positive Lyapunov exponent. In addition to a positive Lyapunov exponent, boundedness is another major feature within chaotic systems.
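The exponential separation in this definition can be checked numerically. The sketch below again assumes the lorenz() and integrate() helpers from the earlier example; it fits the exponent a over the early, roughly exponential stage of the separation of two trajectories started 10^-9 apart, and for the standard parameter values the estimate should land near the system's leading Lyapunov exponent (about 0.9).

```python
# Estimating the exponent a in d(f^t(x), f^t(y)) ~ e^(a t) d(x, y).
# Assumes lorenz() and integrate() from the earlier sketch are defined.
import numpy as np

dt = 0.01
traj_x = integrate(np.array([1.0, 1.0, 1.0]), dt=dt, steps=2000)
traj_y = integrate(np.array([1.0, 1.0, 1.0 + 1e-9]), dt=dt, steps=2000)

separation = np.linalg.norm(traj_x - traj_y, axis=1)
t = np.arange(len(separation)) * dt

# Fit a line to log(separation) while the growth is still exponential.
slope, _ = np.polyfit(t[100:1500], np.log(separation[100:1500]), 1)
print(f"estimated exponent a ≈ {slope:.2f}")  # near the leading Lyapunov exponent
```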
The simplest mathematical framework exhibiting sensitive dependence on initial conditions is provided by a particular parametrization of the logistic map:

$$x_{n+1} = 4 x_{n} (1 - x_{n}), \qquad 0 \leq x_{0} \leq 1,$$
which, unlike most chaotic maps, has a closed-form solution:

$$x_{n} = \sin^{2}\!\left(2^{n} \theta \pi \right)$$
where the initial condition parameter $\theta$ is given by $\theta = \tfrac{1}{\pi}\sin^{-1}(x_{0}^{1/2})$. For rational $\theta$, after a finite number of iterations $x_{n}$ maps into a periodic sequence. But almost all $\theta$ are irrational, and, for irrational $\theta$, $x_{n}$ never repeats itself – it is non-periodic. This solution equation clearly demonstrates the two key features of chaos – stretching and folding: the factor $2^{n}$ shows the exponential growth of stretching, which results in sensitive dependence on initial conditions (the butterfly effect), while the squared sine function keeps $x_{n}$ folded within the range $[0, 1]$.
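The closed-form solution is easy to verify against direct iteration of the map; the short sketch below uses an arbitrary initial condition and compares the two for the first few steps.

```python
# Comparing direct iteration of x_{n+1} = 4 x_n (1 - x_n) with the
# closed-form solution x_n = sin^2(2^n * theta * pi).
import numpy as np

x0 = 0.2
theta = np.arcsin(np.sqrt(x0)) / np.pi  # theta = (1/pi) * arcsin(sqrt(x0))

x = x0
for n in range(1, 11):
    x = 4.0 * x * (1.0 - x)                       # direct iteration
    closed = np.sin(2.0**n * theta * np.pi) ** 2  # closed-form value
    print(f"n={n:2d}  iterated={x:.12f}  closed-form={closed:.12f}")

# For much larger n the two columns drift apart: round-off error in the
# iteration roughly doubles each step, which is the butterfly effect itself.
```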
The butterfly effect is most familiar in terms of weather; it can easily be demonstrated in standard weather prediction models, for example. The climate scientists James Annan and William Connolley explain that chaos is important in the development of weather prediction methods; models are sensitive to initial conditions. They add the caveat: "Of course the existence of an unknown butterfly flapping its wings has no direct bearing on weather forecasts, since it will take far too long for such a small perturbation to grow to a significant size, and we have many more immediate uncertainties to worry about. So the direct impact of this phenomenon on weather prediction is often somewhat overstated." The two kinds of butterfly effects, namely the sensitive dependence on initial conditions and the ability of a tiny perturbation to create an organized circulation at large distances, are not exactly the same. A comparison of these two kinds of butterfly effects with a third kind has also been documented. Recent studies report that both meteorological and non-meteorological linear models have shown that instability plays a role in producing a butterfly effect, which is characterized by brief but significant exponential growth resulting from a small disturbance.
According to Lighthill (1986), the presence of SDIC (commonly known as the butterfly effect) implies that chaotic systems have a finite predictability limit. In a literature review, it was found that Lorenz's perspective on the predictability limit can be condensed into the following statement:
A short video has recently been created to present Lorenz's perspective on the predictability limit.
By revealing coexisting chaotic and non-chaotic attractors within Lorenz models, Shen and his colleagues proposed a revised view that "weather possesses chaos and order", in contrast to the conventional view that "weather is chaotic". As a result, sensitive dependence on initial conditions (SDIC) does not always appear. Namely, SDIC appears when two orbits (i.e., solutions) move toward the chaotic attractor; it does not appear when two orbits move toward the same point attractor. The motion of a double pendulum provides an analogy: for large angles of swing its motion is often chaotic, while for small angles of swing it is non-chaotic. Multistability is defined when a system (e.g., the double pendulum system) contains more than one bounded attractor that depends only on initial conditions. Multistability was illustrated with a kayaking example (Figure 1 of the cited study), in which the appearance of strong currents and a stagnant area suggests instability and local stability, respectively. As a result, when two kayaks move along strong currents, their paths display SDIC; when two kayaks move into a stagnant area, they become trapped, showing no typical SDIC (although a chaotic transient may occur). Such features of SDIC or no SDIC suggest two types of solutions and illustrate the nature of multistability.
By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows:
"The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons."
The potential for sensitive dependence on initial conditions (the butterfly effect) has been studied in a number of cases in semiclassical and quantum physics, including atoms in strong fields and the anisotropic Kepler problem. Some authors have argued that extreme (exponential) dependence on initial conditions is not expected in pure quantum treatments; however, the sensitive dependence on initial conditions demonstrated in classical motion is included in the semiclassical treatments developed by Martin Gutzwiller and by John B. Delos and co-workers. Random matrix theory and simulations with quantum computers indicate that some versions of the butterfly effect in quantum mechanics do not exist.
Other authors suggest that the butterfly effect can be observed in quantum systems. Zbyszek P. Karkuszewski et al. consider the time evolution of quantum systems which have slightly different Hamiltonians. They investigate the level of sensitivity of quantum systems to small changes in their given Hamiltonians. David Poulin et al. presented a quantum algorithm to measure fidelity decay, which "measures the rate at which identical initial states diverge when subjected to slightly different dynamics". They consider fidelity decay to be "the closest quantum analog to the (purely classical) butterfly effect". Whereas the classical butterfly effect considers the effect of a small change in the position and/or velocity of an object in a given Hamiltonian system, the quantum butterfly effect considers the effect of a small change in the Hamiltonian system with a given initial position and velocity. This quantum butterfly effect has been demonstrated experimentally. Quantum and semiclassical treatments of system sensitivity to initial conditions are known as quantum chaos.
https://en.wikipedia.org/wiki/Butterfly_effect
4,027 | Borland | Borland Software Corporation was a computer technology company founded in 1983 by Niels Jensen, Ole Henriksen, Mogens Glad, and Philippe Kahn. Its main business was the development and sale of software development and software deployment products. Borland was first headquartered in Scotts Valley, California, then in Cupertino, California, and then in Austin, Texas. In 2009, the company became a full subsidiary of the British firm Micro Focus International plc.
Borland Ltd. was founded in August 1981 by three Danish citizens – Niels Jensen, Ole Henriksen, and Mogens Glad – to develop products like Word Index for the CP/M operating system, using an off-the-shelf company. However, the response to the company's products at the CP/M-82 show in San Francisco showed that a U.S. company would be needed to reach the American market. They met Philippe Kahn, who had just moved to Silicon Valley and had been a key developer of the Micral. The three Danes had embarked, at first successfully, on marketing software first from Denmark, and later from Ireland, before running into difficulties around the time they met Kahn. Kahn was chairman, president, and CEO of Borland Inc. from its beginning in 1983 until 1995. The company name "Borland" was a creation of Kahn's, taking inspiration from the name of the American astronaut and then-Eastern Air Lines chairperson Frank Borman. The main shareholders at the incorporation of Borland were Niels Jensen (250,000 shares), Ole Henriksen (160,000), Mogens Glad (100,000), and Kahn (80,000).
Borland developed various software development tools. Its first product was Turbo Pascal in 1983, developed by Anders Hejlsberg (who later developed .NET and C# for Microsoft); before Borland acquired it, the product was sold in Scandinavia under the name Compas Pascal. 1984 saw the launch of Borland Sidekick, a time-organization, notebook, and calculator utility that was an early terminate-and-stay-resident program (TSR) for MS-DOS compatible operating systems.
By the mid-1980s, the company had the largest exhibit at the 1985 West Coast Computer Faire other than those of IBM or AT&T. Bruce Webster reported that "the legend of Turbo Pascal has by now reached mythic proportions, as evidenced by the number of firms that, in marketing meetings, make plans to become 'the next Borland'". After Turbo Pascal and Sidekick, the company launched other applications such as SuperKey and Lightning, all developed in Denmark. While the Danes remained majority shareholders, board members included Kahn, Tim Berry, John Nash, and David Heller. With the assistance of John Nash and David Heller, both British members of the Borland board, the company was taken public on London's Unlisted Securities Market (USM) in 1986. Schroders was the lead investment banker. According to the London IPO filings, the management team was Philippe Kahn as president, Spencer Ozawa as VP of Operations, Marie Bourget as CFO, and Spencer Leyton as VP of sales and business development. All software development continued to take place in Denmark and later London, as the Danish co-founders moved there. A first US IPO followed in 1989, after Ben Rosen joined the Borland board, with Goldman Sachs as the lead banker, and a second offering followed in 1991 with Lazard as the lead banker.
In 1985, Borland acquired Analytica and its Reflex database product. The engineering team of Analytica, managed by Brad Silverberg and including Reflex co-founder Adam Bosworth, became the core of Borland's engineering team in the US. Brad Silverberg was VP of engineering until he left in early 1990 to head up the Personal Systems division at Microsoft. Adam Bosworth initiated and headed up the Quattro project until moving to Microsoft later in 1990 to take over the project which eventually became Access.
In 1987, Borland purchased Wizard Systems and incorporated portions of the Wizard C technology into Turbo C. Bob Jervis, the author of Wizard C, became a Borland employee. Turbo C was released on May 18, 1987. This drove a wedge between Borland and Niels Jensen and the other members of his team, who had been working on a brand-new series of compilers at their London development center. They reached an agreement and spun off a company called Jensen & Partners International (JPI), later TopSpeed. JPI first launched an MS-DOS compiler named JPI Modula-2, which later became TopSpeed Modula-2, and followed up with TopSpeed C, TopSpeed C++, and TopSpeed Pascal compilers for both the MS-DOS and OS/2 operating systems. The TopSpeed compiler technology still exists as the underlying technology of the Clarion 4GL programming language, a Windows development tool.
In September 1987, Borland purchased Ansa-Software, including their Paradox (version 2.0) database management tool. Richard Schwartz, a cofounder of Ansa, became Borland's CTO and Ben Rosen joined the Borland board.
The Quattro Pro spreadsheet was launched in 1989, offering improved performance and charting capabilities for its time. Lotus Development, under the leadership of Jim Manzi, sued Borland for copyright infringement (see Look and feel). The litigation, Lotus Dev. Corp. v. Borland Int'l, Inc., brought forward Borland's open-standards position, as opposed to Lotus' closed approach. Borland, under Kahn's leadership, took a position of principle and announced that it would defend against Lotus' legal position and "fight for programmer's rights". After a decision in favor of Borland by the First Circuit Court of Appeals, the case went to the United States Supreme Court. Because Justice John Paul Stevens had recused himself, only eight justices heard the case, and they concluded in a 4–4 tie. As a result, the First Circuit decision remained standing, but, being a tie, the Supreme Court result did not bind any other court and set no national precedent.
Additionally, Borland's approach towards software piracy and intellectual property (IP) included its "Borland no-nonsense license agreement", allowing the developer/user to utilize its products "just like a book". The user was allowed to make multiple copies of a program, as long as only one copy was in use at any point in time.
In September 1991, Borland purchased Ashton-Tate in an all-stock transaction, bringing the dBASE and InterBase databases in house. However, competition with Microsoft was fierce. Microsoft launched the competing database Microsoft Access and bought the dBASE clone FoxPro in 1992, undercutting Borland's prices. During the early 1990s, Borland's implementation of C and C++ outsold Microsoft's. Borland survived as a company, but no longer had the dominance in software tools that it once did. It went through a radical transition in products, financing, and staff, and became a very different company from the one which challenged Microsoft and Lotus in the early 1990s.
The internal problems that arose with the Ashton-Tate merger were a large part of the downfall. Ashton-Tate's product portfolio proved to be weak, with no provision for evolution into the GUI environment of Windows. Almost all product lines were discontinued. The consolidation of duplicate support and development offices was costly and disruptive. Worst of all, the highest revenue earner of the combined company was dBASE, which had no Windows version ready. Borland had an internal project to clone dBASE which was intended to run on Windows and was part of the strategy of the acquisition, but by late 1992 this was abandoned due to technical flaws, and the company had to constitute a replacement team (the ObjectVision team, redeployed) headed by Bill Turpin to redo the job. Borland lacked the financial strength to project its marketing and move internal resources off other products to shore up the dBASE/W effort. Layoffs occurred in 1993 to keep the company afloat, the third instance of this in five years. By the time dBASE for Windows eventually shipped, the developer community had moved on to other products such as Clipper or FoxBase, and dBASE never regained a significant share of Ashton-Tate's former market. This happened against the backdrop of the rise of Microsoft's combined Office product marketing.
A change in market conditions also contributed to Borland's fall from prominence. In the 1980s, companies had few people who understood the growing personal computer phenomenon and so most technical people were given free rein to purchase whatever software they thought they needed. Borland had done an excellent job marketing to those with a highly technical bent. By the mid-1990s, however, companies were beginning to ask what the return was on the investment they had made in this loosely controlled PC software buying spree. Company executives were starting to ask questions that were hard for technically minded staff to answer, and so corporate standards began to be created. This required new kinds of marketing and support materials from software vendors, but Borland remained focused on the technical side of its products.
In 1993 Borland explored ties with WordPerfect as a possible way to form a suite of programs to rival Microsoft's nascent integration strategy. WordPerfect itself was struggling with a late and troubled transition to Windows. The eventual joint effort, named Borland Office for Windows (a combination of the WordPerfect word processor, Quattro Pro spreadsheet, and Paradox database), was introduced at the 1993 Comdex computer show. Borland Office never made significant inroads against Microsoft Office. WordPerfect was then bought by Novell. In October 1994, Borland sold Quattro Pro and the rights to sell up to a million copies of Paradox to Novell for $140 million in cash, refocusing the company on its core software development tools and the InterBase database engine and shifting toward client-server scenarios in corporate applications. This later proved a good foundation for the shift to web development tools.
Philippe Kahn and the Borland board disagreed on how to focus the company, and Kahn resigned as chairman, CEO and president, after 12 years, in January 1995. Kahn remained on the board until November 7, 1996. Borland named Gary Wetsel as CEO, but he resigned in July 1996. William F. Miller was interim CEO until September of that year, when Whitney G. Lynn became interim president and CEO (along with other executive changes), followed by a succession of CEOs including Dale Fuller and Tod Nielsen.
The Delphi 1 rapid application development (RAD) environment was launched in 1995, under the leadership of Anders Hejlsberg.
In 1996 Borland acquired Open Environment Corporation, a Cambridge-based company founded by John J. Donovan.
On November 25, 1996, Del Yocam was hired as Borland CEO and chairman.
In 1997, Borland sold Paradox to Corel, but retained all development rights for the core BDE. In November 1997, Borland acquired Visigenic, a middleware company that was focused on implementations of CORBA.
In April 1998, Borland International, Inc. announced it had become Inprise Corporation.
For several years (both before and during the Inprise name) Borland suffered from serious financial losses and a poor public image. When the name was changed to Inprise, many thought Borland had gone out of business. In March 1999, dBase was sold to KSoft, Inc., which was soon renamed dBASE Inc. (In 2004, dBASE Inc. was renamed DataBased Intelligence, Inc.)
In 1999, Dale L. Fuller replaced Yocam. At this time Fuller's title was "interim president and CEO". The "interim" was dropped in December 2000. Keith Gottfried served in senior executive positions with the company from 2000 to 2004.
A proposed merger between Inprise and Corel was announced in February 2000, aimed at producing Linux-based products. The scheme was abandoned when Corel's shares fell and it became clear that there was no strategic fit.
InterBase 6.0 was made available as open-source software in July 2000.
In November 2000, Inprise Corporation announced that it intended to officially change its name to Borland Software Corporation. The legal name of the company would continue to be Inprise Corporation until the completion of the renaming process during the first quarter of 2001. Once the name change was completed, the company also expected to change its Nasdaq market symbol from "INPR" to "BORL".
On January 2, 2001, Borland Software Corporation announced that it had completed its name change from Inprise Corporation. Effective at the open of trading on Nasdaq, the company's Nasdaq market symbol was also changed from "INPR" to "BORL".
Under the Borland name and a new management team headed by president and CEO Dale L. Fuller, a now-smaller and profitable Borland refocused on Delphi and created a version of Delphi and C++ Builder for Linux, both under the name Kylix. This brought Borland's expertise in integrated development environments to the Linux platform for the first time. Kylix was launched in 2001.
Plans to spin off the InterBase division as a separate company were abandoned after Borland and the people who were to run the new company could not agree on terms for the separation. Borland stopped open-source releases of InterBase and went on to develop and sell new versions at a fast pace.
In 2001, Delphi 6 became the first integrated development environment to support web services. All of the company's development platforms now support web services.
C#Builder was released in 2003 as a native C# development tool, competing with Visual Studio .NET. By the 2005 release, C#Builder, Delphi for Win32, and Delphi for .NET were combined into a single IDE called "Borland Developer Studio" (though the combined IDE is still popularly known as "Delphi"). In late 2002 Borland purchased design tool vendor TogetherSoft and tool publisher Starbase, makers of the StarTeam configuration management tool and the CaliberRM requirements management tool (eventually, CaliberRM was renamed as "Caliber"). The latest releases of JBuilder and Delphi integrate these tools to give developers a broader set of tools for development.
Former CEO Dale Fuller quit in July 2005, but remained on the board of directors. Former COO Scott Arnold took the title of interim president and chief executive officer until November 8, 2005, when it was announced that Tod Nielsen would take over as CEO effective November 9, 2005. Nielsen remained with the company until January 2009, when he accepted the position of chief operating officer at VMware; CFO Erik Prusch then took over as acting president and CEO.
In early 2007, Borland announced new branding reflecting its focus on open application life-cycle management. In April 2007, Borland announced that it would relocate its headquarters and development facilities to Austin, Texas. It also had development centers in Singapore; Santa Ana, California; and Linz, Austria.
On May 6, 2009, the company announced it was to be acquired by Micro Focus for $75 million. The transaction was approved by Borland shareholders on July 22, 2009, with Micro Focus acquiring the company for $1.50 per share. Following Micro Focus shareholder approval and the required corporate filings, the transaction was completed in late July 2009. Borland was estimated to have 750 employees at the time.
On April 5, 2015, Micro Focus announced the completion of its integration of the Attachmate Group of companies, which had been merged into Micro Focus on November 20, 2014. During the integration period, the affected companies were merged into a single organization. In the announced reorganization, Borland products became part of the Micro Focus portfolio.
The products acquired from Segue Software include Silk Central, Silk Performer, and Silk Test. The Silk line was first announced in 1997. Other programs are:
Along with the renaming from Borland International, Inc. to Inprise Corporation, the company refocused its efforts on targeting enterprise applications development. Borland hired the marketing firm Lexicon Branding to come up with a new name for the company. Yocam explained that the new name, Inprise, was meant to evoke "integrating the enterprise". The idea was to integrate Borland's tools, Delphi, C++Builder, and JBuilder, with enterprise environment software, including Visigenic's implementations of CORBA, VisiBroker for C++ and Java, and the new product, Application Server.
Frank Borland is a mascot character for Borland products. According to Philippe Kahn, the mascot first appeared in advertisements and on the cover of the Borland Sidekick 1.0 manual in 1984, during the Borland International, Inc. era. Frank Borland also appeared in Turbo Tutor - A Turbo Pascal Tutorial and in Borland JBuilder 2.
A live action version of Frank Borland was made after Micro Focus plc had acquired Borland Software Corporation. This version was created by True Agency Limited. An introductory film was also made about the mascot.
"title": "History"
},
{
"paragraph_id": 13,
"text": "Philippe Kahn and the Borland board disagreed on how to focus the company, and Kahn resigned as chairman, CEO and president, after 12 years, in January 1995. Kahn remained on the board until November 7, 1996. Borland named Gary Wetsel as CEO, but he resigned in July 1996. William F. Miller was interim CEO until September of that year, when Whitney G. Lynn became interim president and CEO (along with other executive changes), followed by a succession of CEOs including Dale Fuller and Tod Nielsen.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The Delphi 1 rapid application development (RAD) environment was launched in 1995, under the leadership of Anders Hejlsberg.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In 1996 Borland acquired Open Environment Corporation, a Cambridge-based company founded by John J. Donovan.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "On November 25, 1996, Del Yocam was hired as Borland CEO and chairman.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In 1997, Borland sold Paradox to Corel, but retained all development rights for the core BDE. In November 1997, Borland acquired Visigenic, a middleware company that was focused on implementations of CORBA.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In April 1998, Borland International, Inc. announced it had become Inprise Corporation.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "For several years (both before and during the Inprise name) Borland suffered from serious financial losses and poor public image. When the name was changed to Inprise, many thought Borland had gone out of business. In March 1999, dBase was sold to KSoft, Inc. which was soon renamed dBASE Inc. (In 2004 dBASE Inc. was renamed to DataBased Intelligence, Inc.).",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In 1999, Dale L. Fuller replaced Yocam. At this time Fuller's title was \"interim president and CEO\". The \"interim\" was dropped in December 2000. Keith Gottfried served in senior executive positions with the company from 2000 to 2004.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "A proposed merger between Inprise and Corel was announced in February 2000, aimed at producing Linux-based products. The scheme was abandoned when Corel's shares fell and it became clear that there was no strategic fit.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "InterBase 6.0 was made available as open-source software in July 2000.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "In November 2000, Inprise Corporation announced the company intended to officially change its name to Borland Software Corporation. The legal name of the company would continue to be Inprise Corporation until the completion of the renaming process during the first quarter of 2001. Once the name change was completed, the company would also expect to change its Nasdaq market symbol from \"INPR\" to \"BORL\".",
"title": "History"
},
{
"paragraph_id": 24,
"text": "On January 2, 2001, Borland Software Corporation announced it has completed its name change from Inprise Corporation. Effective at the open of trading on Nasdaq, the company's Nasdaq market symbol would also be changed from \"INPR\" to \"BORL\".",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Under the Borland name and a new management team headed by president and CEO Dale L. Fuller, a now-smaller and profitable Borland refocused on Delphi and created a version of Delphi and C++ Builder for Linux, both under the name Kylix. This brought Borland's expertise in integrated development environments to the Linux platform for the first time. Kylix was launched in 2001.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Plans to spin off the InterBase division as a separate company were abandoned after Borland and the people who were to run the new company could not agree on terms for the separation. Borland stopped open-source releases of InterBase and has developed and sold new versions at a fast pace.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "In 2001, Delphi 6 became the first integrated development environment to support web services. All of the company's development platforms now support web services.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "C#Builder was released in 2003 as a native C# development tool, competing with Visual Studio .NET. By the 2005 release, C#Builder, Delphi for Win32, and Delphi for .NET were combined into a single IDE called \"Borland Developer Studio\" (though the combined IDE is still popularly known as \"Delphi\"). In late 2002 Borland purchased design tool vendor TogetherSoft and tool publisher Starbase, makers of the StarTeam configuration management tool and the CaliberRM requirements management tool (eventually, CaliberRM was renamed as \"Caliber\"). The latest releases of JBuilder and Delphi integrate these tools to give developers a broader set of tools for development.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Former CEO Dale Fuller quit in July 2005, but remained on the board of directors. Former COO Scott Arnold took the title of interim president and chief executive officer until November 8, 2005, when it was announced that Tod Nielsen would take over as CEO effective November 9, 2005. Nielsen remained with the company until January 2009, when he accepted the position of chief operating officer at VMware; CFO Erik Prusch then took over as acting president and CEO.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In early 2007 Borland announced new branding for its focus around open application life-cycle management. In April 2007 Borland announced that it would relocate its headquarters and development facilities to Austin, Texas. It also has development centers at Singapore, Santa Ana, California, and Linz, Austria.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "On May 6, 2009, the company announced it was to be acquired by Micro Focus for $75 million. The transaction was approved by Borland shareholders on July 22, 2009, with Micro Focus acquiring the company for $1.50 per share. Following Micro Focus shareholder approval and the required corporate filings, the transaction was completed in late July 2009. It was estimated to have 750 employees at the time.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "On April 5, 2015, Micro Focus announced the completion of integrating Attachmate Group of companies that was merged on November 20, 2014. During the integration period, the affected companies were merged into a single organization. In the announced reorganization, Borland products would be part of Micro Focus portfolio.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "The products acquired from Segue Software include Silk Central, Silk Performer, and Silk Test. The Silk line was first announced in 1997. Other programs are:",
"title": "Products"
},
{
"paragraph_id": 34,
"text": "Along with renaming from Borland International, Inc. to Inprise Corporation, the company refocused its efforts on targeting enterprise applications development. Borland hired a marketing firm Lexicon Branding to come up with a new name for the company. Yocam explained that the new name, Inprise, was meant to evoke \"integrating the enterprise\". The idea was to integrate Borland's tools, Delphi, C++ Builder, and JBuilder with enterprise environment software, including Visigenic's implementations of CORBA, Visibroker for C++ and Java, and the new product, Application Server.",
"title": "Marketing"
},
{
"paragraph_id": 35,
"text": "Frank Borland is a mascot character for Borland products. According to Philippe Kahn, the mascot first appeared in advertisements and cover of Borland Sidekick 1.0 manual, which was in 1984 during Borland International, Inc. era. Frank Borland also appeared in Turbo Tutor - A Turbo Pascal Tutorial, Borland JBuilder 2.",
"title": "Marketing"
},
{
"paragraph_id": 36,
"text": "A live action version of Frank Borland was made after Micro Focus plc had acquired Borland Software Corporation. This version was created by True Agency Limited. An introductory film was also made about the mascot.",
"title": "Marketing"
}
] | Borland Software Corporation was a computer technology company founded in 1983 by Niels Jensen, Ole Henriksen, Mogens Glad, and Philippe Kahn. Its main business was the development and sale of software development and software deployment products. Borland was first headquartered in Scotts Valley, California, then in Cupertino, California, and then in Austin, Texas. In 2009, the company became a full subsidiary of the British firm Micro Focus International plc. | 2001-08-09T06:53:45Z | 2023-11-20T15:57:12Z | [
"Template:Cite news",
"Template:Dead link",
"Template:Short description",
"Template:Other uses",
"Template:Infobox company",
"Template:Snd",
"Template:Which",
"Template:Cite press release",
"Template:Refbegin",
"Template:Authority control",
"Template:Columns-list",
"Template:Cite web",
"Template:More citations needed section",
"Template:Cite magazine",
"Template:Cite journal",
"Template:Multiple issues",
"Template:Citation needed",
"Template:US$",
"Template:Reflist",
"Template:Webarchive",
"Template:Refend"
] | https://en.wikipedia.org/wiki/Borland |
4,031 | Buckminster Fuller | Richard Buckminster Fuller (/ˈfʊlər/; July 12, 1895 – July 1, 1983) was an American architect, systems theorist, writer, designer, inventor, philosopher, and futurist. He styled his name as R. Buckminster Fuller in his writings, publishing more than 30 books and coining or popularizing such terms as "Spaceship Earth", "Dymaxion" (e.g., Dymaxion house, Dymaxion car, Dymaxion map), "ephemeralization", "synergetics", and "tensegrity".
Fuller developed numerous inventions, mainly architectural designs, and popularized the widely known geodesic dome; carbon molecules known as fullerenes were later named by scientists for their structural and mathematical resemblance to geodesic spheres. He also served as the second World President of Mensa International from 1974 to 1983.
Fuller was awarded 28 United States patents and many honorary doctorates. In 1960, he was awarded the Frank P. Brown Medal from The Franklin Institute. He was elected an honorary member of Phi Beta Kappa in 1967, on the occasion of the 50-year reunion of his Harvard class of 1917 (from which he was expelled in his first year). He was elected a Fellow of the American Academy of Arts and Sciences in 1968. The same year, he was elected into the National Academy of Design as an Associate member. He became a full Academician in 1970, and he received the Gold Medal award from the American Institute of Architects the same year. Also in 1970, Fuller received the title of Master Architect from Alpha Rho Chi (APX), the national fraternity for architecture and the allied arts. In 1976, he received the St. Louis Literary Award from the Saint Louis University Library Associates. In 1977, he received the Golden Plate Award of the American Academy of Achievement. He also received numerous other awards, including the Presidential Medal of Freedom, presented to him on February 23, 1983, by President Ronald Reagan.
Fuller was born on July 12, 1895, in Milton, Massachusetts, the son of Richard Buckminster Fuller and Caroline Wolcott Andrews, and grand-nephew of Margaret Fuller, an American journalist, critic, and women's rights advocate associated with the American transcendentalism movement. The unusual middle name, Buckminster, was an ancestral family name. As a child, Richard Buckminster Fuller tried numerous variations of his name. He used to sign his name differently each year in the guest register of his family summer vacation home at Bear Island, Maine. He finally settled on R. Buckminster Fuller.
Fuller spent much of his youth on Bear Island, in Penobscot Bay off the coast of Maine. He attended Froebelian Kindergarten. He was dissatisfied with the way geometry was taught in school, disagreeing with the notions that a chalk dot on the blackboard represented an "empty" mathematical point, or that a line could stretch off to infinity. To him these were illogical, and led to his work on synergetics. He often made items from materials he found in the woods, and sometimes made his own tools. He experimented with designing a new apparatus for human propulsion of small boats. By age 12, he had invented a 'push pull' system for propelling a rowboat by use of an inverted umbrella connected to the transom with a simple oar lock which allowed the user to face forward to point the boat toward its destination. Later in life, Fuller took exception to the term "invention".
Years later, he decided that this sort of experience had provided him with not only an interest in design, but also a habit of being familiar with and knowledgeable about the materials that his later projects would require. Fuller earned a machinist's certification, and knew how to use the press brake, stretch press, and other tools and equipment used in the sheet metal trade.
Fuller attended Milton Academy in Massachusetts, and after that began studying at Harvard College, where he was affiliated with Adams House. He was expelled from Harvard twice: first for spending all his money partying with a vaudeville troupe, and then, after having been readmitted, for his "irresponsibility and lack of interest". By his own appraisal, he was a non-conforming misfit in the fraternity environment.
Between his sessions at Harvard, Fuller worked in Canada as a mechanic in a textile mill, and later as a laborer in the meat-packing industry. He also served in the U.S. Navy in World War I, as a shipboard radio operator, as an editor of a publication, and as commander of the crash rescue boat USS Inca. After discharge, he worked again in the meat-packing industry, acquiring management experience. In 1917, he married Anne Hewlett. During the early 1920s, he and his father-in-law developed the Stockade Building System for producing lightweight, weatherproof, and fireproof housing—although the company would ultimately fail in 1927.
Fuller recalled 1927 as a pivotal year of his life. His daughter Alexandra had died in 1922 of complications from polio and spinal meningitis just before her fourth birthday. Barry Katz, a Stanford University scholar who wrote about Fuller, found signs that around this time in his life Fuller had developed depression and anxiety. Fuller dwelled on his daughter's death, suspecting that it was connected with the Fullers' damp and drafty living conditions. This provided motivation for Fuller's involvement in Stockade Building Systems, a business which aimed to provide affordable, efficient housing.
In 1927, at age 32, Fuller lost his job as president of Stockade. The Fuller family had no savings, and the birth of their daughter Allegra in 1927 added to the financial challenges. Fuller drank heavily and reflected upon the solution to his family's struggles on long walks around Chicago. During the autumn of 1927, Fuller contemplated suicide by drowning in Lake Michigan, so that his family could benefit from a life insurance payment.
Fuller said that he had experienced a profound incident which would provide direction and purpose for his life. He felt as though he was suspended several feet above the ground enclosed in a white sphere of light. A voice spoke directly to Fuller, and declared:
From now on you need never await temporal attestation to your thought. You think the truth. You do not have the right to eliminate yourself. You do not belong to you. You belong to the Universe. Your significance will remain forever obscure to you, but you may assume that you are fulfilling your role if you apply yourself to converting your experiences to the highest advantage of others.
Fuller stated that this experience led to a profound re-examination of his life. He ultimately chose to embark on "an experiment, to find what a single individual could contribute to changing the world and benefiting all humanity".
Speaking to audiences later in life, Fuller would frequently recount the story of his Lake Michigan experience, and its transformative impact on his life.
In 1927, Fuller resolved to think independently which included a commitment to "the search for the principles governing the universe and help advance the evolution of humanity in accordance with them ... finding ways of doing more with less to the end that all people everywhere can have more and more". By 1928, Fuller was living in Greenwich Village and spending much of his time at the popular café Romany Marie's, where he had spent an evening in conversation with Marie and Eugene O'Neill several years earlier. Fuller accepted a job decorating the interior of the café in exchange for meals, giving informal lectures several times a week, and models of the Dymaxion house were exhibited at the café. Isamu Noguchi arrived during 1929—Constantin Brâncuși, an old friend of Marie's, had directed him there—and Noguchi and Fuller were soon collaborating on several projects, including the modeling of the Dymaxion car based on recent work by Aurel Persu. It was the beginning of their lifelong friendship.
Fuller taught at Black Mountain College in North Carolina during the summers of 1948 and 1949, serving as its Summer Institute director in 1949. Fuller had been shy and withdrawn, but he was persuaded to participate in a theatrical performance of Erik Satie's Le piège de Méduse produced by John Cage, who was also teaching at Black Mountain. During rehearsals, under the tutelage of Arthur Penn, then a student at Black Mountain, Fuller broke through his inhibitions to become confident as a performer and speaker.
At Black Mountain, with the support of a group of professors and students, he began reinventing a project that would make him famous: the geodesic dome. Although the geodesic dome had been created, built, and awarded a German patent on June 19, 1925, by Dr. Walther Bauersfeld, Fuller was awarded United States patents. Fuller's patent application made no mention of Bauersfeld's self-supporting dome built some 26 years prior. Although Fuller undoubtedly popularized this type of structure, he is mistakenly given credit for its design.
One of his early models was first constructed in 1945 at Bennington College in Vermont, where he lectured often. Although Bauersfeld's dome could support a full skin of concrete, it was not until 1949 that Fuller erected a geodesic dome building that could sustain its own weight with no practical limits. It was 4.3 meters (14 feet) in diameter and constructed of aluminium aircraft tubing and a vinyl-plastic skin, in the form of an icosahedron. To prove his design, Fuller suspended from the structure's framework several students who had helped him build it. The U.S. government recognized the importance of this work, and employed his firm Geodesics, Inc. in Raleigh, North Carolina to make small domes for the Marines. Within a few years, there were thousands of such domes around the world.
Fuller's first "continuous tension – discontinuous compression" geodesic dome (full sphere in this case) was constructed at the University of Oregon Architecture School in 1959 with the help of students. These continuous tension – discontinuous compression structures featured single force compression members (no flexure or bending moments) that did not touch each other and were 'suspended' by the tensional members.
For half of a century, Fuller developed many ideas, designs, and inventions, particularly regarding practical, inexpensive shelter and transportation. He documented his life, philosophy, and ideas scrupulously by a daily diary (later called the Dymaxion Chronofile), and by twenty-eight publications. Fuller financed some of his experiments with inherited funds, sometimes augmented by funds invested by his collaborators, one example being the Dymaxion car project.
International recognition began with the success of huge geodesic domes during the 1950s. Fuller lectured at North Carolina State University in Raleigh in 1949, where he met James Fitzgibbon, who would become a close friend and colleague. Fitzgibbon was director of Geodesics, Inc. and Synergetics, Inc., the first licensees to design geodesic domes. Thomas C. Howard was lead designer, architect, and engineer for both companies. Richard Lewontin, a new faculty member in population genetics at North Carolina State University, provided Fuller with computer calculations for the lengths of the domes' edges.
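The kind of edge-length calculation mentioned above can be illustrated with a short program. The sketch below is illustrative only, not Lewontin's actual computation: it subdivides one face of an icosahedron at a given frequency, projects the grid points onto the circumscribed sphere, and collects the distinct strut lengths per unit radius (the "chord factors" that dome builders tabulate). The frequency in the example is an assumed value, not one taken from any specific Fuller dome.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def normalize(v):
    """Project a point onto the unit sphere centered at the origin."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Three mutually adjacent vertices of a regular icosahedron: one face.
FACE = [normalize(p) for p in [(0, 1, PHI), (0, -1, PHI), (PHI, 0, 1)]]

def chord_factors(frequency):
    """Distinct edge lengths (per unit radius) of one subdivided face."""
    a, b, c = FACE
    # Lay a triangular grid of barycentric points over the flat face,
    # then push every grid point out onto the sphere.
    grid = {}
    for i in range(frequency + 1):
        for j in range(frequency + 1 - i):
            k = frequency - i - j
            point = tuple(
                (i * a[d] + j * b[d] + k * c[d]) / frequency for d in range(3)
            )
            grid[(i, j)] = normalize(point)
    # Measure the three grid directions around every point; the dome's
    # symmetry collapses these into a handful of distinct strut lengths.
    lengths = set()
    for (i, j), p in grid.items():
        for di, dj in [(1, 0), (0, 1), (1, -1)]:
            q = grid.get((i + di, j + dj))
            if q is not None:
                lengths.add(round(math.dist(p, q), 6))
    return sorted(lengths)

# A 2-frequency icosahedral dome needs only two distinct strut lengths.
print(chord_factors(2))  # [0.546533, 0.618034]
```

Multiplying each factor by the dome's radius gives the physical strut lengths, which is essentially the table of numbers a dome's builders need.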
Fuller began working with architect Shoji Sadao in 1954, together designing a hypothetical Dome over Manhattan in 1960, and in 1964 they co-founded the architectural firm Fuller & Sadao Inc., whose first project was to design the large geodesic dome for the U.S. Pavilion at Expo 67 in Montreal. This building is now the "Montreal Biosphère". In 1962, the artist and researcher John McHale wrote the first monograph on Fuller, published by George Braziller in New York.
After employing several Southern Illinois University Carbondale (SIU) graduate students to rebuild his models following an apartment fire in the summer of 1959, Fuller was recruited by longtime friend Harold Cohen to serve as a research professor of "design science exploration" at the institution's School of Art and Design. According to SIU architecture professor Jon Davey, the position was "unlike most faculty appointments ... more a celebrity role than a teaching job" in which Fuller offered few courses and was only stipulated to spend two months per year on campus. Nevertheless, his time in Carbondale was "extremely productive", and Fuller was promoted to university professor in 1968 and distinguished university professor in 1972.
Working as a designer, scientist, developer, and writer, he continued to lecture for many years around the world. He collaborated at SIU with John McHale. In 1965, they inaugurated the World Design Science Decade (1965 to 1975) at the meeting of the International Union of Architects in Paris, which was, in Fuller's own words, devoted to "applying the principles of science to solving the problems of humanity."
From 1972 until retiring as university professor emeritus in 1975, Fuller held a joint appointment at Southern Illinois University Edwardsville, where he had designed the dome for the campus Religious Center in 1971. During this period, he also held a joint fellowship at a consortium of Philadelphia-area institutions, including the University of Pennsylvania, Bryn Mawr College, Haverford College, Swarthmore College, and the University City Science Center; as a result of this affiliation, the University of Pennsylvania appointed him university professor emeritus in 1975.
Fuller believed human societies would soon rely mainly on renewable sources of energy, such as solar- and wind-derived electricity. He hoped for an age of "omni-successful education and sustenance of all humanity". Fuller referred to himself as "the property of universe" and during one radio interview he gave later in life, declared himself and his work "the property of all humanity". For his lifetime of work, the American Humanist Association named him the 1969 Humanist of the Year.
In 1976, Fuller was a key participant at UN Habitat I, the first UN forum on human settlements.
Fuller's last filmed interview took place on June 21, 1983, in which he spoke at Norman Foster's Royal Gold Medal for architecture ceremony. His speech can be watched in the archives of the AA School of Architecture, in which he spoke after Sir Robert Sainsbury's introductory speech and Foster's keynote address.
In the year of his death, Fuller described himself as follows:
Guinea Pig B: I am now close to 88 and I am confident that the only thing important about me is that I am an average healthy human. I am also a living case history of a thoroughly documented, half-century, search-and-research project designed to discover what, if anything, an unknown, moneyless individual, with a dependent wife and newborn child, might be able to do effectively on behalf of all humanity that could not be accomplished by great nations, great religions or private enterprise, no matter how rich or powerfully armed.
Fuller died on July 1, 1983, 11 days before his 88th birthday. During the period leading up to his death, his wife had been lying comatose in a Los Angeles hospital, dying of cancer. It was while visiting her there that he exclaimed, at a certain point: "She is squeezing my hand!" He then stood up, had a heart attack, and died an hour later, at age 87. His wife of 66 years died 36 hours later. They are buried in Mount Auburn Cemetery in Cambridge, Massachusetts.
Buckminster Fuller was a Unitarian, like his grandfather Arthur Buckminster Fuller (brother of Margaret Fuller), a Unitarian minister. Fuller was also an early environmental activist, aware of Earth's finite resources, and promoted a principle he termed "ephemeralization", which, according to futurist and Fuller disciple Stewart Brand, was defined as "doing more with less". Resources and waste from crude, inefficient products could be recycled into making more valuable products, thus increasing the efficiency of the entire process. Fuller also coined the word synergetics, a catch-all term used broadly for communicating experiences using geometric concepts, and more specifically, the empirical study of systems in transformation; his focus was on total system behavior unpredicted by the behavior of any isolated components.
Fuller was a pioneer in thinking globally, and explored energy and material efficiency in the fields of architecture, engineering, and design. In his book Critical Path (1981) he cited the opinion of François de Chadenèdes (1920-1999) that petroleum, from the standpoint of its replacement cost in our current energy "budget" (essentially, the net incoming solar flux), had cost nature "over a million dollars" per U.S. gallon ($300,000 per litre) to produce. From this point of view, its use as a transportation fuel by people commuting to work represents a huge net loss compared to their actual earnings. An encapsulation quotation of his views might best be summed up as: "There is no energy crisis, only a crisis of ignorance."
Though Fuller was concerned about sustainability and human survival under the existing socioeconomic system, he remained optimistic about humanity's future. Defining wealth in terms of knowledge, as the "technological ability to protect, nurture, support, and accommodate all growth needs of life", his analysis of the condition of "Spaceship Earth" caused him to conclude that at a certain time during the 1970s, humanity had attained an unprecedented state. He was convinced that the accumulation of relevant knowledge, combined with the quantities of major recyclable resources that had already been extracted from the earth, had attained a critical level, such that competition for necessities had become unnecessary. Cooperation had become the optimum survival strategy. He declared: "selfishness is unnecessary and hence-forth unrationalizable ... War is obsolete." He criticized previous utopian schemes as too exclusive, and thought this was a major source of their failure. To work, he thought that a utopia needed to include everyone.
Fuller was influenced by Alfred Korzybski's idea of general semantics. In the 1950s, Fuller attended seminars and workshops organized by the Institute of General Semantics, and he delivered the annual Alfred Korzybski Memorial Lecture in 1955. Korzybski is mentioned in the Introduction of his book Synergetics. The two men's formulations of general semantics were remarkably similar.
In his 1970 book I Seem To Be a Verb, he wrote: "I live on Earth at present, and I don't know what I am. I know that I am not a category. I am not a thing—a noun. I seem to be a verb, an evolutionary process—an integral function of the universe."
Fuller wrote that the natural analytic geometry of the universe was based on arrays of tetrahedra. He developed this idea in several ways, from the close-packing of spheres to the number of compressive or tensile members required to stabilize an object in space. One confirming result was that the strongest possible homogeneous truss is cyclically tetrahedral.
He had become a guru of the design, architecture, and "alternative" communities, such as Drop City, the community of experimental artists to whom he awarded the 1966 "Dymaxion Award" for "poetically economic" domed living structures.
Fuller was most famous for his lattice shell structures – geodesic domes, which have been used as parts of military radar stations, civic buildings, environmental protest camps, and exhibition attractions. An examination of the geodesic design by Walther Bauersfeld for the Zeiss-Planetarium, built some 28 years prior to Fuller's work, reveals that Fuller's Geodesic Dome patent (U.S. 2,682,235; awarded in 1954) is the same design as Bauersfeld's.
Their construction is based on extending some basic principles to build simple "tensegrity" structures (tetrahedron, octahedron, and the closest packing of spheres), making them lightweight and stable. The geodesic dome was a result of Fuller's exploration of nature's constructing principles to find design solutions. The Fuller Dome is referenced in the Hugo Award-winning novel Stand on Zanzibar by John Brunner, in which a geodesic dome is said to cover the entire island of Manhattan, and it floats on air due to the hot-air balloon effect of the large air-mass under the dome (and perhaps its construction of lightweight materials).
The Omni-Media-Transport: With such a vehicle at our disposal, [Fuller] felt that human travel, like that of birds, would no longer be confined to airports, roads, and other bureaucratic boundaries, and that autonomous free-thinking human beings could live and prosper wherever they chose.
—Lloyd S. Sieden, Bucky Fuller's Universe, 2000
To his young daughter Allegra, Fuller described the Dymaxion as a "zoom-mobile", explaining that it could "hop off the road at will, fly about, then, as deftly as a bird, settle back into a place in traffic".
The Dymaxion car was a vehicle designed by Fuller, featured prominently at Chicago's 1933-1934 Century of Progress World's Fair. During the Great Depression, Fuller formed the Dymaxion Corporation and built three prototypes with noted naval architect Starling Burgess and a team of 27 workmen — using donated money as well as a family inheritance.
Fuller used the word Dymaxion, a blend of the words dynamic, maximum, and tension, to sum up the goal of his study: "maximum gain of advantage from minimal energy input".
The Dymaxion was not an automobile but rather the 'ground-taxying mode' of a vehicle that might one day be designed to fly, land and drive — an "Omni-Medium Transport" for air, land and water. Fuller focused on the landing and taxiing qualities, and noted severe limitations in its handling. The team made improvements and refinements to the platform, and Fuller noted the Dymaxion "was an invention that could not be made available to the general public without considerable improvements".
The bodywork was aerodynamically designed for increased fuel efficiency, and the platform featured a lightweight chromoly-steel hinged chassis, a rear-mounted V8 engine, front-wheel drive, and three wheels. The vehicle was steered via the third wheel at the rear, which was capable of 90° steering lock. Able to steer in a tight circle, the Dymaxion often caused a sensation, bringing nearby traffic to a halt.
Shortly after launch, a prototype rolled over and crashed, killing the Dymaxion's driver and seriously injuring its passengers. Fuller blamed the accident on a second car that collided with the Dymaxion. Eyewitnesses reported, however, that the other car hit the Dymaxion only after it had begun to roll over.
Despite courting the interest of important figures from the auto industry, Fuller used his family inheritance to finish the second and third prototypes — eventually selling all three, dissolving Dymaxion Corporation and maintaining the Dymaxion was never intended as a commercial venture. One of the three original prototypes survives.
Fuller's energy-efficient and inexpensive Dymaxion house garnered much interest, but only two prototypes were ever produced. Here the term "Dymaxion" is used in effect to signify a "radically strong and light tensegrity structure". One of Fuller's Dymaxion Houses is on display as a permanent exhibit at the Henry Ford Museum in Dearborn, Michigan. Designed and developed during the mid-1940s, this prototype is a round structure (not a dome), shaped something like the flattened "bell" of certain jellyfish. It has several innovative features, including revolving dresser drawers, and a fine-mist shower that reduces water consumption. According to Fuller biographer Steve Crooks, the house was designed to be delivered in two cylindrical packages, with interior color panels available at local dealers. A circular structure at the top of the house was designed to rotate around a central mast to use natural winds for cooling and air circulation.
Conceived nearly two decades earlier, and developed in Wichita, Kansas, the house was designed to be lightweight, adapted to windy climates, cheap to produce and easy to assemble. Because of its light weight and portability, the Dymaxion House was intended to be the ideal housing for individuals and families who wanted the option of easy mobility. The design included a "Go-Ahead-With-Life Room" stocked with maps, charts, and helpful tools for travel "through time and space". It was to be produced using factories, workers, and technologies that had produced World War II aircraft. It looked ultramodern at the time, built of metal, and sheathed in polished aluminum. The basic model enclosed 90 m² (970 sq ft) of floor area. Due to publicity, there were many orders during the early Post-War years, but the company that Fuller and others had formed to produce the houses failed due to management problems.
In 1967, Fuller developed a concept for an offshore floating city named Triton City and published a report on the design the following year. Models of the city aroused the interest of President Lyndon B. Johnson who, after leaving office, had them placed in the Lyndon Baines Johnson Library and Museum.
In 1969, Fuller began the Otisco Project, named after its location in Otisco, New York. The project developed and demonstrated concrete spray with mesh-covered wireforms for producing large-scale, load-bearing spanning structures built on-site, without the use of pouring molds, other adjacent surfaces, or hoisting. The initial method used a circular concrete footing in which anchor posts were set. Tubes cut to length and with ends flattened were then bolted together to form a duodeca-rhombicahedron (22-sided hemisphere) geodesic structure with spans ranging to 60 feet (18 m). The form was then draped with layers of ¼-inch wire mesh attached by twist ties. Concrete was sprayed onto the structure, building up a solid layer which, when cured, would support additional concrete to be added by a variety of traditional means. Fuller referred to these buildings as monolithic ferroconcrete geodesic domes. However, the tubular frame form proved problematic for setting windows and doors. It was replaced by iron rebar set vertically in the concrete footing and then bent inward and welded in place to create the dome's wireform structure; this approach performed satisfactorily. Domes up to three stories tall built with this method proved to be remarkably strong. Other shapes such as cones, pyramids, and arches proved equally adaptable.
The project was enabled by a grant underwritten by Syracuse University and sponsored by U.S. Steel (rebar), the Johnson Wire Corp (mesh), and Portland Cement Company (concrete). The ability to build large, complex, load-bearing concrete spanning structures in free space would open many possibilities in architecture, and is considered one of Fuller's greatest contributions.
Fuller, along with co-cartographer Shoji Sadao, also designed an alternative projection map, called the Dymaxion map. This was designed to show Earth's continents with minimum distortion when projected or printed on a flat surface.
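The map's core geometry can likewise be sketched in code. The following is a minimal illustration of the principle under simplifying assumptions: it uses an arbitrary icosahedron orientation and stops at per-face 2D coordinates, whereas Fuller and Sadao's published map depends on a carefully chosen orientation and unfolding that keep the continents unbroken. A latitude/longitude pair is converted to a point on the unit sphere, the icosahedron face the point falls on is found, and the point is projected gnomonically onto that face's plane.

```python
import itertools
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

# The 12 vertices of a regular icosahedron: cyclic permutations of
# (0, +/-1, +/-PHI). Every edge of this solid has length 2.
VERTS = [
    p
    for x in (-1, 1)
    for y in (-PHI, PHI)
    for p in [(0, x, y), (x, y, 0), (y, 0, x)]
]

def dot(a, b):
    return sum(i * j for i, j in zip(a, b))

def sub(a, b):
    return tuple(i - j for i, j in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(i / n for i in v)

# The 20 triangular faces: triples of mutually adjacent vertices.
FACES = [
    tri for tri in itertools.combinations(VERTS, 3)
    if all(math.isclose(math.dist(p, q), 2.0)
           for p, q in itertools.combinations(tri, 2))
]
assert len(FACES) == 20

def dymaxion_point(lat_deg, lon_deg):
    """Map one lat/lon to (face index, 2D coordinates on that face)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    p = (math.cos(lat) * math.cos(lon),
         math.cos(lat) * math.sin(lon),
         math.sin(lat))
    # The ray from the globe's center exits through the face whose
    # outward normal it is most nearly parallel to.
    face = max(FACES, key=lambda t: dot(p, normalize([sum(c) for c in zip(*t)])))
    a, b, c = face
    n = normalize(cross(sub(b, a), sub(c, a)))
    if dot(n, a) < 0:
        n = tuple(-i for i in n)  # make the normal point outward
    # Gnomonic projection: intersect the ray with the face's plane.
    q = tuple(i * dot(n, a) / dot(n, p) for i in p)
    # Express the intersection in a 2D basis lying in the face plane.
    u_axis = normalize(sub(b, a))
    v_axis = cross(n, u_axis)
    return FACES.index(face), (dot(sub(q, a), u_axis), dot(sub(q, a), v_axis))

print(dymaxion_point(40.7, -74.0))  # a hypothetical query: New York City
```

Because the gnomonic projection is applied independently to each of the 20 faces, distortion stays small within every triangle; arranging the projected faces into Fuller's unfolded net is a separate bookkeeping step not shown here.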
In the 1960s, Fuller developed the World Game, a collaborative simulation game played on a 70-by-35-foot Dymaxion map, in which players attempt to solve world problems. The object of the simulation game is, in Fuller's words, to "make the world work, for 100% of humanity, in the shortest possible time, through spontaneous cooperation, without ecological offense or the disadvantage of anyone".
Buckminster Fuller wore thick-lensed spectacles to correct his extreme hyperopia, a condition that went undiagnosed for the first five years of his life. Fuller's hearing was damaged during his naval service in World War I and deteriorated during the 1960s. After experimenting with bullhorns as hearing aids during the mid-1960s, Fuller adopted electronic hearing aids from the 1970s onward.
In public appearances, Fuller always wore dark-colored suits, appearing like "an alert little clergyman". Previously, he had experimented with unconventional clothing immediately after his 1927 epiphany, but found that breaking social fashion customs made others devalue or dismiss his ideas. Fuller learned the importance of physical appearance as part of one's credibility, and decided to become "the invisible man" by dressing in clothes that would not draw attention to himself. With self-deprecating humor, Fuller described this black-suited appearance as resembling a "second-rate bank clerk".
Writer Guy Davenport met him in 1965 and described him thus:
He's a dwarf, with a worker's hands, all callouses and squared fingers. He carries an ear trumpet, of green plastic, with WORLD SERIES 1965 printed on it. His smile is golden and frequent; the man's temperament is angelic, and his energy is just a touch more than that of [Robert] Gallway (champeen runner, footballeur, and swimmer). One leg is shorter than the other, and the prescription shoe worn to correct the imbalance comes from a country doctor deep in the wilderness of Maine. Blue blazer, Khrushchev trousers, and a briefcase full of Japanese-made wonderments;
Following his global prominence from the 1960s onward, Fuller became a frequent flier, often crossing time zones to lecture. In the 1960s and 1970s, he wore three watches simultaneously; one for the time zone of his office at Southern Illinois University, one for the time zone of the location he would next visit, and one for the time zone he was currently in. In the 1970s, Fuller was only in 'homely' locations (his personal home in Carbondale, Illinois; his holiday retreat in Bear Island, Maine; and his daughter's home in Pacific Palisades, California) roughly 65 nights per year—the other 300 nights were spent in hotel beds in the locations he visited on his lecturing and consulting circuits.
In the 1920s, Fuller experimented with polyphasic sleep, which he called Dymaxion sleep. Inspired by the sleep habits of animals such as dogs and cats, Fuller worked until he was tired, and then slept short naps. This generally resulted in Fuller sleeping 30-minute naps every 6 hours. This allowed him "twenty-two thinking hours a day", which aided his work productivity. Fuller reportedly kept this Dymaxion sleep habit for two years, before quitting the routine because it conflicted with his business associates' sleep habits. Despite no longer personally partaking in the habit, in 1943 Fuller suggested Dymaxion sleep as a strategy that the United States could adopt to win World War II.
Despite only practicing true polyphasic sleep for a period during the 1920s, Fuller was known for his stamina throughout his life. He was described as "tireless" by Barry Farrell in Life magazine, who noted that Fuller stayed up all night replying to mail during Farrell's 1970 trip to Bear Island. In his seventies, Fuller generally slept for 5–8 hours per night.
Fuller documented his life copiously from 1915 to 1983, approximately 270 feet (82 m) of papers in a collection called the Dymaxion Chronofile. He also kept copies of all incoming and outgoing correspondence. The enormous R. Buckminster Fuller Collection is currently housed at Stanford University.
If somebody kept a very accurate record of a human being, going through the era from the Gay 90s, from a very different kind of world through the turn of the century—as far into the twentieth century as you might live. I decided to make myself a good case history of such a human being and it meant that I could not be judge of what was valid to put in or not. I must put everything in, so I started a very rigorous record.
Buckminster Fuller spoke and wrote in a unique style and said it was important to describe the world as accurately as possible. Fuller often created long run-on sentences and used unusual compound words (omniwell-informed, intertransformative, omni-interaccommodative, omniself-regenerative), as well as terms he himself invented. His style of speech was characterized by progressively rapid and breathless delivery and rambling digressions of thought, which Fuller described as "thinking out loud". The effect, combined with Fuller's dry voice and non-rhotic New England accent, was varyingly considered "hypnotic" or "overwhelming".
Fuller used the word Universe without the definite or indefinite article (the or a) and always capitalized the word. Fuller wrote that "by Universe I mean: the aggregate of all humanity's consciously apprehended and communicated (to self or others) Experiences".
The words "down" and "up", according to Fuller, are awkward in that they refer to a planar concept of direction inconsistent with human experience. The words "in" and "out" should be used instead, he argued, because they better describe an object's relation to a gravitational center, the Earth. "I suggest to audiences that they say, 'I'm going "outstairs" and "instairs."' At first that sounds strange to them; They all laugh about it. But if they try saying in and out for a few days in fun, they find themselves beginning to realize that they are indeed going inward and outward in respect to the center of Earth, which is our Spaceship Earth. And for the first time they begin to feel real 'reality.'"
"World-around" is a term coined by Fuller to replace "worldwide". The general belief in a flat Earth died out in classical antiquity, so using "wide" is an anachronism when referring to the surface of the Earth—a spheroidal surface has area and encloses a volume but has no width. Fuller held that unthinking use of obsolete scientific ideas detracts from and misleads intuition. Other neologisms collectively invented by the Fuller family, according to Allegra Fuller Snyder, are the terms "sunsight" and "sunclipse", replacing "sunrise" and "sunset" to overturn the geocentric bias of most pre-Copernican celestial mechanics.
Fuller also invented the word "livingry", as opposed to weaponry (or "killingry"), to mean that which is in support of all human, plant, and Earth life. "The architectural profession—civil, naval, aeronautical, and astronautical—has always been the place where the most competent thinking is conducted regarding livingry, as opposed to weaponry."
As well as contributing significantly to the development of tensegrity technology, Fuller invented the term "tensegrity", a portmanteau of "tensional integrity". "Tensegrity describes a structural-relationship principle in which structural shape is guaranteed by the finitely closed, comprehensively continuous, tensional behaviors of the system and not by the discontinuous and exclusively local compressional member behaviors. Tensegrity provides the ability to yield increasingly without ultimately breaking or coming asunder."
"Dymaxion" is a portmanteau of "dynamic maximum tension". It was invented around 1929 by two admen at Marshall Field's department store in Chicago to describe Fuller's concept house, which was shown as part of a house of the future store display. They created the term using three words that Fuller used repeatedly to describe his design – dynamic, maximum, and tension.
Fuller also helped to popularize the concept of Spaceship Earth: "The most important fact about Spaceship Earth: an instruction manual didn't come with it."
In the preface for his "cosmic fairy tale" Tetrascroll: Goldilocks and the Three Bears, Fuller stated that his distinctive speaking style grew out of years of embellishing the classic tale for the benefit of his daughter, allowing him to explore both his new theories and how to present them. The Tetrascroll narrative was eventually transcribed onto a set of tetrahedral lithographs (hence the name), as well as being published as a traditional book.
Fuller's language posed problems for his credibility. John Julius Norwich recalled commissioning a 600-word introduction for a planned history of world architecture from him, and receiving a 3500-word proposal which ended:
We will see the (1) down-at-the-mouth-ends curvature of land civilisation's retrogression from the (2) straight raft line foundation of the Mayans' building foundation lines historically transformed to the (3) smiling, up-end curvature of maritime technology transformed through the climbing angle of wingfoil aeronautics progressing humanity into the verticality of outward-bound rocketry and inward-bound microcosmy, ergo (4) the ultimately invisible and vertically-lined architecture as humans master local environment with invisible electro-magnetic fields while travelling by radio as immortal pattern-integrities.
Norwich commented: "On reflection, I asked Dr. Nikolaus Pevsner instead."
His concepts and buildings include:
Among the many people who were influenced by Buckminster Fuller are: Constance Abernathy, Ruth Asawa, J. Baldwin, Michael Ben-Eli, Pierre Cabrol, John Cage, Joseph Clinton, Peter Floyd, Norman Foster, Medard Gabel, Michael Hays, Ted Nelson, David Johnston, Peter Jon Pearce, Shoji Sadao, Edwin Schlossberg, Kenneth Snelson, Robert Anton Wilson, Stewart Brand, and Jason McLennan.
An allotrope of carbon, fullerene, and a particular molecule of that allotrope, C60 (buckminsterfullerene or buckyball), have been named after him. The buckminsterfullerene molecule, which consists of 60 carbon atoms, very closely resembles a spherical version of Fuller's geodesic dome. The 1996 Nobel Prize in Chemistry was given to Kroto, Curl, and Smalley for their discovery of the fullerenes.
On July 12, 2004, the United States Postal Service released a new commemorative stamp honoring R. Buckminster Fuller on the 50th anniversary of his patent for the geodesic dome and on the occasion of his 109th birthday. The stamp's design replicated the January 10, 1964, cover of Time magazine.
Fuller was the subject of two documentary films: The World of Buckminster Fuller (1971) and Buckminster Fuller: Thinking Out Loud (1996). Additionally, filmmaker Sam Green and the band Yo La Tengo collaborated on a 2012 "live documentary" about Fuller, The Love Song of R. Buckminster Fuller.
In June 2008, the Whitney Museum of American Art presented "Buckminster Fuller: Starting with the Universe", the most comprehensive retrospective to date of his work and ideas. The exhibition traveled to the Museum of Contemporary Art, Chicago in 2009. It presented a combination of models, sketches, and other artifacts, representing six decades of the artist's integrated approach to housing, transportation, communication, and cartography. It also featured the extensive connections with Chicago from his years spent living, teaching, and working in the city.
In 2009, a number of US companies decided to repackage spherical magnets and sell them as toys. One company, Maxfield & Oberton, told The New York Times that it saw the product on YouTube and decided to repackage the magnets as "Buckyballs", because they could self-assemble and hold together in shapes reminiscent of the Fuller-inspired buckyballs. The Buckyballs toy launched at the New York International Gift Fair in 2009 and sold in the hundreds of thousands, but by 2010 it began to experience toy-safety problems, and the company was forced to recall the packages that were labelled as toys.
In 2012, the San Francisco Museum of Modern Art hosted "The Utopian Impulse" – a show about Buckminster Fuller's influence in the Bay Area. Featured were concepts, inventions and designs for creating "free energy" from natural forces, and for sequestering carbon from the atmosphere. The show ran January through July.
Fuller is quoted in "The Tower of Babble" from the musical Godspell: "Man is a complex of patterns and processes."
Belgian rock band dEUS released the song "The Architect", inspired by Fuller, on their 2008 album Vantage Point.
Indie band Driftless Pony Club titled their 2011 album Buckminster after Fuller. Each of the album's songs is based upon his life and works.
The design podcast 99% Invisible (2010–present) takes its title from a Fuller quote: "Ninety-nine percent of who you are is invisible and untouchable."
Fuller is briefly mentioned in X-Men: Days of Future Past (2014) when Kitty Pryde is giving a lecture to a group of students regarding utopian architecture.
Robert Kiyosaki's 2015 book Second Chance concerns Kiyosaki's interactions with Fuller as well as Fuller's unusual final book, Grunch of Giants.
In The House of Tomorrow (2017), based on Peter Bognanni's 2010 novel of the same name, Ellen Burstyn's character is obsessed with Fuller and provides retro-futurist tours of her geodesic home that include videos of Fuller sailing and talking with Burstyn, who had in real life befriended Fuller.
(from the Table of Contents of Inventions: The Patented Works of R. Buckminster Fuller (1983) ISBN 0-312-43477-4) | [
Bill Watterson

William Boyd Watterson II (born July 5, 1958) is an American cartoonist who authored the comic strip Calvin and Hobbes, which was syndicated from 1985 to 1995. Watterson concluded Calvin and Hobbes with a short statement to newspaper editors and his readers that he felt he had achieved all he could in the medium. He is known for his negative views on comic syndication and licensing, his efforts to expand and elevate the newspaper comic as an art form, and his move back into private life after Calvin and Hobbes ended. Watterson was born in Washington, D.C., and grew up in Chagrin Falls, Ohio; the suburban Midwestern setting was part of the inspiration for Calvin and Hobbes.
Bill Watterson was born on July 5, 1958, in Washington, D.C., to Kathryn Watterson (1933-2022) and James Godfrey Watterson (1932-2016). His father worked as a patent attorney. In 1965, six-year-old Watterson and his family moved to Chagrin Falls, Ohio, a suburb of Cleveland. Watterson has a younger brother, Thomas Watterson.
Watterson drew his first cartoon at age eight, and spent much time alone as a child, drawing and cartooning. This continued through his school years, during which he discovered comic strips such as Pogo, Krazy Kat, and Charles Schulz's Peanuts, which inspired his desire to become a professional cartoonist. On one occasion, when he was in fourth grade, he wrote a letter to Charles Schulz, who responded, much to Watterson's surprise; the reply made a big impression on him at the time. His parents encouraged him in his artistic pursuits. Later, they recalled him as a "conservative child" — imaginative, but "not in a fantasy way", and certainly nothing like the character of Calvin that he later created. Watterson found avenues for his cartooning talents throughout primary and secondary school, creating high school-themed superhero comics with his friends and contributing cartoons and art to the school newspaper and yearbook.
After high school, Watterson attended Kenyon College, where he majored in political science. He had already decided on a career in cartooning, but he felt studying political science would help him move into editorial cartooning. He continued to develop his art skills, and during his sophomore year he painted Michelangelo's Creation of Adam on the ceiling of his dormitory room. He also contributed cartoons to the college newspaper, some of which included the original "Spaceman Spiff" cartoons. Watterson graduated from Kenyon in 1980 with a Bachelor of Arts degree.
Later, when Watterson was creating names for the characters in his comic strip, he decided on Calvin (after the Protestant reformer John Calvin) and Hobbes (after the social philosopher Thomas Hobbes), allegedly as a "tip of the hat" to Kenyon's political science department. In The Complete Calvin and Hobbes, Watterson stated that Calvin was named for "a 16th-century theologian who believed in predestination," and Hobbes for "a 17th-century philosopher with a dim view of human nature."
Watterson was inspired by the work of Jim Borgman, political cartoonist for The Cincinnati Enquirer and a 1976 graduate of Kenyon College, and decided to try to follow the same career path; Borgman in turn offered support and encouragement to the aspiring artist. After graduating, Watterson was hired on a trial basis at the Cincinnati Post, a competitor of the Enquirer. He quickly discovered that the job was full of unexpected challenges that prevented him from performing his duties to the standards set for him, not the least of which was his unfamiliarity with the Cincinnati political scene: he had never lived in or near the city, having grown up in the Cleveland area and attended college in central Ohio. The Post fired Watterson before his contract was up.
He then joined a small advertising agency and worked there for four years as a designer, creating grocery advertisements while also working on his own projects, including development of his own cartoon strip and contributions to Target: The Political Cartoon Quarterly.
As a freelance artist, Watterson has produced artwork for various merchandise, including album art for his brother's band, calendars, clothing graphics, educational books, magazine covers, posters, and postcards.
Watterson has said that he works for personal fulfillment. As he told the graduating class of 1990 at Kenyon College, "It's surprising how hard we'll work when the work is done just for ourselves." Calvin and Hobbes was first published on November 18, 1985. In The Calvin and Hobbes Tenth Anniversary Book, he wrote that his influences included Charles Schulz's Peanuts, Walt Kelly's Pogo, and George Herriman's Krazy Kat. Watterson wrote the introduction to the first volume of The Komplete Kolor Krazy Kat. Watterson's style also reflects the influence of Winsor McCay's Little Nemo in Slumberland.
Like many artists, Watterson incorporated elements of his life, interests, beliefs, and values into his work—for example, his hobby as a cyclist, memories of his own father's speeches about "building character", and his views on merchandising and corporations. Watterson's cat Sprite inspired much of the personality and physical features of Hobbes.
Watterson spent much of his career trying to change the climate of newspaper comics. He believed that the artistic value of comics was being undermined, and that the space they occupied in newspapers was continually shrinking at the arbitrary whims of shortsighted publishers. Furthermore, he opined that art should not be judged by the medium for which it is created (i.e., there is no "high" art or "low" art—just art).
Watterson wrote forewords for FoxTrot and For Better or For Worse.
For years, Watterson battled against pressure from publishers to merchandise his work, something he felt would cheapen his comic by compromising the act of creating or reading it.
He refused to merchandise his creations on the grounds that displaying Calvin and Hobbes images on commercially sold mugs, stickers, and T-shirts would devalue the characters and their personalities. Watterson said that Universal kept putting pressure on him and that he had signed his contract without fully perusing it because, as a new artist, he was happy to find a syndicate willing to give him a chance (two other syndicates had previously turned him down). He added that the contract was so one-sided that, if Universal really wanted to, they could license his characters against his will, and could even fire him and continue Calvin and Hobbes with a new artist. Watterson's position eventually won out and he was able to renegotiate his contract so that he would receive all rights to his work, but later added that the licensing fight exhausted him and contributed to the need for a nine-month sabbatical in 1991.
Despite Watterson's efforts, many unofficial knockoffs have been found, including items that depict Calvin and Hobbes consuming alcohol or Calvin urinating on a logo. Watterson has said, "Only thieves and vandals have made money on Calvin and Hobbes merchandise."
Watterson was critical of the prevailing format for the Sunday comic strip that was in place when he began drawing (and remained so, to varying degrees). The typical layout consists of three rows with eight total squares, which take up half a page if published at their normal size. (In this context, half-page is an absolute size – approximately half a nominal 8½-by-11-inch (22 cm × 28 cm) page – and not related to the actual page size on which a cartoon might eventually be printed for distribution.) Newspapers with limited space for their Sunday features often reduce the size of the strip; one of the more common ways is to cut out the top two panels, which Watterson believed forced him to waste that space on throwaway jokes that did not always fit the strip. While he was set to return from his first sabbatical (a second took place during 1994), Watterson discussed with his syndicate a new format for Calvin and Hobbes that would enable him to use his space more efficiently and would almost require the papers to publish it as a half-page. Universal agreed that they would sell the strip as the half-page and nothing else, which garnered anger from papers and criticism for Watterson from both editors and some of his fellow cartoonists (whom he described as "unnecessarily hot-tempered"). Eventually, Universal compromised and agreed to offer papers a choice between the full half-page or a reduced-sized version to alleviate concerns about the size issue. Watterson conceded that this caused him to lose space in many papers, but he said that, in the end, it was a benefit because he felt that he was giving the papers' readers a better strip for their money, and editors were free not to run Calvin and Hobbes at their own risk. He added that he was not going to apologize for drawing a popular feature.
On November 9, 1995, Watterson announced the end of Calvin and Hobbes with the following letter to newspaper editors:
Dear Reader: I will be stopping Calvin and Hobbes at the end of the year. This was not a recent or an easy decision, and I leave with some sadness. My interests have shifted, however, and I believe I've done what I can do within the constraints of daily deadlines and small panels. I am eager to work at a more thoughtful pace, with fewer artistic compromises. I have not yet decided on future projects, but my relationship with Universal Press Syndicate will continue. That so many newspapers would carry Calvin and Hobbes is an honor I'll long be proud of, and I've greatly appreciated your support and indulgence over the last decade. Drawing this comic strip has been a privilege and a pleasure, and I thank you for giving me the opportunity. Sincerely,
Bill Watterson
The last strip of Calvin and Hobbes was published on December 31, 1995.
In the years since Calvin and Hobbes ended, many attempts have been made to contact Watterson. Both The Plain Dealer and the Cleveland Scene sent reporters, in 1998 and 2003 respectively, but neither was able to make contact with the media-shy Watterson. Since 1995, Watterson has taken up painting, at one point drawing landscapes of the woods with his father. He has kept away from the public eye and shown no interest in resuming the strip, creating new works based on the strip's characters, or embarking on new commercial projects, though he has published several Calvin and Hobbes "treasury collection" anthologies. He does not sign autographs or license his characters. Watterson was once known to sneak autographed copies of his books onto the shelves of the Fireside Bookshop, a family-owned bookstore in his hometown of Chagrin Falls, Ohio. He ended this practice after discovering that some of the autographed books were being sold online for high prices.
Watterson rarely gives interviews or makes public appearances. His lengthiest interviews include the cover story in The Comics Journal No. 127 in February 1989, an interview that appeared in a 1987 issue of Honk Magazine, and one in a 2015 Watterson exhibition catalogue.
On December 21, 1999, a short piece was published in the Los Angeles Times, written by Watterson to mark the forthcoming retirement of iconic Peanuts creator Charles Schulz.
Circa 2003, Gene Weingarten of The Washington Post sent Watterson the first edition of the Barnaby book as an incentive, hoping to land an interview. Weingarten passed the book to Watterson's parents, along with a message, and declared that he would wait in his hotel for as long as it took Watterson to contact him. Watterson's editor Lee Salem called the next day to tell Weingarten that the cartoonist would not be coming.
In 2004, Watterson and his wife Melissa bought a home in the Cleveland suburb of Cleveland Heights, Ohio. In 2005, they completed the move from their home in Chagrin Falls to their new residence.
In October 2005, Watterson answered 15 questions submitted by readers. In October 2007, he wrote a review of Schulz and Peanuts, a biography of Charles Schulz, in The Wall Street Journal.
In 2008, he provided a foreword for the first book collection of Richard Thompson's Cul de Sac comic strip. In April 2011, a representative for Andrews McMeel received a package from a "William Watterson in Cleveland Heights, Ohio" containing a 6-by-8-inch (15 cm × 20 cm) oil-on-board painting of the Cul de Sac character Petey Otterloop, done by Watterson for the Team Cul de Sac fundraising project for Parkinson's disease in honor of Richard Thompson, who had been diagnosed with the disease in 2009. Watterson's syndicate (which ultimately became Universal Uclick) said that the painting was the first new artwork of his that the syndicate had seen since Calvin and Hobbes ended in 1995.
In October 2009, Nevin Martell published a book called Looking for Calvin and Hobbes, which chronicled the author's attempts to secure an interview with Watterson. In his search, Martell interviewed friends, co-workers and family, but never got to meet the artist himself.
In early 2010, Watterson was interviewed by The Plain Dealer on the 15th anniversary of the end of Calvin and Hobbes. Explaining his decision to discontinue the strip, he said,
This isn't as hard to understand as people try to make it. By the end of ten years, I'd said pretty much everything I had come there to say. It's always better to leave the party early. If I had rolled along with the strip's popularity and repeated myself for another five, ten, or twenty years, the people now "grieving" for Calvin and Hobbes would be wishing me dead and cursing newspapers for running tedious, ancient strips like mine instead of acquiring fresher, livelier talent. And I'd be agreeing with them. I think some of the reason Calvin and Hobbes still finds an audience today is because I chose not to run the wheels off it. I've never regretted stopping when I did.
In October 2013, the magazine Mental Floss published an interview with Watterson, only the second since the strip ended. Watterson again confirmed that he would not be revisiting Calvin and Hobbes, and that he was satisfied with his decision. He also gave his opinion on the changes in the comic-strip industry and where it would be headed in the future:
Personally, I like paper and ink better than glowing pixels, but to each his own. Obviously the role of comics is changing very fast. On the one hand, I don't think comics have ever been more widely accepted or taken as seriously as they are now. On the other hand, the mass media is disintegrating, and audiences are atomizing. I suspect comics will have less widespread cultural impact and make a lot less money. I'm old enough to find all this unsettling, but the world moves on. All the new media will inevitably change the look, function, and maybe even the purpose of comics, but comics are vibrant and versatile, so I think they'll continue to find relevance one way or another. But they definitely won't be the same as what I grew up with.
In 2013, the documentary Dear Mr. Watterson, exploring the cultural impact of Calvin and Hobbes, was released. Watterson himself did not appear in the film.
On February 26, 2014, Watterson published his first cartoon since the end of Calvin and Hobbes: a poster for the documentary Stripped.
In 2014, Watterson co-authored The Art of Richard Thompson with Washington Post cartoonist Nick Galifianakis and David Apatoff.
In June 2014, three strips of Pearls Before Swine (published June 4, June 5, and June 6, 2014) featured guest illustrations by Watterson after mutual friend Nick Galifianakis connected him and cartoonist Stephan Pastis, who communicated via e-mail. Pastis likened this unexpected collaboration to getting "a glimpse of Bigfoot". "I thought maybe Stephan and I could do this goofy collaboration and then use the result to raise some money for Parkinson's research in honor of Richard Thompson. It seemed like a perfect convergence", Watterson told The Washington Post. The day that Stephan Pastis returned to his own strip, he paid tribute to Watterson by alluding to the final strip of Calvin and Hobbes from December 31, 1995.
On November 5, 2014, a poster drawn by Watterson was unveiled for the 2015 Angoulême International Comics Festival, which had awarded him its Grand Prix in 2014.
On April 1, 2016, for April Fools' Day, Berkeley Breathed posted on Facebook that Watterson had signed "the franchise over to my 'administration'". He then posted a comic featuring Calvin, Hobbes, and Opus. The comic was signed by Watterson, though the degree of his involvement was speculative. Breathed posted another "Calvin County" strip featuring Calvin and Hobbes, also "signed" by Watterson, on April 1, 2017, along with a fake New York Times story ostensibly detailing the "merger" of the two strips. Breathed included Hobbes in a November 27, 2017, strip as a stand-in for the character Steve Dallas, and Hobbes returned in the June 9, 11, and 12, 2021, strips as a stand-in for Bill the Cat.
In 2001, the Billy Ireland Cartoon Library & Museum at Ohio State University mounted an exhibition of Watterson's Sunday strips. He chose thirty-six of his favorites, displaying each with both the original drawing and the colored finished product, most pieces featuring personal annotations. Watterson also wrote an accompanying essay that served as the foreword for the exhibit, "Calvin and Hobbes: Sunday Pages 1985–1995", which opened on September 10, 2001, and closed in January 2002. The accompanying published catalog had the same title.
From March 22 to August 3, 2014, Watterson exhibited again at the Billy Ireland Cartoon Library & Museum at Ohio State University. In conjunction with this exhibition, Watterson also participated in an interview with the school. An exhibition catalog named Exploring Calvin and Hobbes was released with the exhibit. The book contained a lengthy interview with Bill Watterson, conducted by Jenny Robb, the curator of the museum.
Watterson released his first published work in 28 years on October 10, 2023, called The Mysteries. It was an illustrated "fable for grown ups" about "what lies beyond human understanding." The work was a collaboration with the illustrator and caricaturist John Kascht.
Watterson was awarded the National Cartoonists Society's Reuben Award in both 1986 and 1988. Watterson's second Reuben win made him the youngest cartoonist to be so honored, and only the sixth person to win twice, following Milton Caniff, Charles Schulz, Dik Browne, Chester Gould, and Jeff MacNelly. (Gary Larson is the only cartoonist to win a second Reuben since Watterson.) In 2014, Watterson was awarded the Grand Prix at the Angoulême International Comics Festival for his body of work, becoming just the fourth non-European cartoonist to be so honored in the first 41 years of the event.
Treasury collections
Black

Black is a color that results from the absence or complete absorption of visible light. It is an achromatic color, without hue, like white and grey. It is often used symbolically or figuratively to represent darkness. Black and white have often been used to describe opposites such as good and evil, the Dark Ages versus the Age of Enlightenment, and night versus day. Since the Middle Ages, black has been the symbolic color of solemnity and authority, and for this reason it is still commonly worn by judges and magistrates.
Black was one of the first colors used by artists in Neolithic cave paintings. It was used in ancient Egypt and Greece as the color of the underworld. In the Roman Empire, it became the color of mourning, and over the centuries it was frequently associated with death, evil, witches, and magic. In the 14th century, it was worn by royalty, clergy, judges, and government officials in much of Europe. It became the color worn by English romantic poets, businessmen and statesmen in the 19th century, and a high fashion color in the 20th century. According to surveys in Europe and North America, it is the color most commonly associated with mourning, the end, secrets, magic, force, violence, fear, evil, and elegance.
Black is the most common ink color used for printing books, newspapers and documents, as it provides the highest contrast with white paper and thus is the easiest color to read. Similarly, black text on a white screen is the most common format used on computer screens. As of September 2019, the darkest material is made by MIT engineers from vertically aligned carbon nanotubes.
The word black comes from Old English blæc ("black, dark", also "ink"), from Proto-Germanic *blakkaz ("burned"), from Proto-Indo-European *bhleg- ("to burn, gleam, shine, flash"), from base *bhel- ("to shine"), related to Old Saxon blak ("ink"), Old High German blach ("black"), Old Norse blakkr ("dark"), Dutch blaken ("to burn"), and Swedish bläck ("ink"). More distant cognates include Latin flagrare ("to blaze, glow, burn") and Ancient Greek phlegein ("to burn, scorch"). The Ancient Greeks sometimes used the same word to name different colors if they had the same intensity: kuanos could mean both dark blue and black. The Ancient Romans had two words for black: ater was a flat, dull black, while niger was a brilliant, saturated black. Ater has vanished from the vocabulary, but niger was the source of the country name Nigeria, the English word Negro, and the word for "black" in most modern Romance languages (French: noir; Spanish and Portuguese: negro; Italian: nero; Romanian: negru).
Old High German also had two words for black: swartz for dull black and blach for a luminous black. These are paralleled in Middle English by the terms swart for dull black and blaek for luminous black. Swart still survives as the word swarthy, while blaek became the modern English black. The former is cognate with the words used for black in most modern Germanic languages aside from English (German: schwarz, Dutch: zwart, Swedish: svart, Danish: sort, Icelandic: svartr). In heraldry, the word used for the black color is sable, named for the black fur of the sable, a species of marten.
Black was one of the first colors used in art. The Lascaux Cave in France contains drawings of bulls and other animals drawn by paleolithic artists between 18,000 and 17,000 years ago. They began by using charcoal, and later achieved darker pigments by burning bones or grinding a powder of manganese oxide.
For the ancient Egyptians, black had positive associations; being the color of fertility and the rich black soil flooded by the Nile. It was the color of Anubis, the god of the underworld, who took the form of a black jackal, and offered protection against evil to the dead. To ancient Greeks, black represented the underworld, separated from the living by the river Acheron, whose water ran black. Those who had committed the worst sins were sent to Tartarus, the deepest and darkest level. In the center was the palace of Hades, the king of the underworld, where he was seated upon a black ebony throne. Black was one of the most important colors used by ancient Greek artists. In the 6th century BC, they began making black-figure pottery and later red-figure pottery, using a highly original technique. In black-figure pottery, the artist would paint figures with a glossy clay slip on a red clay pot. When the pot was fired, the figures painted with the slip would turn black, against a red background. Later they reversed the process, painting the spaces between the figures with slip. This created magnificent red figures against a glossy black background.
In the social hierarchy of ancient Rome, purple was the color reserved for the Emperor; red was the color worn by soldiers (red cloaks for the officers, red tunics for the soldiers); white was the color worn by the priests; and black was worn by craftsmen and artisans. The black they wore was not deep and rich; the vegetable dyes used to make black were neither solid nor lasting, so the blacks often faded to gray or brown.
In Latin, the word for black, ater, and the verb for darkening, atrare, were associated with cruelty, brutality and evil. They were the root of the English words "atrocious" and "atrocity". Black was also the Roman color of death and mourning. In the 2nd century BC, Roman magistrates began to wear a dark toga, called a toga pulla, to funeral ceremonies. Later, under the Empire, the family of the deceased also wore dark colors for a long period; then, after a banquet to mark the end of mourning, they exchanged the black for a white toga. In Roman poetry, death was called the hora nigra, the black hour.
The German and Scandinavian peoples worshipped their own goddess of the night, Nótt, who crossed the sky in a chariot drawn by a black horse. They also feared Hel, the goddess of the kingdom of the dead, whose skin was black on one side and red on the other. They also held the raven sacred: they believed that Odin, the king of the Nordic pantheon, had two black ravens, Huginn and Muninn, who served as his agents, traveling the world for him, watching and listening.
In the early Middle Ages, black was commonly associated with darkness and evil. In Medieval paintings, the devil was usually depicted as having human form, but with wings and black skin or hair.
In fashion, black did not have the prestige of red, the color of the nobility. It was worn by Benedictine monks as a sign of humility and penitence. In the 12th century, a famous theological dispute broke out between the Cistercian monks, who wore white, and the Benedictines, who wore black. A Benedictine abbot, Pierre the Venerable, accused the Cistercians of excessive pride in wearing white instead of black. Saint Bernard of Clairvaux, the founder of the Cistercians, responded that black was the color of the devil, hell, "of death and sin", while white represented "purity, innocence and all the virtues".
Black symbolized both power and secrecy in the medieval world. The emblem of the Holy Roman Empire of Germany was a black eagle. The black knight in the poetry of the Middle Ages was an enigmatic figure, hiding his identity, usually wrapped in secrecy.
Black ink, invented in China, was traditionally used in the Middle Ages for writing, for the simple reason that black was the darkest color and therefore provided the greatest contrast with white paper or parchment, making it the easiest color to read. It became even more important in the 15th century, with the invention of printing. A new kind of ink, printer's ink, was created out of soot, turpentine and walnut oil. The new ink made it possible to spread ideas to a mass audience through printed books, and to popularize art through black and white engravings and prints. Because of its contrast and clarity, black ink on white paper continued to be the standard for printing books, newspapers and documents; and for the same reason black text on a white background is the most common format used on computer screens.
In the early Middle Ages, princes, nobles and the wealthy usually wore bright colors, particularly scarlet cloaks from Italy. Black was rarely part of the wardrobe of a noble family. The one exception was the fur of the sable. This glossy black fur, from an animal of the marten family, was the finest and most expensive fur in Europe. It was imported from Russia and Poland and used to trim the robes and gowns of royalty.
In the 14th century, the status of black began to change. First, high-quality black dyes began to arrive on the market, allowing garments of a deep, rich black. Second, magistrates and government officials began to wear black robes as a sign of the importance and seriousness of their positions. A third reason was the passage of sumptuary laws in some parts of Europe which prohibited the wearing of costly clothes and certain colors by anyone except members of the nobility. The famous bright scarlet cloaks from Venice and the peacock blue fabrics from Florence were restricted to the nobility. The wealthy bankers and merchants of northern Italy responded by changing to black robes and gowns, made with the most expensive fabrics.
The change to the more austere but elegant black was quickly picked up by the kings and nobility. It began in northern Italy, where the Duke of Milan and the Count of Savoy and the rulers of Mantua, Ferrara, Rimini and Urbino began to dress in black. It then spread to France, led by Louis I, Duke of Orleans, younger brother of King Charles VI of France. It moved to England at the end of the reign of King Richard II (1377–1399), where all the court began to wear black. In 1419–20, black became the color of the powerful Duke of Burgundy, Philip the Good. It moved to Spain, where it became the color of the Spanish Habsburgs, of Charles V and of his son, Philip II of Spain (1527–1598). European rulers saw it as the color of power, dignity, humility and temperance. By the end of the 16th century, it was the color worn by almost all the monarchs of Europe and their courts.
While black was the color worn by the Catholic rulers of Europe, it was also the emblematic color of the Protestant Reformation in Europe and the Puritans in England and America. John Calvin, Philip Melanchthon and other Protestant theologians denounced the richly colored and decorated interiors of Roman Catholic churches. They saw the color red, worn by the Pope and his Cardinals, as the color of luxury, sin, and human folly. In some northern European cities, mobs attacked churches and cathedrals, smashed the stained glass windows and defaced the statues and decoration. In Protestant doctrine, clothing was required to be sober, simple and discreet. Bright colors were banished and replaced by blacks, browns and grays; women and children were recommended to wear white.
In the Protestant Netherlands, Rembrandt used this sober new palette of blacks and browns to create portraits whose faces emerged from the shadows expressing the deepest human emotions. The Catholic painters of the Counter-Reformation, like Rubens, went in the opposite direction; they filled their paintings with bright and rich colors. The new Baroque churches of the Counter-Reformation were usually shining white inside and filled with statues, frescoes, marble, gold and colorful paintings, to appeal to the public. But European Catholics of all classes, like Protestants, eventually adopted a sober wardrobe that was mostly black, brown and gray.
In the second part of the 17th century, Europe and America experienced an epidemic of fear of witchcraft. People widely believed that the devil appeared at midnight in a ceremony called a Black Mass or black sabbath, usually in the form of a black animal, often a goat, a dog, a wolf, a bear, a deer or a rooster, accompanied by their familiar spirits, black cats, serpents and other black creatures. This was the origin of the widespread superstition about black cats and other black animals. In medieval Flanders, in a ceremony called Kattenstoet, black cats were thrown from the belfry of the Cloth Hall of Ypres to ward off witchcraft.
Witch trials were common in both Europe and America during this period. During the notorious Salem witch trials in New England in 1692–93, one of those on trial was accused of being able to turn into a "black thing with a blue cap," and others of having familiars in the form of a black dog, a black cat and a black bird. Nineteen women and men were hanged as witches.
In the 18th century, during the European Age of Enlightenment, black receded as a fashion color. Paris became the fashion capital, and pastels, blues, greens, yellow and white became the colors of the nobility and upper classes. But after the French Revolution, black again became the dominant color.
Black was the color of the industrial revolution, largely fueled by coal, and later by oil. Thanks to coal smoke, the buildings of the large cities of Europe and America gradually turned black. By 1846 the industrial area of the West Midlands of England was commonly called "the Black Country". Charles Dickens and other writers described the dark streets and smoky skies of London, which were vividly illustrated in the engravings of French artist Gustave Doré.
A different kind of black was an important part of the romantic movement in literature. Black was the color of melancholy, the dominant theme of romanticism. The novels of the period were filled with castles, ruins, dungeons, storms, and meetings at midnight. The leading poets of the movement were usually portrayed dressed in black, typically with a white shirt and open collar and a scarf thrown carelessly over the shoulder; Percy Bysshe Shelley and Lord Byron helped create the enduring stereotype of the romantic poet.
The invention of inexpensive synthetic black dyes and the industrialization of the textile industry meant that high-quality black clothes were available for the first time to the general population. In the 19th century black gradually became the most popular color of business dress of the upper and middle classes in England, the Continent, and America.
Black dominated literature and fashion in the 19th century, and played a large role in painting. James McNeill Whistler made the color the subject of his most famous painting, Arrangement in Grey and Black No. 1 (1871), better known as Whistler's Mother.
Some 19th-century French painters had a low opinion of black: "Reject black," Paul Gauguin said, "and that mix of black and white they call gray. Nothing is black, nothing is gray." But Édouard Manet used blacks for their strength and dramatic effect. Manet's portrait of painter Berthe Morisot was a study in black which perfectly captured her spirit of independence. The black gave the painting power and immediacy; he even changed her eyes, which were green, to black to strengthen the effect. Henri Matisse quoted the French impressionist Pissarro telling him, "Manet is stronger than us all – he made light with black."
Pierre-Auguste Renoir used luminous blacks, especially in his portraits. When someone told him that black was not a color, Renoir replied: "What makes you think that? Black is the queen of colors. I always detested Prussian blue. I tried to replace black with a mixture of red and blue, I tried using cobalt blue or ultramarine, but I always came back to ivory black."
Vincent van Gogh used black lines to outline many of the objects in his paintings, such as the bed in the famous painting of his bedroom, making them stand apart. His canvas of black crows over a cornfield, painted shortly before he died, was particularly agitated and haunting. In the late 19th century, black also became the color of anarchism. (See the section political movements.)
In the 20th century, black was the color of Italian and German fascism. (See the section political movements.)
In art, black regained some of the territory that it had lost during the 19th century. The Russian painter Kasimir Malevich, a member of the Suprematist movement, created the Black Square in 1915, widely considered the first purely abstract painting. He wrote, "The painted work is no longer simply the imitation of reality, but is this very reality ... It is not a demonstration of ability, but the materialization of an idea."
Black was also appreciated by Henri Matisse. "When I didn't know what color to put down, I put down black," he said in 1945. "Black is a force: I used black as ballast to simplify the construction ... Since the impressionists it seems to have made continuous progress, taking a more and more important part in color orchestration, comparable to that of the double bass as a solo instrument."
In the 1950s, black came to be a symbol of individuality and of intellectual and social rebellion, the color of those who did not accept established norms and values. In Paris, it was worn by Left Bank intellectuals and performers such as Juliette Gréco, and by some members of the Beat Movement in New York and San Francisco. Black leather jackets were worn by motorcycle gangs such as the Hells Angels and by street gangs on the fringes of society in the United States. Black as a color of rebellion was celebrated in such films as The Wild One, with Marlon Brando. By the end of the 20th century, black was the emblematic color of the punk subculture and punk fashion, and of the goth subculture. Goth fashion, which emerged in England in the 1980s, was inspired by Victorian-era mourning dress.
In men's fashion, black gradually ceded its dominance to navy blue, particularly in business suits. Black evening dress and formal dress in general were worn less and less. In 1960, John F. Kennedy was the last American President to be inaugurated wearing formal dress; President Lyndon Johnson and all his successors were inaugurated wearing business suits.
Women's fashion was revolutionized and simplified in 1926 by the French designer Coco Chanel, who published a drawing of a simple black dress in Vogue magazine. She famously said, "A woman needs just three things; a black dress, a black sweater, and, on her arm, a man she loves." French designer Jean Patou followed suit by creating a black collection in 1929. Other designers contributed to the trend of the little black dress. The Italian designer Gianni Versace said, "Black is the quintessence of simplicity and elegance," and French designer Yves Saint Laurent said, "Black is the liaison which connects art and fashion." One of the most famous black dresses of the century was designed by Hubert de Givenchy and was worn by Audrey Hepburn in the 1961 film Breakfast at Tiffany's.
The American civil rights movement in the 1950s was a struggle for the political equality of African Americans. It developed into the Black Power movement, which lasted from the early 1960s until the late 1980s, and later the Black Lives Matter movement of the 2010s and 2020s. It also popularized the slogan "Black is Beautiful".
In the visible spectrum, black is the result of the absorption of all light wavelengths. Black can be defined as the visual impression (or color) experienced when no visible light reaches the eye. Pigments or dyes that absorb light rather than reflect it back to the eye look black. A black pigment can, however, result from a combination of several pigments that collectively absorb all wavelengths of visible light. If appropriate proportions of three primary pigments are mixed, the result reflects so little light as to be called black. This provides two superficially opposite but actually complementary descriptions of black. Black is the color produced by the absorption of all wavelengths of visible light, or an exhaustive combination of multiple colors of pigment.
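The pigment account above can be made concrete with a toy model. The following sketch (Python; the three-band reflectance numbers are illustrative assumptions, not measured values) treats each primary pigment as reflecting some wavelength bands and absorbing the rest, and shows how stacking the three primaries multiplies reflectance down toward black.

```python
# Toy model of subtractive color mixing: a surface's reflectance in each
# of three broad bands (red, green, blue) is multiplied by the reflectance
# of every pigment layered onto it. The band values are illustrative.

def mix(*pigments):
    """Combine pigments subtractively: per-band reflectances multiply."""
    result = [1.0, 1.0, 1.0]  # start from white, reflecting all bands fully
    for p in pigments:
        result = [r * q for r, q in zip(result, p)]
    return result

# Idealized primary pigments as (red, green, blue) reflectance triples.
cyan    = (0.1, 0.9, 0.9)  # absorbs mostly red
magenta = (0.9, 0.1, 0.9)  # absorbs mostly green
yellow  = (0.9, 0.9, 0.1)  # absorbs mostly blue

print(mix(cyan, magenta))          # roughly [0.09, 0.09, 0.81] -- a dark blue
print(mix(cyan, magenta, yellow))  # roughly [0.08, 0.08, 0.08] -- near black
```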
In physics, a black body is a perfect absorber of light, but, by a thermodynamic rule, it is also the best emitter. Thus, the best radiative cooling, out of sunlight, is achieved with black paint, though it is important that the paint be black (a nearly perfect absorber) in the infrared as well. In elementary science, ultraviolet light is called "black light" because, while itself unseen, it causes many minerals and other substances to fluoresce.
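As a quick numerical illustration of the absorber-emitter rule, the Stefan–Boltzmann law gives the power radiated by a surface as P = εσAT⁴, where ε is the emissivity (1 for an ideal black body). The panel area and temperature in this sketch are arbitrary example values.

```python
# Stefan-Boltzmann law: radiated power P = emissivity * sigma * area * T^4.
# A perfect black body has emissivity 1, so it radiates (cools) the fastest.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiated_power(emissivity, area_m2, temp_k):
    return emissivity * SIGMA * area_m2 * temp_k ** 4

black = radiated_power(1.0, 1.0, 300.0)  # ideal black panel, 1 m^2 at 300 K
grey  = radiated_power(0.5, 1.0, 300.0)  # duller surface, same size and heat

print(f"black: {black:.0f} W, grey: {grey:.0f} W")  # ~459 W vs ~230 W
```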
Absorption of light is contrasted with transmission, reflection and diffusion, where the light is only redirected, causing objects to appear transparent, reflective or white respectively. A material is said to be black if most incoming light is absorbed equally across the visible spectrum. Light (electromagnetic radiation in the visible spectrum) interacts with the atoms and molecules of the material, and the energy of the light is converted into other forms of energy, usually heat. This means that black surfaces can act as thermal collectors, absorbing light and generating heat (see Solar thermal collector).
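A back-of-the-envelope sketch of the thermal-collector point: the power a surface gains from sunlight is roughly absorptivity × irradiance × area. The irradiance figure (about 1000 W/m² in full midday sun) and the absorptivity values below are illustrative assumptions.

```python
# Solar heat gain of a flat surface: absorbed power scales linearly with
# the surface's absorptivity, so a black collector gathers far more heat.

IRRADIANCE = 1000.0  # W/m^2, rough clear-sky solar irradiance at ground level

def absorbed_power(absorptivity, area_m2):
    return absorptivity * IRRADIANCE * area_m2

print(absorbed_power(0.95, 2.0))  # matte black collector, 2 m^2 -> 1900.0 W
print(absorbed_power(0.30, 2.0))  # light-colored surface, 2 m^2 ->  600.0 W
```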
As of September 2019, the darkest known material is made from vertically aligned carbon nanotubes. The material was grown by MIT engineers and was reported to absorb 99.995% of incoming light. This surpasses previous record holders, including Vantablack, which has a peak absorption of 99.965% in the visible spectrum.
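The quoted absorption rates are easier to compare as reflected fractions, as in this small worked example:

```python
# Convert the reported absorption rates into reflected fractions and compare.
mit_cnt    = 0.99995  # reported absorption of the MIT nanotube material
vantablack = 0.99965  # Vantablack's peak absorption in the visible spectrum

reflected_mit = 1 - mit_cnt     # ~0.005% of incoming light bounces back
reflected_vb  = 1 - vantablack  # ~0.035%

print(reflected_vb / reflected_mit)  # ~7: Vantablack reflects about 7x more
```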
The earliest pigments used by Neolithic man were charcoal, red ocher and yellow ocher. The black lines of cave art were drawn with the tips of burnt torches made of resinous wood. Different charcoal pigments were made by burning different woods and animal products, each of which produced a different tone. The charcoal would be ground and then mixed with animal fat to make the pigment.
The 15th-century painter Cennino Cennini described how this pigment was made during the Renaissance in his famous handbook for artists: "...there is a black which is made from the tendrils of vines. And these tendrils need to be burned. And when they have been burned, throw some water onto them and put them out and then mull them in the same way as the other black. And this is a lean and black pigment and is one of the perfect pigments that we use."
Cennini also noted that "There is another black which is made from burnt almond shells or peaches and this is a perfect, fine black." Similar fine blacks were made by burning the pits of the peach, cherry or apricot. The powdered charcoal was then mixed with gum arabic or the yellow of an egg to make a paint.
Different civilizations burned different plants to produce their charcoal pigments. The Inuit of Alaska used wood charcoal mixed with the blood of seals to paint masks and wooden objects. The Polynesians burned coconuts to produce their pigment.
Good-quality black dyes were not known until the middle of the 14th century. The most common early dyes were made from the bark, roots or fruits of different trees, usually walnuts, chestnuts, or certain oaks, but the blacks produced were often more gray, brown or bluish. The cloth had to be dyed several times to darken the color. One solution used by dyers was to add iron filings, rich in iron oxide, to the dye, which gave a deeper black. Another was to first dye the fabric dark blue, and then to dye it black.
A much richer and deeper black dye was eventually found, made from the oak apple or "gall-nut". The gall-nut is a small round tumor which grows on oaks and other varieties of trees. Gall-nuts range in size from 2 to 5 cm, and are caused by chemicals injected by the larvae of certain kinds of gall wasp in the family Cynipidae. The dye was very expensive; a great quantity of gall-nuts was needed for a very small amount of dye. The gall-nuts which made the best dye came from Poland, eastern Europe, the Near East and North Africa. Beginning in about the 14th century, dye from gall-nuts was used for the clothes of the kings and princes of Europe.
Another important source of natural black dyes from the 17th century onwards was the logwood tree, or Haematoxylum campechianum, which also produced reddish and bluish dyes. It is a species of flowering tree in the legume family, Fabaceae, that is native to southern Mexico and northern Central America. The modern nation of Belize grew from 17th century English logwood logging camps.
Since the mid-19th century, synthetic black dyes have largely replaced natural dyes. One of the important synthetic blacks is Nigrosin, a mixture of synthetic black dyes (CI 50415, Solvent black 5) made by heating a mixture of nitrobenzene, aniline and aniline hydrochloride in the presence of a copper or iron catalyst. Its main industrial uses are as a colorant for lacquers and varnishes and in marker-pen inks.
The first known inks were made by the Chinese, and date back to the 23rd century BC. They used natural plant dyes and minerals such as graphite ground with water and applied with an ink brush. Early Chinese inks similar to the modern inkstick have been found dating to about 256 BC at the end of the Warring States period. They were produced from soot, usually produced by burning pine wood, mixed with animal glue. To make ink from an inkstick, the stick is continuously ground against an inkstone with a small quantity of water to produce a dark liquid which is then applied with an ink brush. Artists and calligraphists could vary the thickness of the resulting ink by reducing or increasing the intensity and time of ink grinding. These inks produced the delicate shading and subtle or dramatic effects of Chinese brush painting.
India ink (or "Indian ink" in British English) is a black ink once widely used for writing and printing and now more commonly used for drawing, especially when inking comic books and comic strips. The technique of making it probably came from China. India ink has been in use in India since at least the 4th century BC, where it was called masi. In India, the black color of the ink came from bone char, tar, pitch and other substances.
The ancient Romans had a black writing ink they called atramentum librarium. Its name came from the Latin word atrare, which meant to make something black. (This was the same root as the English word atrocious.) It was usually made, like India ink, from soot, although one variety, called atramentum elephantinum, was made by burning the ivory of elephants.
Gall-nuts were also used for making fine black writing ink. Iron gall ink (also known as iron gall nut ink or oak gall ink) was a purple-black or brown-black ink made from iron salts and tannic acids from gall nut. It was the standard writing and drawing ink in Europe, from about the 12th century to the 19th century, and remained in use well into the 20th century.
The blackness of outer space poses the question known as Olbers' paradox. In theory, because the universe is full of stars, and is believed to be infinitely large, the light of an infinite number of stars should be enough to brilliantly light the whole sky all the time. Yet the background of space is black. The contradiction was famously analyzed in 1823 by the German astronomer Heinrich Wilhelm Matthias Olbers, who posed the question of why the night sky was black.
The currently accepted answer is that, although the universe may be infinitely large, it is not infinitely old. It is thought to be about 13.8 billion years old, so we can only see objects as far away as the distance light can travel in 13.8 billion years. Light from stars farther away has not yet reached Earth, and cannot contribute to making the sky bright. Furthermore, as the universe expands, many stars are moving away from Earth. As they recede, the wavelength of their light becomes longer, through the Doppler effect, shifting toward red or even out of the visible spectrum entirely. As a result of these two phenomena, there is not enough starlight to make space anything but black.
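Both effects can be quantified at the back-of-the-envelope level. The short Python sketch below is illustrative only: the 13.8-billion-year age and the Doppler stretching are the claims stated above, while the non-relativistic approximation, the example recession speed and the example wavelength are assumptions invented for illustration.

    # Illustrative sketch of the two effects that keep the night sky dark.
    C_KM_S = 299_792.458           # speed of light, km/s
    AGE_UNIVERSE_YEARS = 13.8e9    # assumed age of the universe, years

    # (1) Finite horizon: in 13.8 billion years, light travels at most
    # 13.8 billion light-years, so more distant stars are not yet visible.
    horizon_light_years = AGE_UNIVERSE_YEARS
    print(f"Naive visibility horizon: {horizon_light_years:.2e} light-years")

    # (2) Redshift: light from a star receding at velocity v is stretched
    # by a factor (1 + v/c) in the non-relativistic approximation.
    def doppler_shift_nm(emitted_nm, recession_km_s):
        return emitted_nm * (1.0 + recession_km_s / C_KM_S)

    # A hypothetical star receding at 30,000 km/s: green light emitted at
    # 530 nm arrives at roughly 583 nm, noticeably shifted toward red.
    print(f"Observed wavelength: {doppler_shift_nm(530.0, 30_000.0):.0f} nm")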
The daytime sky on Earth is blue because light from the Sun strikes molecules in Earth's atmosphere, which scatter it in all directions. Blue light is scattered more strongly than other colors, and reaches the eye in greater quantities, making the daytime sky appear blue. This is known as Rayleigh scattering.
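The wavelength dependence behind this is steep: scattered intensity varies as the inverse fourth power of wavelength. A minimal Python sketch, assuming representative wavelengths of 450 nm for blue light and 650 nm for red light (the exact values are illustrative, not from the text):

    # Rayleigh scattering: scattered intensity is proportional to 1 / wavelength^4.
    def relative_scattering(wavelength_nm, reference_nm):
        # How strongly light at wavelength_nm is scattered, relative to reference_nm.
        return (reference_nm / wavelength_nm) ** 4

    # Blue (~450 nm) versus red (~650 nm): blue is scattered roughly 4.4 times
    # more strongly, which is why the scattered daylight looks blue.
    print(f"{relative_scattering(450.0, 650.0):.1f}x")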
The nighttime sky on Earth is black because the part of Earth experiencing night is facing away from the Sun, the light of the Sun is blocked by Earth itself, and there is no other bright nighttime source of light in the vicinity. Thus, there is not enough light to undergo Rayleigh scattering and make the sky blue. On the Moon, on the other hand, because there is virtually no atmosphere to scatter the light, the sky is black both day and night. This also holds true for other locations without an atmosphere, such as Mercury.
In China, the color black is associated with water, one of the five fundamental elements believed to compose all things, and with winter, cold, and the direction north, usually symbolized by a black tortoise. It is also associated with disorder, including the positive disorder which leads to change and new life. When the first Emperor of China, Qin Shi Huang, seized power from the Zhou Dynasty, he changed the Imperial color from red to black, saying that black extinguished red. Only when the Han Dynasty appeared in 206 BC was red restored as the imperial color.
In Japan, black is associated with mystery, the night, the unknown, the supernatural, the invisible and death. Combined with white, it can symbolize intuition. In 10th and 11th century Japan, it was believed that wearing black could bring misfortune. It was worn at court by those who wanted to set themselves apart from the established powers or who had renounced material possessions.
In Japan, black can also symbolize experience, as opposed to white, which symbolizes naiveté. The black belt in martial arts symbolizes experience, while a white belt is worn by novices. Japanese men traditionally wear a black kimono with some white decoration on their wedding day.
In Indonesia, black is associated with depth, the subterranean world, demons, disaster, and the left hand. When black is combined with white, however, it symbolizes harmony and equilibrium.
Anarchism is a political philosophy, most popular in the late 19th and early 20th centuries, which holds that governments and capitalism are harmful and undesirable. The symbol of anarchism was usually either a black flag or a black letter A. More recently it has usually been represented by a bisected red and black flag, to emphasise the movement's socialist roots in the First International. Anarchism was most popular in Spain, France, Italy, Ukraine and Argentina. There were also small but influential movements in the United States, Russia and many other countries around the world.
The Black Army was a collection of anarchist military units which fought for a stateless society in Ukraine during the Russian Civil War. It fought at first alongside the Bolshevik Red Army against the reactionary White Army, but was later defeated by the Communist forces. It was officially known as the Revolutionary Insurgent Army of Ukraine, and was originally founded by the anarchist Nestor Makhno.
The Blackshirts (Italian: camicie nere, CCNN) were Fascist paramilitary groups in Italy during the period immediately following World War I and until the end of World War II. The Blackshirts were officially known as the Voluntary Militia for National Security (Milizia Volontaria per la Sicurezza Nazionale, or MVSN).
Inspired by the black uniforms of the Arditi, Italy's elite storm troops of World War I, the Fascist Blackshirts were organized by Benito Mussolini as the military tool of his political movement. They used violence and intimidation against Mussolini's opponents. The emblem of the Italian fascists was a black flag with fasces, an axe in a bundle of sticks, an ancient Roman symbol of authority. Mussolini came to power in 1922 through his March on Rome with the blackshirts.
Black was also adopted by Adolf Hitler and the Nazis in Germany. Red, white and black were the colors of the flag of the German Empire from 1870 to 1918. In Mein Kampf, Hitler explained that they were "revered colors expressive of our homage to the glorious past." Hitler also wrote that "the new flag ... should prove effective as a large poster" because "in hundreds of thousands of cases a really striking emblem may be the first cause of awakening interest in a movement." The black swastika was meant to symbolize the Aryan race, which, according to the Nazis, "was always anti-Semitic and will always be anti-Semitic." Several designs by a number of different authors were considered, but the one adopted in the end was Hitler's personal design. Black became the color of the uniform of the SS, the Schutzstaffel or "defense corps", the paramilitary wing of the Nazi Party, and was worn by SS officers from 1932 until the end of World War II.
The Nazis used a black triangle to symbolize anti-social elements. The symbol originates from Nazi concentration camps, where every prisoner had to wear one of the Nazi concentration camp badges on their jacket, the color of which categorized them according to "their kind". Many Black Triangle prisoners were either mentally disabled or mentally ill. The homeless were also included, as were alcoholics, the Romani people, the habitually "work-shy", prostitutes, draft dodgers and pacifists. More recently the black triangle has been adopted as a symbol in lesbian culture and by disabled activists.
Black shirts were also worn by the British Union of Fascists before World War II, and by members of fascist movements in the Netherlands.
The Lützow Free Corps, composed of volunteer German students and academics fighting against Napoleon in 1813, could not afford to make special uniforms and therefore adopted black, as the only color that could be used to dye their civilian clothing without the original color showing. In 1815 the students began to carry a red, black and gold flag, which they believed (incorrectly) had been the colors of the Holy Roman Empire (the imperial flag had actually been gold and black). In 1848, this banner became the flag of the German Confederation. In 1866, Prussia unified Germany under its rule, and imposed the red, white and black of its own flag, which remained the colors of the German flag until the end of the Second World War. In 1949 the Federal Republic of Germany returned to the original flag and colors of the students and professors of 1815, which is the flag of Germany today.
Black has been a traditional color of cavalry and armoured or mechanized troops. German armoured troops (Panzerwaffe) traditionally wore black uniforms, and even in other armies a black beret is common among armoured units. In Finland, black is the symbolic color for both armoured troops and combat engineers, and military units of these specialities have black flags and unit insignia.
The black beret and the color black are also symbols of special forces in many countries. Soviet and Russian OMON special police and Russian naval infantry wear black berets. A black beret is also worn by military police in the Canadian, Czech, Croatian, Portuguese, Spanish and Serbian armies.
The silver-on-black skull and crossbones symbol, or Totenkopf, and a black uniform were used by the Hussars and the Black Brunswickers, the German Panzerwaffe and the Nazi Schutzstaffel, as well as the U.S. 400th Missile Squadron (with crossed missiles), and remain in use with the Estonian Kuperjanov Battalion.
In Christian theology, black was the color of the universe before God created light. In many religious cultures, from Mesoamerica to Oceania to India and Japan, the world was created out of a primordial darkness. In the Bible the light of faith and Christianity is often contrasted with the darkness of ignorance and paganism.
In Christianity, the devil is often called the "prince of darkness". The term was used in John Milton's poem Paradise Lost, published in 1667, referring to Satan, who is viewed as the embodiment of evil. It is an English translation of the Latin phrase princeps tenebrarum, which occurs in the Acts of Pilate, written in the fourth century; in the 11th-century hymn Rhythmus de die mortis by Pietro Damiani; and in a sermon by Bernard of Clairvaux from the 12th century. The phrase also occurs in King Lear by William Shakespeare (c. 1606), Act III, Scene IV, l. 14: "The prince of darkness is a gentleman."
Priests and pastors of the Roman Catholic, Eastern Orthodox and Protestant churches commonly wear black, as do monks of the Benedictine Order, who consider it the color of humility and penitence.
In the West, black is commonly associated with mourning and bereavement, and usually worn at funerals and memorial services. In some traditional societies, for example in Greece and Italy, some widows wear black for the rest of their lives. In contrast, across much of Africa and parts of Asia like Vietnam, white is a color of mourning.
In Victorian England, the colors and fabrics of mourning were specified in an unofficial dress code: "non-reflective black paramatta and crape for the first year of deepest mourning, followed by nine months of dullish black silk, heavily trimmed with crape, and then three months when crape was discarded. Paramatta was a fabric of combined silk and wool or cotton; crape was a harsh black silk fabric with a crimped appearance produced by heat. Widows were allowed to change into the colors of half-mourning, such as gray and lavender, black and white, for the final six months."
A "black day" (or week or month) usually refers to tragic date. The Romans marked fasti days with white stones and nefasti days with black. The term is often used to remember massacres. Black months include the Black September in Jordan, when large numbers of Palestinians were killed, and Black July in Sri Lanka, the killing of members of the Tamil population by the Sinhalese government.
In the financial world, the term often refers to a dramatic drop in the stock market. For example, the stock market crash of October 29, 1929, which marked the start of the Great Depression, is nicknamed Black Tuesday; it was preceded by Black Thursday, a downturn on October 24 the previous week.
In western popular culture, black has long been associated with evil and darkness. It is the traditional color of witchcraft and black magic.
In the Book of Revelation, the last book of the New Testament, the Four Horsemen of the Apocalypse are supposed to announce the Apocalypse before the Last Judgment. The horseman representing famine rides a black horse. The vampire of literature and film, such as Count Dracula of the Bram Stoker novel, dressed in black and could only move at night. The Wicked Witch of the West in the 1939 film The Wizard of Oz became the archetype of witches for generations of children. Whereas witches and sorcerers inspired real fear in the 17th century, in the 21st century children and adults dress as witches for Halloween parties and parades.
Black is frequently used as a color of power, law and authority. In many countries judges and magistrates wear black robes. That custom began in Europe in the 13th and 14th centuries. Jurists, magistrates and certain other court officials in France began to wear long black robes during the reign of Philip IV of France (1285–1314), and in England from the time of Edward I (1272–1307). The custom spread to the cities of Italy at about the same time, between 1300 and 1320. The robes of judges resembled those worn by the clergy, and represented the law and authority of the King, while those of the clergy represented the law of God and the authority of the church.
Most police uniforms were black until the 20th century, when they were largely replaced by a less menacing blue in France, the U.S. and other countries. In the United States, police cars are frequently black and white. The riot control units of the Basque Autonomous Police in Spain are known as beltzak ("blacks") after their uniform.
Black today is the most common color for limousines and the official cars of government officials.
Black formal attire is still worn at many solemn occasions or ceremonies, from graduations to formal balls. Graduation gowns are copied from the gowns worn by university professors in the Middle Ages, which in turn were copied from the robes worn by judges and priests, who often taught at the early universities. The mortarboard hat worn by graduates is adapted from a square cap called a biretta worn by Medieval professors and clerics.
In the 19th and 20th centuries, many machines and devices, large and small, were painted black to stress their functionality. These included telephones, sewing machines, steamships, railroad locomotives, and automobiles. The Ford Model T, the first mass-produced car, was available only in black from 1914 to 1926. Among means of transportation, only airplanes were rarely painted black.
Black house paint is becoming more popular, with Sherwin-Williams reporting that the color Tricorn Black was the 6th most popular exterior house paint color in Canada and the 12th most popular in the United States in 2018.
Black is also commonly used as a racial description in the United Kingdom, since ethnicity was first measured in the 2001 census. The 2011 British census asked residents to describe themselves, and categories offered included Black, African, Caribbean, or Black British. Other possible categories were African British, African Scottish, Caribbean British and Caribbean Scottish. Of the total UK population in 2001, 1.0 percent identified themselves as Black Caribbean, 0.8 percent as Black African, and 0.2 percent as Black (others).
In Canada, census respondents can identify themselves as Black. In the 2006 census, 2.5 percent of the population identified themselves as black.
In Australia, the term black is not used in the census. In the 2006 census, 2.3 percent of Australians identified themselves as Aboriginal and/or Torres Strait Islanders.
In Brazil, the Brazilian Institute of Geography and Statistics (IBGE) asks people to identify themselves as branco (white), pardo (brown), preto (black), or amarelo (yellow). In 2008, 6.8 percent of the population identified themselves as "preto".
Black is commonly associated with secrecy.
Black is the color most commonly associated with elegance in Europe and the United States, followed by silver, gold, and white.
Black first became a fashionable color for men in Europe in the 17th century, in the courts of Italy and Spain. (See history above.) In the 19th century, it was the fashion for men both in business and for evening wear, in the form of a black coat whose tails came down to the knees. In the evening it was the custom of the men to leave the women after dinner and go to a special smoking room to enjoy cigars or cigarettes. This meant that their tailcoats eventually smelled of tobacco. According to legend, in 1865 Edward VII, then the Prince of Wales, had his tailor make a special short smoking jacket. The smoking jacket then evolved into the dinner jacket. Again according to legend, the first Americans to wear the jacket were members of the Tuxedo Club in New York State, and the jacket thereafter became known as a tuxedo in the U.S. The term "smoking" is still used today in Russia and other countries. The tuxedo was always black until the 1930s, when the Duke of Windsor began to wear a tuxedo that was a very dark midnight blue. He did so because a black tuxedo looked greenish in artificial light, while a dark blue tuxedo looked blacker than black itself.
For women's fashion, the defining moment was the invention of the simple black dress by Coco Chanel in 1926. (See history.) Thereafter, a long black gown was used for formal occasions, while the simple black dress could be used for everything else. The designer Karl Lagerfeld, explaining why black was so popular, said: "Black is the color that goes with everything. If you're wearing black, you're on sure ground." Skirts have gone up and down and fashions have changed, but the black dress has not lost its position as the essential element of a woman's wardrobe. The fashion designer Christian Dior said, "elegance is a combination of distinction, naturalness, care and simplicity," and black exemplified elegance.
The expression "X is the new black" is a reference to the latest trend or fad that is considered a wardrobe basic for the duration of the trend, on the basis that black is always fashionable. The phrase has taken on a life of its own and has become a cliché.
Many performers of both popular and European classical music, including the French singers Edith Piaf and Juliette Gréco, and the violinist Joshua Bell, have traditionally worn black on stage during performances. A black costume was usually chosen as part of their image or stage persona, or because it did not distract from the music, or sometimes for a political reason. The country-western singer Johnny Cash always wore black on stage. In 1971, Cash wrote the song "Man in Black" to explain why he dressed in that color: "We're doing mighty fine I do suppose / In our streak of lightning cars and fancy clothes / But just so we're reminded of the ones who are held back / Up front there ought to be a man in black."
"title": "Associations and symbolism"
},
{
"paragraph_id": 77,
"text": "In the financial world, the term often refers to a dramatic drop in the stock market. For example, the Wall Street Crash of 1929, the stock market crash on October 29, 1929, which marked the start of the Great Depression, is nicknamed Black Tuesday, and was preceded by Black Thursday, a downturn on October 24 the previous week.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 78,
"text": "In western popular culture, black has long been associated with evil and darkness. It is the traditional color of witchcraft and black magic.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 79,
"text": "In the Book of Revelation, the last book in the New Testament of the Bible, the Four Horsemen of the Apocalypse are supposed to announce the Apocalypse before the Last Judgment. The horseman representing famine rides a black horse. The vampire of literature and films, such as Count Dracula of the Bram Stoker novel, dressed in black, and could only move at night. The Wicked Witch of the West in the 1939 film The Wizard of Oz became the archetype of witches for generations of children. Whereas witches and sorcerers inspired real fear in the 17th century, in the 21st century children and adults dressed as witches for Halloween parties and parades.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 80,
"text": "Black is frequently used as a color of power, law and authority. In many countries judges and magistrates wear black robes. That custom began in Europe in the 13th and 14th centuries. Jurists, magistrates and certain other court officials in France began to wear long black robes during the reign of Philip IV of France (1285–1314), and in England from the time of Edward I (1271–1307). The custom spread to the cities of Italy at about the same time, between 1300 and 1320. The robes of judges resembled those worn by the clergy, and represented the law and authority of the King, while those of the clergy represented the law of God and authority of the church.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 81,
"text": "Until the 20th century most police uniforms were black, until they were largely replaced by a less menacing blue in France, the U.S. and other countries. In the United States, police cars are frequently Black and white. The riot control units of the Basque Autonomous Police in Spain are known as beltzak (\"blacks\") after their uniform.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 82,
"text": "Black today is the most common color for limousines and the official cars of government officials.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 83,
"text": "Black formal attire is still worn at many solemn occasions or ceremonies, from graduations to formal balls. Graduation gowns are copied from the gowns worn by university professors in the Middle Ages, which in turn were copied from the robes worn by judges and priests, who often taught at the early universities. The mortarboard hat worn by graduates is adapted from a square cap called a biretta worn by Medieval professors and clerics.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 84,
"text": "In the 19th and 20th centuries, many machines and devices, large and small, were painted black, to stress their functionality. These included telephones, sewing machines, steamships, railroad locomotives, and automobiles. The Ford Model T, the first mass-produced car, was available only in black from 1914 to 1926. Of means of transportation, only airplanes were rarely ever painted black.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 85,
"text": "Black house paint is becoming more popular with Sherwin-Williams reporting that the color, Tricorn Black, was the 6th most popular exterior house paint color in Canada and the 12th most popular paint in the United States in 2018.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 86,
"text": "Black is also commonly used as a racial description in the United Kingdom, since ethnicity was first measured in the 2001 census. The 2011 British census asked residents to describe themselves, and categories offered included Black, African, Caribbean, or Black British. Other possible categories were African British, African Scottish, Caribbean British and Caribbean Scottish. Of the total UK population in 2001, 1.0 percent identified themselves as Black Caribbean, 0.8 percent as Black African, and 0.2 percent as Black (others).",
"title": "Associations and symbolism"
},
{
"paragraph_id": 87,
"text": "In Canada, census respondents can identify themselves as Black. In the 2006 census, 2.5 percent of the population identified themselves as black.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 88,
"text": "In Australia, the term black is not used in the census. In the 2006 census, 2.3 percent of Australians identified themselves as Aboriginal and/or Torres Strait Islanders.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 89,
"text": "In Brazil, the Brazilian Institute of Geography and Statistics (IBGE) asks people to identify themselves as branco (white), pardo (brown), preto (black), or amarelo (yellow). In 2008 6.8 percent of the population identified themselves as \"preto\".",
"title": "Associations and symbolism"
},
{
"paragraph_id": 90,
"text": "Black is commonly associated with secrecy.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 91,
"text": "Black is the color most commonly associated with elegance in Europe and the United States, followed by silver, gold, and white.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 92,
"text": "Black first became a fashionable color for men in Europe in the 17th century, in the courts of Italy and Spain. (See history above.) In the 19th century, it was the fashion for men both in business and for evening wear, in the form of a black coat whose tails came down the knees. In the evening it was the custom of the men to leave the women after dinner to go to a special smoking room to enjoy cigars or cigarettes. This meant that their tailcoats eventually smelled of tobacco. According to the legend, in 1865 Edward VII, then the Prince of Wales, had his tailor make a special short smoking jacket. The smoking jacket then evolved into the dinner jacket. Again according to legend, the first Americans to wear the jacket were members of the Tuxedo Club in New York State. Thereafter the jacket became known as a tuxedo in the U.S. The term \"smoking\" is still used today in Russia and other countries. The tuxedo was always black until the 1930s, when the Duke of Windsor began to wear a tuxedo that was a very dark midnight blue. He did so because a black tuxedo looked greenish in artificial light, while a dark blue tuxedo looked blacker than black itself.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 93,
"text": "For women's fashion, the defining moment was the invention of the simple black dress by Coco Chanel in 1926. (See history.) Thereafter, a long black gown was used for formal occasions, while the simple black dress could be used for everything else. The designer Karl Lagerfeld, explaining why black was so popular, said: \"Black is the color that goes with everything. If you're wearing black, you're on sure ground.\" Skirts have gone up and down and fashions have changed, but the black dress has not lost its position as the essential element of a woman's wardrobe. The fashion designer Christian Dior said, \"elegance is a combination of distinction, naturalness, care and simplicity,\" and black exemplified elegance.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 94,
"text": "The expression \"X is the new black\" is a reference to the latest trend or fad that is considered a wardrobe basic for the duration of the trend, on the basis that black is always fashionable. The phrase has taken on a life of its own and has become a cliché.",
"title": "Associations and symbolism"
},
{
"paragraph_id": 95,
"text": "Many performers of both popular and European classical music, including French singers Edith Piaf and Juliette Gréco, and violinist Joshua Bell have traditionally worn black on stage during performances. A black costume was usually chosen as part of their image or stage persona, or because it did not distract from the music, or sometimes for a political reason. Country-western singer Johnny Cash always wore black on stage. In 1971, Cash wrote the song \"Man in Black\" to explain why he dressed in that color: \"We're doing mighty fine I do suppose / In our streak of lightning cars and fancy clothes / But just so we're reminded of the ones who are held back / Up front there ought to be a man in black.\"",
"title": "Associations and symbolism"
}
] | Black is a color that results from the absence or complete absorption of visible light. It is an achromatic color, without hue, like white and grey. It is often used symbolically or figuratively to represent darkness. Black and white have often been used to describe opposites such as good and evil, the Dark Ages versus Age of Enlightenment, and night versus day. Since the Middle Ages, black has been the symbolic color of solemnity and authority, and for this reason it is still commonly worn by judges and magistrates. Black was one of the first colors used by artists in Neolithic cave paintings. It was used in ancient Egypt and Greece as the color of the underworld. In the Roman Empire, it became the color of mourning, and over the centuries it was frequently associated with death, evil, witches, and magic. In the 14th century, it was worn by royalty, clergy, judges, and government officials in much of Europe. It became the color worn by English romantic poets, businessmen and statesmen in the 19th century, and a high fashion color in the 20th century. According to surveys in Europe and North America, it is the color most commonly associated with mourning, the end, secrets, magic, force, violence, fear, evil, and elegance. Black is the most common ink color used for printing books, newspapers and documents, as it provides the highest contrast with white paper and thus is the easiest color to read. Similarly, black text on a white screen is the most common format used on computer screens. As of September 2019, the darkest material is made by MIT engineers from vertically aligned carbon nanotubes. | 2001-08-13T21:51:12Z | 2023-12-18T17:51:11Z | [
"Template:Use mdy dates",
"Template:Infobox color",
"Template:Wikiquote",
"Template:Webarchive",
"Template:Citation",
"Template:Pp-semi-indef",
"Template:Circa",
"Template:ISBN",
"Template:Asof",
"Template:Main article",
"Template:Further",
"Template:Spoken Wikipedia",
"Template:Commons category",
"Template:Cite journal",
"Template:Web colors",
"Template:Color topics",
"Template:Pp-vandalism",
"Template:Reflist",
"Template:Cite book",
"Template:Cite news",
"Template:Short description",
"Template:TOC limit",
"Template:GRIN",
"Template:Authority control",
"Template:About",
"Template:Shades of black",
"Template:Shades of grey",
"Template:Goth subculture",
"Template:Sfn",
"Template:Lang-it",
"Template:Cite web",
"Template:Cite encyclopedia",
"Template:Bibleverse"
] | https://en.wikipedia.org/wiki/Black |
4,036 | Black Flag | Black Flag or black flag may refer to: | [
{
"paragraph_id": 0,
"text": "Black Flag or black flag may refer to:",
"title": ""
}
] | Black Flag or black flag may refer to: | 2023-02-14T15:45:47Z | [
"Template:Wiktionary",
"Template:TOC right",
"Template:Main",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Black_Flag |
|
4,037 | Bletchley Park | Bletchley Park is an English country house and estate in Bletchley, Milton Keynes (Buckinghamshire) that became the principal centre of Allied code-breaking during the Second World War. The mansion was constructed during the years following 1883 for the financier and politician Sir Herbert Leon in the Victorian Gothic, Tudor, and Dutch Baroque styles, on the site of older buildings of the same name.
During World War II, the estate housed the Government Code and Cypher School (GC&CS), which regularly penetrated the secret communications of the Axis Powers – most importantly the German Enigma and Lorenz ciphers. The GC&CS team of codebreakers included Alan Turing, Harry Golombek, Gordon Welchman, Hugh Alexander, Bill Tutte, and Stuart Milner-Barry.
According to the official historian of British Intelligence, the "Ultra" intelligence produced at Bletchley shortened the war by two to four years, and without it the outcome of the war would have been uncertain. The team at Bletchley Park devised automatic machinery to help with decryption, culminating in the development of Colossus, the world's first programmable digital electronic computer. Codebreaking operations at Bletchley Park came to an end in 1946 and all information about the wartime operations was classified until the mid-1970s.
After the war it had various uses including as a teacher-training college and local GPO headquarters. By 1990 the huts in which the codebreakers worked were being considered for demolition and redevelopment. The Bletchley Park Trust was formed in February 1992 to save large portions of the site from development.
More recently, Bletchley Park has been open to the public, featuring interpretive exhibits and huts that have been rebuilt to appear as they did during their wartime operations. It receives hundreds of thousands of visitors annually. The separate National Museum of Computing, which includes a working replica Bombe machine and a rebuilt Colossus computer, is housed in Block H on the site.
The site appears in the Domesday Book of 1086 as part of the Manor of Eaton. Browne Willis built a mansion there in 1711, but after Thomas Harrison purchased the property in 1793 this was pulled down. It was first known as Bletchley Park after its purchase by the architect Samuel Lipscomb Seckham in 1877, who built a house there. The estate of 581 acres (235 ha) was bought in 1883 by Sir Herbert Samuel Leon, who expanded the then-existing house into what architect Landis Gores called a "maudlin and monstrous pile" combining Victorian Gothic, Tudor, and Dutch Baroque styles. At his Christmas family gatherings there was a fox hunting meet on Boxing Day with glasses of sloe gin from the butler, and the house was always "humming with servants". With 40 gardeners, a flower bed of yellow daffodils could become a sea of red tulips overnight. After the death of Herbert Leon in 1926, the estate continued to be occupied by his widow Fanny Leon (née Higham) until her death in 1937.
In 1938, the mansion and much of the site was bought by a builder for a housing estate, but in May 1938 Admiral Sir Hugh Sinclair, head of the Secret Intelligence Service (SIS or MI6), bought the mansion and 58 acres (23 ha) of land for £6,000 (£408,000 today) for use by GC&CS and SIS in the event of war. He used his own money as the Government said they did not have the budget to do so.
A key advantage seen by Sinclair and his colleagues (inspecting the site under the cover of "Captain Ridley's shooting party") was Bletchley's geographical centrality. It was almost immediately adjacent to Bletchley railway station, where the "Varsity Line" between Oxford and Cambridge – whose universities were expected to supply many of the code-breakers – met the main West Coast railway line connecting London, Birmingham, Manchester, Liverpool, Glasgow and Edinburgh. Watling Street, the main road linking London to the north-west (subsequently the A5) was close by, and high-volume communication links were available at the telegraph and telephone repeater station in nearby Fenny Stratford.
Bletchley Park was known as "B.P." to those who worked there. "Station X" (X = Roman numeral ten), "London Signals Intelligence Centre", and "Government Communications Headquarters" were all cover names used during the war. The formal posting of the many "Wrens" – members of the Women's Royal Naval Service – working there, was to HMS Pembroke V. Royal Air Force names of Bletchley Park and its outstations included RAF Eastcote, RAF Lime Grove and RAF Church Green. The postal address that staff had to use was "Room 47, Foreign Office".
After the war, the Government Code & Cypher School became the Government Communications Headquarters (GCHQ), moving to Eastcote in 1946 and to Cheltenham in the 1950s. The site was used by various government agencies, including the GPO and the Civil Aviation Authority. One large building, Block F, was demolished in 1987, by which time the site was being run down and tenants were leaving.
In 1990 the site was at risk of being sold for housing development. However, Milton Keynes Council made it into a conservation area. Bletchley Park Trust was set up in 1991 by a group of people who recognised the site's importance. The initial trustees included Roger Bristow, Ted Enever, Peter Wescombe, Dr Peter Jarvis of the Bletchley Archaeological & Historical Society, and Tony Sale who in 1994 became the first director of the Bletchley Park Museums.
Admiral Hugh Sinclair was the founder and head of GC&CS between 1919 and 1938 with Commander Alastair Denniston being operational head of the organization from 1919 to 1942, beginning with its formation from the Admiralty's Room 40 (NID25) and the War Office's MI1b. Key GC&CS cryptanalysts who moved from London to Bletchley Park included John Tiltman, Dillwyn "Dilly" Knox, Josh Cooper, Oliver Strachey and Nigel de Grey. These people had a variety of backgrounds – linguists and chess champions were common, and Knox's field was papyrology. The British War Office recruited top solvers of cryptic crossword puzzles, as these individuals had strong lateral thinking skills.
On the day Britain declared war on Germany, Denniston wrote to the Foreign Office about recruiting "men of the professor type". Personal networking drove early recruitments, particularly of men from the universities of Cambridge and Oxford. Trustworthy women were similarly recruited for administrative and clerical jobs. In one 1941 recruiting stratagem, The Daily Telegraph was asked to organise a crossword competition, after which promising contestants were discreetly approached about "a particular type of work as a contribution to the war effort".
Denniston recognised, however, that the enemy's use of electromechanical cipher machines meant that formally trained mathematicians would also be needed; Oxford's Peter Twinn joined GC&CS in February 1939; Cambridge's Alan Turing and Gordon Welchman began training in 1938 and reported to Bletchley the day after war was declared, along with John Jeffreys. Later-recruited cryptanalysts included the mathematicians Derek Taunt, Jack Good, Bill Tutte, and Max Newman; historian Harry Hinsley, and chess champions Hugh Alexander and Stuart Milner-Barry. Joan Clarke was one of the few women employed at Bletchley as a full-fledged cryptanalyst.
When seeking to recruit more suitably advanced linguists, John Tiltman turned to Patrick Wilkinson of the Italian section for advice, and he suggested asking Lord Lindsay of Birker, of Balliol College, Oxford, S. W. Grose, and Martin Charlesworth, of St John's College, Cambridge, to recommend classical scholars or applicants to their colleges.
This eclectic staff of "Boffins and Debs" (scientists and debutantes, young women of high society) caused GC&CS to be whimsically dubbed the "Golf, Cheese and Chess Society".
During a morale-boosting visit on 9 September 1941, Winston Churchill reportedly remarked to Denniston or Menzies: "I told you to leave no stone unturned to get staff, but I had no idea you had taken me so literally." Six weeks later, having failed to get sufficient typing and unskilled staff to achieve the productivity that was possible, Turing, Welchman, Alexander and Milner-Barry wrote directly to Churchill. His response was "Action this day make sure they have all they want on extreme priority and report to me that this has been done."
After initial training at the Inter-Service Special Intelligence School set up by John Tiltman (initially at an RAF depot in Buckingham and later in Bedford – where it was known locally as "the Spy School") staff worked a six-day week, rotating through three shifts: 4 p.m. to midnight, midnight to 8 a.m. (the most disliked shift), and 8 a.m. to 4 p.m., each with a half-hour meal break. At the end of the third week, a worker went off at 8 a.m. and came back at 4 p.m., thus putting in 16 hours on that last day. The irregular hours affected workers' health and social life, as well as the routines of the nearby homes at which most staff lodged. The work was tedious and demanded intense concentration; staff got one week's leave four times a year, but some "girls" collapsed and required extended rest. Recruitment took place to combat a shortage of experts in Morse code and German.
In January 1945, at the peak of codebreaking efforts, nearly 10,000 personnel were working at Bletchley and its outstations. About three-quarters of these were women. Many of the women came from middle-class backgrounds and held degrees in the areas of mathematics, physics and engineering; they were given the chance due to the lack of men, who had been sent to war. They performed calculations and coding and hence were integral to the computing processes. Among them were Eleanor Ireland, who worked on the Colossus computers, and Ruth Briggs, a German scholar, who worked within the Naval Section.
The female staff in Dillwyn Knox's section were sometimes termed "Dilly's Fillies". Knox's methods enabled Mavis Lever (who married mathematician and fellow code-breaker Keith Batey) and Margaret Rock to solve a German code, the Abwehr cipher.
Many of the women had backgrounds in languages, particularly French, German and Italian. Among them were Rozanne Colchester, a translator who worked mainly for the Italian air forces Section, and Cicely Mayhew, recruited straight from university, who worked in Hut 8, translating decoded German Navy signals.
Alan Brooke (CIGS) frequently referred to "intercepts" in his secret wartime diary.
For a long time, the British Government failed to acknowledge the contributions the personnel at Bletchley Park had made. Their work achieved official recognition only in 2009.
Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities that made Bletchley's attacks just barely feasible. These vulnerabilities, however, could have been remedied by relatively simple improvements in enemy procedures, and such changes would certainly have been implemented had Germany had any hint of Bletchley's success. Thus the intelligence Bletchley produced was considered wartime Britain's "Ultra secret" – higher even than the normally highest classification Most Secret – and security was paramount.
All staff signed the Official Secrets Act (1939) and a 1942 security warning emphasised the importance of discretion even within Bletchley itself: "Do not talk at meals. Do not talk in the transport. Do not talk travelling. Do not talk in the billet. Do not talk by your own fireside. Be careful even in your Hut ..."
Nevertheless, there were security leaks. Jock Colville, the Assistant Private Secretary to Winston Churchill, recorded in his diary on 31 July 1941, that the newspaper proprietor Lord Camrose had discovered Ultra and that security leaks "increase in number and seriousness". Without doubt, the most serious of these was that Bletchley Park had been infiltrated by John Cairncross, the notorious Soviet mole and member of the Cambridge Spy Ring, who leaked Ultra material to Moscow.
Despite the high degree of secrecy surrounding Bletchley Park during the Second World War, unique and hitherto unknown amateur film footage of the outstation at nearby Whaddon Hall came to light in 2020, after being anonymously donated to the Bletchley Park Trust. A spokesman for the Trust noted the film's existence was all the more incredible because it was "very, very rare even to have [still] photographs" of the park and its associated sites.
The first personnel of the Government Code and Cypher School (GC&CS) moved to Bletchley Park on 15 August 1939. The Naval, Military, and Air Sections were on the ground floor of the mansion, together with a telephone exchange, teleprinter room, kitchen, and dining room; the top floor was allocated to MI6. Construction of the wooden huts began in late 1939, and Elmers School, a neighbouring boys' boarding school in a Victorian Gothic redbrick building by a church, was acquired for the Commercial and Diplomatic Sections.
After the United States joined World War II, a number of American cryptographers were posted to Hut 3, and from May 1943 onwards there was close co-operation between British and American intelligence. (See 1943 BRUSA Agreement.) In contrast, the Soviet Union was never officially told of Bletchley Park and its activities – a reflection of Churchill's distrust of the Soviets even during the US-UK-USSR alliance imposed by the Nazi threat.
The only direct enemy damage to the site was done 20–21 November 1940 by three bombs probably intended for Bletchley railway station; Hut 4, shifted two feet off its foundation, was winched back into place as work inside continued.
Initially, when only a very limited amount of Enigma traffic was being read, deciphered non-Naval Enigma messages were sent from Hut 6 to Hut 3 which handled their translation and onward transmission. Subsequently, under Group Captain Eric Jones, Hut 3 expanded to become the heart of Bletchley Park's intelligence effort, with input from decrypts of "Tunny" (Lorenz SZ42) traffic and many other sources. Early in 1942 it moved into Block D, but its functions were still referred to as Hut 3.
Hut 3 contained a number of sections: Air Section "3A", Military Section "3M", a small Naval Section "3N", a multi-service Research Section "3G" and a large liaison section "3L". It also housed the Traffic Analysis Section, SIXTA. An important function that allowed the synthesis of raw messages into valuable Military intelligence was the indexing and cross-referencing of information in a number of different filing systems. Intelligence reports were sent out to the Secret Intelligence Service, the intelligence chiefs in the relevant ministries, and later on to high-level commanders in the field.
Naval Enigma deciphering was in Hut 8, with translation in Hut 4. Verbatim translations were sent to the Naval Intelligence Division (NID) of the Admiralty's Operational Intelligence Centre (OIC), supplemented by information from indexes as to the meaning of technical terms and cross-references from a knowledge store of German naval technology. Where relevant to non-naval matters, they would also be passed to Hut 3. Hut 4 also decoded a manual system known as the dockyard cipher, which sometimes carried messages that were also sent on an Enigma network. Feeding these back to Hut 8 provided excellent "cribs" for known-plaintext attacks on the daily naval Enigma key.
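The crib technique lends itself to a compact illustration. One genuine Enigma property the codebreakers relied on when aligning a crib against an intercept was that the machine never enciphered a letter to itself. The sketch below is a modern illustration rather than Bletchley's actual procedure, and the intercept string is invented; it shows how that single property rules out candidate alignments before any machine settings are tested.

```python
# A minimal sketch of crib placement, assuming only one real Enigma
# property: the machine never enciphered a letter to itself. Any
# alignment at which a crib letter coincides with the identical
# ciphertext letter can therefore be rejected outright.

def possible_crib_positions(ciphertext: str, crib: str) -> list[int]:
    """Return the alignments at which the crib is not contradicted."""
    positions = []
    for start in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[start:start + len(crib)]
        if all(p != c for p, c in zip(crib, window)):
            positions.append(start)
    return positions

# Invented intercept; "WETTERVORHERSAGE" ("weather forecast") was a
# classic crib because it occurred in routine German traffic.
intercept = "QFZWRWIVTYRESXBFOGKUHQBAISEZ"
print(possible_crib_positions(intercept, "WETTERVORHERSAGE"))
```

Each alignment that survived this kind of elimination was then worth testing machine settings against, which is where the bombes came in.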
Initially, a wireless room was established at Bletchley Park. It was set up in the mansion's water tower under the code name "Station X", a term now sometimes applied to the codebreaking efforts at Bletchley as a whole. The "X" is the Roman numeral "ten", this being the Secret Intelligence Service's tenth such station. Due to the long radio aerials stretching from the wireless room, the radio station was moved from Bletchley Park to nearby Whaddon Hall to avoid drawing attention to the site.
Subsequently, other listening stations – the Y-stations, such as the ones at Chicksands in Bedfordshire, Beaumanor Hall, Leicestershire (where the headquarters of the War Office "Y" Group was located) and Beeston Hill Y Station in Norfolk – gathered raw signals for processing at Bletchley. Coded messages were taken down by hand and sent to Bletchley on paper by motorcycle despatch riders or (later) by teleprinter.
The wartime needs required the building of additional accommodation.
Often a hut's number became so strongly associated with the work performed inside that even when the work was moved to another building it was still referred to by the original "Hut" designation.
In addition to the wooden huts, there were a number of brick-built "blocks".
Most German messages decrypted at Bletchley were produced by one or another version of the Enigma cipher machine, but an important minority were produced by the even more complicated twelve-rotor Lorenz SZ42 on-line teleprinter cipher machine used for high command messages, known as Fish.
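The vulnerability of the Lorenz traffic stemmed from its additive structure: the SZ42 combined a machine-generated keystream with the 5-bit teleprinter code of the message by modulo-2 addition (XOR). If two messages were ever sent on identical settings, a so-called depth, combining the two ciphertexts cancelled the key entirely, leaving only the combination of the two plaintexts. The sketch below demonstrates just that algebra; the messages are invented and a random byte string stands in for the SZ42 keystream.

```python
# Minimal demonstration, on invented data, of why a "depth" was fatal
# to an additive cipher such as the Lorenz SZ42. Real Tunny traffic
# used 5-bit teleprinter code rather than bytes, but the algebra is
# identical: C = P XOR K, so C1 XOR C2 = P1 XOR P2 whenever K repeats.
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"ATTACK AT DAWN ON THE LEFT FLANK"
p2 = b"SUPPLIES EXHAUSTED REQUEST FUEL "
key = secrets.token_bytes(len(p1))  # the same keystream used twice

c1 = xor_bytes(p1, key)
c2 = xor_bytes(p2, key)

# The key cancels: an analyst holding two messages in depth faces only
# the combined plaintexts, which can be teased apart linguistically.
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)
```

It was from such a depth, transmitted in 1941, that the Bletchley cryptanalysts recovered a stretch of keystream and, from it, began to deduce the structure of the machine itself.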
Five weeks before the outbreak of war, Warsaw's Cipher Bureau revealed its achievements in breaking Enigma to astonished French and British personnel. The British used the Poles' information and techniques, and the Enigma clone sent to them in August 1939, which greatly increased their (previously very limited) success in decrypting Enigma messages.
The bombe was an electromechanical device whose function was to discover some of the daily settings of the Enigma machines on the various German military networks. Its pioneering design was developed by Alan Turing (with an important contribution from Gordon Welchman) and the machine was engineered by Harold 'Doc' Keen of the British Tabulating Machine Company. Each machine was about 7 feet (2.1 m) high and wide, 2 feet (0.61 m) deep and weighed about a ton.
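Reduced to essentials, the bombe's task was an exhaustive search: run every candidate machine configuration against a crib and retain only those settings logically consistent with the intercepted ciphertext. The toy below conveys only that search structure; a trivial stepping-shift cipher stands in for Enigma, and none of the bombe's actual plugboard-deduction logic is modelled.

```python
# Toy illustration of the bombe's search structure, not its mechanism.
# A stepping Caesar cipher (the shift advances after every letter)
# stands in for Enigma; the "setting" being sought is the start shift.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def toy_machine(plaintext: str, start: int) -> str:
    """Encipher with a shift that steps forward after each letter."""
    out, shift = [], start
    for ch in plaintext:
        out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        shift = (shift + 1) % 26
    return "".join(out)

def search_settings(crib: str, crib_cipher: str) -> list[int]:
    """Keep every start setting consistent with the crib pair."""
    return [s for s in range(26) if toy_machine(crib, s) == crib_cipher]

secret_setting = 7                                 # unknown to the attacker
observed = toy_machine("WEATHERREPORT", secret_setting)
print(search_settings("WEATHERREPORT", observed))  # -> [7]
```

Enigma's setting space was vastly larger than the 26 candidates here, which is why the search had to be mechanised and run on many bombes in parallel.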
At its peak, GC&CS was reading approximately 4,000 messages per day. As a hedge against enemy attack most bombes were dispersed to installations at Adstock and Wavendon (both later supplanted by installations at Stanmore and Eastcote), and Gayhurst.
Luftwaffe messages were the first to be read in quantity. The German navy had much tighter procedures, and the capture of code books was needed before they could be broken. When, in February 1942, the German navy introduced the four-rotor Enigma for communications with its Atlantic U-boats, this traffic became unreadable for a period of ten months. Britain produced modified bombes, but it was the success of the US Navy Bombe that was the main source of reading messages from this version of Enigma for the rest of the war. Messages were sent to and fro across the Atlantic by enciphered teleprinter links.
The Lorenz messages were codenamed Tunny at Bletchley Park. They were only sent in quantity from mid-1942. The Tunny networks were used for high-level messages between German High Command and field commanders. With the help of German operator errors, the cryptanalysts in the Testery (named after Ralph Tester, its head) worked out the logical structure of the machine despite not knowing its physical form. They devised automatic machinery to help with decryption, which culminated in Colossus, the world's first programmable digital electronic computer. This was designed and built by Tommy Flowers and his team at the Post Office Research Station at Dollis Hill. The prototype first worked in December 1943, was delivered to Bletchley Park in January and first worked operationally on 5 February 1944. Enhancements were developed for the Mark 2 Colossus, the first of which was working at Bletchley Park on the morning of 1 June in time for D-day. Flowers then produced one Colossus a month for the rest of the war, making a total of ten with an eleventh part-built. The machines were operated mainly by Wrens in a section named the Newmanry after its head Max Newman.
Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, "Rommel would have certainly got through to Cairo". While not changing the events, "Ultra" decrypts featured prominently in the story of Operation SALAM, László Almásy's mission across the desert behind Allied lines in 1942. Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western-front divisions.
Italian signals had been of interest since Italy's attack on Abyssinia in 1935. During the Spanish Civil War the Italian Navy used the K model of the commercial Enigma without a plugboard; this was solved by Knox in 1937. When Italy entered the war in 1940 an improved version of the machine was used, though little traffic was sent by it and there were "wholesale changes" in Italian codes and cyphers.
Knox was given a new section for work on Enigma variations, which he staffed with women ("Dilly's girls"), who included Margaret Rock, Jean Perrin, Clare Harding, Rachel Ronald, Elisabeth Granger; and Mavis Lever. Mavis Lever solved the signals revealing the Italian Navy's operational plans before the Battle of Cape Matapan in 1941, leading to a British victory.
Although most Bletchley staff did not know the results of their work, Admiral Cunningham visited Bletchley in person a few weeks later to congratulate them.
On entering World War II in June 1940, the Italians were using book codes for most of their military messages. The exception was the Italian Navy, which after the Battle of Cape Matapan started using the C-38 version of the Boris Hagelin rotor-based cipher machine, particularly to route their navy and merchant marine convoys to the conflict in North Africa. As a consequence, JRM Butler recruited his former student Bernard Willson to join a team with two others in Hut 4. In June 1941, Willson became the first of the team to decode the Hagelin system, thus enabling military commanders to direct the Royal Navy and Royal Air Force to sink enemy ships carrying supplies from Europe to Rommel's Afrika Korps. This led to increased shipping losses and, from reading the intercepted traffic, the team learnt that between May and September 1941 the stock of fuel for the Luftwaffe in North Africa was reduced by 90 per cent. After an intensive language course, in March 1944 Willson switched to Japanese language-based codes.
A Middle East Intelligence Centre (MEIC) was set up in Cairo in 1939. When Italy entered the war in June 1940, delays in forwarding intercepts to Bletchley via congested radio links resulted in cryptanalysts being sent to Cairo. A Combined Bureau Middle East (CBME) was set up in November, though the Middle East authorities made "increasingly bitter complaints" that GC&CS was giving too little priority to work on Italian cyphers. However, the principle of concentrating high-grade cryptanalysis at Bletchley was maintained. John Chadwick started cryptanalysis work in 1942 on Italian signals at the naval base 'HMS Nile' in Alexandria. Later, he was with GC&CS; in the Heliopolis Museum, Cairo and then in the Villa Laurens, Alexandria.
Soviet signals had been studied since the 1920s. In 1939–40, John Tiltman (who had worked on Russian Army traffic from 1930) set up two Russian sections at Wavendon (a country house near Bletchley) and at Sarafand in Palestine. Two Russian high-grade army and navy systems were broken early in 1940. Tiltman spent two weeks in Finland, where he obtained Russian traffic from Finland and Estonia in exchange for radio equipment. In June 1941, when the Soviet Union became an ally, Churchill ordered a halt to intelligence operations against it. In December 1941, the Russian section was closed down, but in late summer 1943 or late 1944, a small GC&CS Russian cypher section was set up in London overlooking Park Lane, then in Sloane Square.
An outpost of the Government Code and Cypher School had been set up in Hong Kong in 1935, the Far East Combined Bureau (FECB). The FECB naval staff moved in 1940 to Singapore, then Colombo, Ceylon, then Kilindini, Mombasa, Kenya. They succeeded in deciphering Japanese codes with a mixture of skill and good fortune. The Army and Air Force staff went from Singapore to the Wireless Experimental Centre at Delhi, India.
In early 1942, a six-month crash course in Japanese, for 20 undergraduates from Oxford and Cambridge, was started by the Inter-Services Special Intelligence School in Bedford, in a building across from the main Post Office. This course was repeated every six months until war's end. Most of those completing these courses worked on decoding Japanese naval messages in Hut 7, under John Tiltman.
By mid-1945, well over 100 personnel were involved with this operation, which co-operated closely with the FECB and the US Signal intelligence Service at Arlington Hall, Virginia. In 1999, Michael Smith wrote that: "Only now are the British codebreakers (like John Tiltman, Hugh Foss, and Eric Nave) beginning to receive the recognition they deserve for breaking Japanese codes and cyphers".
After the War, the secrecy imposed on Bletchley staff remained in force, so that most relatives never knew more than that a child, spouse, or parent had done some kind of secret war work. Churchill referred to the Bletchley staff as "the geese that laid the golden eggs and never cackled". That said, occasional mentions of the work performed at Bletchley Park slipped the censor's net and appeared in print.
With the publication of F. W. Winterbotham's The Ultra Secret (1974) public discussion of Bletchley Park's work finally became possible, although even today some former staff still consider themselves bound to silence.
Professor Brian Randell was researching the history of computer science in Britain in 1975–76 for a conference on the history of computing held at the Los Alamos National Laboratory, New Mexico on 10–15 June 1976, and received permission to present a paper on the wartime development of the Colossus machines at the Post Office Research Station, Dollis Hill. (In October 1975 the British Government had released a series of captioned photographs from the Public Record Office.) The interest in the "revelations" in his paper resulted in a special evening meeting at which Randell and Coombs answered further questions. Coombs later wrote that "no member of our team could ever forget the fellowship, the sense of purpose and, above all, the breathless excitement of those days". In 1977 Randell published an article "The First Electronic Computer" in several journals.
In July 2009 the British government announced that Bletchley personnel would be recognised with a commemorative badge.
After the war, the site passed through a succession of hands and saw a number of uses, including as a teacher-training college and local GPO headquarters. By 1991, the site was nearly empty and the buildings were at risk of demolition for redevelopment.
In February 1992, the Milton Keynes Borough Council declared most of the Park a conservation area, and the Bletchley Park Trust was formed to maintain the site as a museum. The site opened to visitors in 1993, and was formally inaugurated by the Duke of Kent as Chief Patron in July 1994. In 1999 the land owners, the Property Advisors to the Civil Estate and BT, granted a lease to the Trust giving it control over most of the site.
June 2014 saw the completion of an £8 million restoration project by museum design specialist, Event Communications, which was marked by a visit from Catherine, Duchess of Cambridge. The Duchess' paternal grandmother, Valerie, and Valerie's twin sister, Mary (née Glassborow), both worked at Bletchley Park during the war. The twin sisters worked as Foreign Office Civilians in Hut 6, where they managed the interception of enemy and neutral diplomatic signals for decryption. Valerie married Catherine's grandfather, Captain Peter Middleton. A memorial at Bletchley Park commemorates Mary and Valerie Middleton's work as code-breakers.
The Bletchley Park Learning Department offers educational group visits with active learning activities for schools and universities. Visits can be booked in advance during term time, where students can engage with the history of Bletchley Park and understand its wider relevance for computer history and national security. Their workshops cover introductions to codebreaking, cyber security and the story of Enigma and Lorenz.
In October 2005, American billionaire Sidney Frank donated £500,000 to Bletchley Park Trust to fund a new Science Centre dedicated to Alan Turing. Simon Greenish joined as Director in 2006 to lead the fund-raising effort in a post he held until 2012 when Iain Standen took over the leadership role. In July 2008, a letter to The Times from more than a hundred academics condemned the neglect of the site. In September 2008, PGP, IBM, and other technology firms announced a fund-raising campaign to repair the facility. On 6 November 2008 it was announced that English Heritage would donate £300,000 to help maintain the buildings at Bletchley Park, and that they were in discussions regarding the donation of a further £600,000.
In October 2011, the Bletchley Park Trust received a £4.6m Heritage Lottery Fund grant to be used "to complete the restoration of the site, and to tell its story to the highest modern standards" on the condition that £1.7m of 'match funding' is raised by the Bletchley Park Trust. Just weeks later, Google contributed £550k and by June 2012 the trust had successfully raised £2.4m to unlock the grants to restore Huts 3 and 6, as well as develop its exhibition centre in Block C.
Additional income is raised by renting Block H to the National Museum of Computing, and some office space in various parts of the park to private firms.
Due to the COVID-19 pandemic the Trust expected to lose more than £2m in 2020 and be required to cut a third of its workforce. Former MP John Leech asked tech giants Amazon, Apple, Google, Facebook and Microsoft to donate £400,000 each to secure the future of the Trust. Leech had led the successful campaign to pardon Alan Turing and implement Turing's Law.
The National Museum of Computing is housed in Block H, which is rented from the Bletchley Park Trust. Its Colossus and Tunny galleries tell an important part of allied breaking of German codes during World War II. There is a working reconstruction of a Bombe and a rebuilt Colossus computer which was used on the high-level Lorenz cipher, codenamed Tunny by the British.
The museum, which opened in 2007, is an independent voluntary organisation that is governed by its own board of trustees. Its aim is "To collect and restore computer systems particularly those developed in Britain and to enable people to explore that collection for inspiration, learning and enjoyment." Through its many exhibits, the museum displays the story of computing through the mainframes of the 1960s and 1970s, and the rise of personal computing in the 1980s. It has a policy of having as many of the exhibits as possible in full working order.
This consists of serviced office accommodation housed in Bletchley Park's Blocks A and E, and the upper floors of the Mansion. Its aim is to foster the growth and development of dynamic knowledge-based start-ups and other businesses.
In April 2020 Bletchley Park Capital Partners, a private company run by Tim Reynolds, Deputy Chairman of the National Museum of Computing, announced plans to sell off the freehold to part of the site containing former Block G for commercial development. Offers of between £4m and £6m were reportedly being sought for the 3 acre plot, for which planning permission for employment purposes was granted in 2005. Previously, the construction of a National College of Cyber Security for students aged from 16 to 19 years old had been envisaged on the site, to be housed in Block G after renovation with funds supplied by the Bletchley Park Science and Innovation Centre.
The Radio Society of Great Britain's National Radio Centre (including a library, radio station, museum and bookshop) are in a newly constructed building close to the main Bletchley Park entrance.
Not until July 2009 did the British government fully acknowledge the contribution of the many people working for the Government Code and Cypher School ('G C & C S') at Bletchley. Only then was a commemorative medal struck to be presented to those involved. The gilded medal bears the inscription G C & C S 1939-1945 Bletchley Park and its Outstations.
Bletchley Park is opposite Bletchley railway station. It is close to junctions 13 and 14 of the M1, about 50 miles (80 km) northwest of London.
--- | [
{
"paragraph_id": 0,
"text": "Bletchley Park is an English country house and estate in Bletchley, Milton Keynes (Buckinghamshire) that became the principal centre of Allied code-breaking during the Second World War. The mansion was constructed during the years following 1883 for the financier and politician Sir Herbert Leon in the Victorian Gothic, Tudor, and Dutch Baroque styles, on the site of older buildings of the same name.",
"title": ""
},
{
"paragraph_id": 1,
"text": "During World War II, the estate housed the Government Code and Cypher School (GC&CS), which regularly penetrated the secret communications of the Axis Powers – most importantly the German Enigma and Lorenz ciphers. The GC&CS team of codebreakers included Alan Turing, Harry Golombek, Gordon Welchman, Hugh Alexander, Bill Tutte, and Stuart Milner-Barry.",
"title": ""
},
{
"paragraph_id": 2,
"text": "According to the official historian of British Intelligence, the \"Ultra\" intelligence produced at Bletchley shortened the war by two to four years, and without it the outcome of the war would have been uncertain. The team at Bletchley Park devised automatic machinery to help with decryption, culminating in the development of Colossus, the world's first programmable digital electronic computer. Codebreaking operations at Bletchley Park came to an end in 1946 and all information about the wartime operations was classified until the mid-1970s.",
"title": ""
},
{
"paragraph_id": 3,
"text": "After the war it had various uses including as a teacher-training college and local GPO headquarters. By 1990 the huts in which the codebreakers worked were being considered for demolition and redevelopment. The Bletchley Park Trust was formed in February 1992 to save large portions of the site from development.",
"title": ""
},
{
"paragraph_id": 4,
"text": "More recently, Bletchley Park has been open to the public, featuring interpretive exhibits and huts that have been rebuilt to appear as they did during their wartime operations. It receives hundreds of thousands of visitors annually. The separate National Museum of Computing, which includes a working replica Bombe machine and a rebuilt Colossus computer, is housed in Block H on the site.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The site appears in the Domesday Book of 1086 as part of the Manor of Eaton. Browne Willis built a mansion there in 1711, but after Thomas Harrison purchased the property in 1793 this was pulled down. It was first known as Bletchley Park after its purchase by the architect Samuel Lipscomb Seckham in 1877, who built a house there. The estate of 581 acres (235 ha) was bought in 1883 by Sir Herbert Samuel Leon, who expanded the then-existing house into what architect Landis Gores called a \"maudlin and monstrous pile\" combining Victorian Gothic, Tudor, and Dutch Baroque styles. At his Christmas family gatherings there was a fox hunting meet on Boxing Day with glasses of sloe gin from the butler, and the house was always \"humming with servants\". With 40 gardeners, a flower bed of yellow daffodils could become a sea of red tulips overnight. After the death of Herbert Leon in 1926, the estate continued to be occupied by his widow Fanny Leon (née Higham) until her death in 1937.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In 1938, the mansion and much of the site was bought by a builder for a housing estate, but in May 1938 Admiral Sir Hugh Sinclair, head of the Secret Intelligence Service (SIS or MI6), bought the mansion and 58 acres (23 ha) of land for £6,000 (£408,000 today) for use by GC&CS and SIS in the event of war. He used his own money as the Government said they did not have the budget to do so.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "A key advantage seen by Sinclair and his colleagues (inspecting the site under the cover of \"Captain Ridley's shooting party\") was Bletchley's geographical centrality. It was almost immediately adjacent to Bletchley railway station, where the \"Varsity Line\" between Oxford and Cambridge – whose universities were expected to supply many of the code-breakers – met the main West Coast railway line connecting London, Birmingham, Manchester, Liverpool, Glasgow and Edinburgh. Watling Street, the main road linking London to the north-west (subsequently the A5) was close by, and high-volume communication links were available at the telegraph and telephone repeater station in nearby Fenny Stratford.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Bletchley Park was known as \"B.P.\" to those who worked there. \"Station X\" (X = Roman numeral ten), \"London Signals Intelligence Centre\", and \"Government Communications Headquarters\" were all cover names used during the war. The formal posting of the many \"Wrens\" – members of the Women's Royal Naval Service – working there, was to HMS Pembroke V. Royal Air Force names of Bletchley Park and its outstations included RAF Eastcote, RAF Lime Grove and RAF Church Green. The postal address that staff had to use was \"Room 47, Foreign Office\".",
"title": "History"
},
{
"paragraph_id": 9,
"text": "After the war, the Government Code & Cypher School became the Government Communications Headquarters (GCHQ), moving to Eastcote in 1946 and to Cheltenham in the 1950s. The site was used by various government agencies, including the GPO and the Civil Aviation Authority. One large building, block F, was demolished in 1987 by which time the site was being run down with tenants leaving.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In 1990 the site was at risk of being sold for housing development. However, Milton Keynes Council made it into a conservation area. Bletchley Park Trust was set up in 1991 by a group of people who recognised the site's importance. The initial trustees included Roger Bristow, Ted Enever, Peter Wescombe, Dr Peter Jarvis of the Bletchley Archaeological & Historical Society, and Tony Sale who in 1994 became the first director of the Bletchley Park Museums.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Admiral Hugh Sinclair was the founder and head of GC&CS between 1919 and 1938 with Commander Alastair Denniston being operational head of the organization from 1919 to 1942, beginning with its formation from the Admiralty's Room 40 (NID25) and the War Office's MI1b. Key GC&CS cryptanalysts who moved from London to Bletchley Park included John Tiltman, Dillwyn \"Dilly\" Knox, Josh Cooper, Oliver Strachey and Nigel de Grey. These people had a variety of backgrounds – linguists and chess champions were common, and Knox's field was papyrology. The British War Office recruited top solvers of cryptic crossword puzzles, as these individuals had strong lateral thinking skills.",
"title": "Personnel"
},
{
"paragraph_id": 12,
"text": "On the day Britain declared war on Germany, Denniston wrote to the Foreign Office about recruiting \"men of the professor type\". Personal networking drove early recruitments, particularly of men from the universities of Cambridge and Oxford. Trustworthy women were similarly recruited for administrative and clerical jobs. In one 1941 recruiting stratagem, The Daily Telegraph was asked to organise a crossword competition, after which promising contestants were discreetly approached about \"a particular type of work as a contribution to the war effort\".",
"title": "Personnel"
},
{
"paragraph_id": 13,
"text": "Denniston recognised, however, that the enemy's use of electromechanical cipher machines meant that formally trained mathematicians would also be needed; Oxford's Peter Twinn joined GC&CS in February 1939; Cambridge's Alan Turing and Gordon Welchman began training in 1938 and reported to Bletchley the day after war was declared, along with John Jeffreys. Later-recruited cryptanalysts included the mathematicians Derek Taunt, Jack Good, Bill Tutte, and Max Newman; historian Harry Hinsley, and chess champions Hugh Alexander and Stuart Milner-Barry. Joan Clarke was one of the few women employed at Bletchley as a full-fledged cryptanalyst.",
"title": "Personnel"
},
{
"paragraph_id": 14,
"text": "When seeking to recruit more suitably advanced linguists, John Tiltman turned to Patrick Wilkinson of the Italian section for advice, and he suggested asking Lord Lindsay of Birker, of Balliol College, Oxford, S. W. Grose, and Martin Charlesworth, of St John's College, Cambridge, to recommend classical scholars or applicants to their colleges.",
"title": "Personnel"
},
{
"paragraph_id": 15,
"text": "This eclectic staff of \"Boffins and Debs\" (scientists and debutantes, young women of high society) caused GC&CS to be whimsically dubbed the \"Golf, Cheese and Chess Society\".",
"title": "Personnel"
},
{
"paragraph_id": 16,
"text": "During a morale-boosting visit on 9 September 1941, Winston Churchill reportedly remarked to Denniston or Menzies: \"I told you to leave no stone unturned to get staff, but I had no idea you had taken me so literally.\" Six weeks later, having failed to get sufficient typing and unskilled staff to achieve the productivity that was possible, Turing, Welchman, Alexander and Milner-Barry wrote directly to Churchill. His response was \"Action this day make sure they have all they want on extreme priority and report to me that this has been done.\"",
"title": "Personnel"
},
{
"paragraph_id": 17,
"text": "After initial training at the Inter-Service Special Intelligence School set up by John Tiltman (initially at an RAF depot in Buckingham and later in Bedford – where it was known locally as \"the Spy School\") staff worked a six-day week, rotating through three shifts: 4 p.m. to midnight, midnight to 8 a.m. (the most disliked shift), and 8 a.m. to 4 p.m., each with a half-hour meal break. At the end of the third week, a worker went off at 8 a.m. and came back at 4 p.m., thus putting in 16 hours on that last day. The irregular hours affected workers' health and social life, as well as the routines of the nearby homes at which most staff lodged. The work was tedious and demanded intense concentration; staff got one week's leave four times a year, but some \"girls\" collapsed and required extended rest. Recruitment took place to combat a shortage of experts in Morse code and German.",
"title": "Personnel"
},
{
"paragraph_id": 18,
"text": "In January 1945, at the peak of codebreaking efforts, nearly 10,000 personnel were working at Bletchley and its outstations. About three-quarters of these were women. Many of the women came from middle-class backgrounds and held degrees in the areas of mathematics, physics and engineering; they were given chance due to the lack of men, who had been sent to war. They performed calculations and coding and hence were integral to the computing processes. Among them were Eleanor Ireland, who worked on the Colossus computers and Ruth Briggs, a German scholar, who worked within the Naval Section.",
"title": "Personnel"
},
{
"paragraph_id": 19,
"text": "The female staff in Dilwyn Knox's section were sometimes termed \"Dilly's Fillies\". Knox's methods enabled Mavis Lever (who married mathematician and fellow code-breaker Keith Batey) and Margaret Rock to solve a German code, the Abwehr cipher.",
"title": "Personnel"
},
{
"paragraph_id": 20,
"text": "Many of the women had backgrounds in languages, particularly French, German and Italian. Among them were Rozanne Colchester, a translator who worked mainly for the Italian air forces Section, and Cicely Mayhew, recruited straight from university, who worked in Hut 8, translating decoded German Navy signals.",
"title": "Personnel"
},
{
"paragraph_id": 21,
"text": "Alan Brooke (CIGS) in his secret wartime diary frequently refers to “intercepts”:",
"title": "Personnel"
},
{
"paragraph_id": 22,
"text": "For a long time, the British Government failed to acknowledge the contributions the personnel at Bletchley Park had made. Their work achieved official recognition only in 2009.",
"title": "Personnel"
},
{
"paragraph_id": 23,
"text": "Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities that made Bletchley's attacks just barely feasible. These vulnerabilities, however, could have been remedied by relatively simple improvements in enemy procedures, and such changes would certainly have been implemented had Germany had any hint of Bletchley's success. Thus the intelligence Bletchley produced was considered wartime Britain's \"Ultra secret\" – higher even than the normally highest classification Most Secret – and security was paramount.",
"title": "Secrecy"
},
{
"paragraph_id": 24,
"text": "All staff signed the Official Secrets Act (1939) and a 1942 security warning emphasised the importance of discretion even within Bletchley itself: \"Do not talk at meals. Do not talk in the transport. Do not talk travelling. Do not talk in the billet. Do not talk by your own fireside. Be careful even in your Hut ...\"",
"title": "Secrecy"
},
{
"paragraph_id": 25,
"text": "Nevertheless, there were security leaks. Jock Colville, the Assistant Private Secretary to Winston Churchill, recorded in his diary on 31 July 1941, that the newspaper proprietor Lord Camrose had discovered Ultra and that security leaks \"increase in number and seriousness\". Without doubt, the most serious of these was that Bletchley Park had been infiltrated by John Cairncross, the notorious Soviet mole and member of the Cambridge Spy Ring, who leaked Ultra material to Moscow.",
"title": "Secrecy"
},
{
"paragraph_id": 26,
"text": "Despite the high degree of secrecy surrounding Bletchley Park during the Second World War, unique and hitherto unknown amateur film footage of the outstation at nearby Whaddon Hall came to light in 2020, after being anonymously donated to the Bletchley Park Trust. A spokesman for the Trust noted the film's existence was all the more incredible because it was \"very, very rare even to have [still] photographs\" of the park and its associated sites.",
"title": "Secrecy"
},
{
"paragraph_id": 27,
"text": "The first personnel of the Government Code and Cypher School (GC&CS) moved to Bletchley Park on 15 August 1939. The Naval, Military, and Air Sections were on the ground floor of the mansion, together with a telephone exchange, teleprinter room, kitchen, and dining room; the top floor was allocated to MI6. Construction of the wooden huts began in late 1939, and Elmers School, a neighbouring boys' boarding school in a Victorian Gothic redbrick building by a church, was acquired for the Commercial and Diplomatic Sections.",
"title": "Early work"
},
{
"paragraph_id": 28,
"text": "After the United States joined World War II, a number of American cryptographers were posted to Hut 3, and from May 1943 onwards there was close co-operation between British and American intelligence. (See 1943 BRUSA Agreement.) In contrast, the Soviet Union was never officially told of Bletchley Park and its activities – a reflection of Churchill's distrust of the Soviets even during the US-UK-USSR alliance imposed by the Nazi threat.",
"title": "Early work"
},
{
"paragraph_id": 29,
"text": "The only direct enemy damage to the site was done 20–21 November 1940 by three bombs probably intended for Bletchley railway station; Hut 4, shifted two feet off its foundation, was winched back into place as work inside continued.",
"title": "Early work"
},
{
"paragraph_id": 30,
"text": "Initially, when only a very limited amount of Enigma traffic was being read, deciphered non-Naval Enigma messages were sent from Hut 6 to Hut 3 which handled their translation and onward transmission. Subsequently, under Group Captain Eric Jones, Hut 3 expanded to become the heart of Bletchley Park's intelligence effort, with input from decrypts of \"Tunny\" (Lorenz SZ42) traffic and many other sources. Early in 1942 it moved into Block D, but its functions were still referred to as Hut 3.",
"title": "Intelligence reporting"
},
{
"paragraph_id": 31,
"text": "Hut 3 contained a number of sections: Air Section \"3A\", Military Section \"3M\", a small Naval Section \"3N\", a multi-service Research Section \"3G\" and a large liaison section \"3L\". It also housed the Traffic Analysis Section, SIXTA. An important function that allowed the synthesis of raw messages into valuable Military intelligence was the indexing and cross-referencing of information in a number of different filing systems. Intelligence reports were sent out to the Secret Intelligence Service, the intelligence chiefs in the relevant ministries, and later on to high-level commanders in the field.",
"title": "Intelligence reporting"
},
{
"paragraph_id": 32,
"text": "Naval Enigma deciphering was in Hut 8, with translation in Hut 4. Verbatim translations were sent to the Naval Intelligence Division (NID) of the Admiralty's Operational Intelligence Centre (OIC), supplemented by information from indexes as to the meaning of technical terms and cross-references from a knowledge store of German naval technology. Where relevant to non-naval matters, they would also be passed to Hut 3. Hut 4 also decoded a manual system known as the dockyard cipher, which sometimes carried messages that were also sent on an Enigma network. Feeding these back to Hut 8 provided excellent \"cribs\" for Known-plaintext attacks on the daily naval Enigma key.",
"title": "Intelligence reporting"
},
{
"paragraph_id": 33,
"text": "Initially, a wireless room was established at Bletchley Park. It was set up in the mansion's water tower under the code name \"Station X\", a term now sometimes applied to the codebreaking efforts at Bletchley as a whole. The \"X\" is the Roman numeral \"ten\", this being the Secret Intelligence Service's tenth such station. Due to the long radio aerials stretching from the wireless room, the radio station was moved from Bletchley Park to nearby Whaddon Hall to avoid drawing attention to the site.",
"title": "Listening stations"
},
{
"paragraph_id": 34,
"text": "Subsequently, other listening stations – the Y-stations, such as the ones at Chicksands in Bedfordshire, Beaumanor Hall, Leicestershire (where the headquarters of the War Office \"Y\" Group was located) and Beeston Hill Y Station in Norfolk – gathered raw signals for processing at Bletchley. Coded messages were taken down by hand and sent to Bletchley on paper by motorcycle despatch riders or (later) by teleprinter.",
"title": "Listening stations"
},
{
"paragraph_id": 35,
"text": "The wartime needs required the building of additional accommodation.",
"title": "Additional buildings"
},
{
"paragraph_id": 36,
"text": "Often a hut's number became so strongly associated with the work performed inside that even when the work was moved to another building it was still referred to by the original \"Hut\" designation.",
"title": "Additional buildings"
},
{
"paragraph_id": 37,
"text": "In addition to the wooden huts, there were a number of brick-built \"blocks\".",
"title": "Additional buildings"
},
{
"paragraph_id": 38,
"text": "Most German messages decrypted at Bletchley were produced by one or another version of the Enigma cipher machine, but an important minority were produced by the even more complicated twelve-rotor Lorenz SZ42 on-line teleprinter cipher machine used for high command messages, known as Fish.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 39,
"text": "Five weeks before the outbreak of war, Warsaw's Cipher Bureau revealed its achievements in breaking Enigma to astonished French and British personnel. The British used the Poles' information and techniques, and the Enigma clone sent to them in August 1939, which greatly increased their (previously very limited) success in decrypting Enigma messages.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 40,
"text": "The bombe was an electromechanical device whose function was to discover some of the daily settings of the Enigma machines on the various German military networks. Its pioneering design was developed by Alan Turing (with an important contribution from Gordon Welchman) and the machine was engineered by Harold 'Doc' Keen of the British Tabulating Machine Company. Each machine was about 7 feet (2.1 m) high and wide, 2 feet (0.61 m) deep and weighed about a ton.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 41,
"text": "At its peak, GC&CS was reading approximately 4,000 messages per day. As a hedge against enemy attack most bombes were dispersed to installations at Adstock and Wavendon (both later supplanted by installations at Stanmore and Eastcote), and Gayhurst.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 42,
"text": "Luftwaffe messages were the first to be read in quantity. The German navy had much tighter procedures, and the capture of code books was needed before they could be broken. When, in February 1942, the German navy introduced the four-rotor Enigma for communications with its Atlantic U-boats, this traffic became unreadable for a period of ten months. Britain produced modified bombes, but it was the success of the US Navy Bombe that was the main source of reading messages from this version of Enigma for the rest of the war. Messages were sent to and fro across the Atlantic by enciphered teleprinter links.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 43,
"text": "The Lorenz messages were codenamed Tunny at Bletchley Park. They were only sent in quantity from mid-1942. The Tunny networks were used for high-level messages between German High Command and field commanders. With the help of German operator errors, the cryptanalysts in the Testery (named after Ralph Tester, its head) worked out the logical structure of the machine despite not knowing its physical form. They devised automatic machinery to help with decryption, which culminated in Colossus, the world's first programmable digital electronic computer. This was designed and built by Tommy Flowers and his team at the Post Office Research Station at Dollis Hill. The prototype first worked in December 1943, was delivered to Bletchley Park in January and first worked operationally on 5 February 1944. Enhancements were developed for the Mark 2 Colossus, the first of which was working at Bletchley Park on the morning of 1 June in time for D-day. Flowers then produced one Colossus a month for the rest of the war, making a total of ten with an eleventh part-built. The machines were operated mainly by Wrens in a section named the Newmanry after its head Max Newman.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 44,
"text": "Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, \"Rommel would have certainly got through to Cairo\". While not changing the events, \"Ultra\" decrypts featured prominently in the story of Operation SALAM, László Almásy's mission across the desert behind Allied lines in 1942. Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western-front divisions.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 45,
"text": "Italian signals had been of interest since Italy's attack on Abyssinia in 1935. During the Spanish Civil War the Italian Navy used the K model of the commercial Enigma without a plugboard; this was solved by Knox in 1937. When Italy entered the war in 1940 an improved version of the machine was used, though little traffic was sent by it and there were \"wholesale changes\" in Italian codes and cyphers.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 46,
"text": "Knox was given a new section for work on Enigma variations, which he staffed with women (\"Dilly's girls\"), who included Margaret Rock, Jean Perrin, Clare Harding, Rachel Ronald, Elisabeth Granger; and Mavis Lever. Mavis Lever solved the signals revealing the Italian Navy's operational plans before the Battle of Cape Matapan in 1941, leading to a British victory.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 47,
"text": "Although most Bletchley staff did not know the results of their work, Admiral Cunningham visited Bletchley in person a few weeks later to congratulate them.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 48,
"text": "On entering World War II in June 1940, the Italians were using book codes for most of their military messages. The exception was the Italian Navy, which after the Battle of Cape Matapan started using the C-38 version of the Boris Hagelin rotor-based cipher machine, particularly to route their navy and merchant marine convoys to the conflict in North Africa. As a consequence, JRM Butler recruited his former student Bernard Willson to join a team with two others in Hut 4. In June 1941, Willson became the first of the team to decode the Hagelin system, thus enabling military commanders to direct the Royal Navy and Royal Air Force to sink enemy ships carrying supplies from Europe to Rommel's Afrika Korps. This led to increased shipping losses and, from reading the intercepted traffic, the team learnt that between May and September 1941 the stock of fuel for the Luftwaffe in North Africa reduced by 90 per cent. After an intensive language course, in March 1944 Willson switched to Japanese language-based codes.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 49,
"text": "A Middle East Intelligence Centre (MEIC) was set up in Cairo in 1939. When Italy entered the war in June 1940, delays in forwarding intercepts to Bletchley via congested radio links resulted in cryptanalysts being sent to Cairo. A Combined Bureau Middle East (CBME) was set up in November, though the Middle East authorities made \"increasingly bitter complaints\" that GC&CS was giving too little priority to work on Italian cyphers. However, the principle of concentrating high-grade cryptanalysis at Bletchley was maintained. John Chadwick started cryptanalysis work in 1942 on Italian signals at the naval base 'HMS Nile' in Alexandria. Later, he was with GC&CS; in the Heliopolis Museum, Cairo and then in the Villa Laurens, Alexandria.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 50,
"text": "Soviet signals had been studied since the 1920s. In 1939–40, John Tiltman (who had worked on Russian Army traffic from 1930) set up two Russian sections at Wavendon (a country house near Bletchley) and at Sarafand in Palestine. Two Russian high-grade army and navy systems were broken early in 1940. Tiltman spent two weeks in Finland, where he obtained Russian traffic from Finland and Estonia in exchange for radio equipment. In June 1941, when the Soviet Union became an ally, Churchill ordered a halt to intelligence operations against it. In December 1941, the Russian section was closed down, but in late summer 1943 or late 1944, a small GC&CS Russian cypher section was set up in London overlooking Park Lane, then in Sloane Square.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 51,
"text": "An outpost of the Government Code and Cypher School had been set up in Hong Kong in 1935, the Far East Combined Bureau (FECB). The FECB naval staff moved in 1940 to Singapore, then Colombo, Ceylon, then Kilindini, Mombasa, Kenya. They succeeded in deciphering Japanese codes with a mixture of skill and good fortune. The Army and Air Force staff went from Singapore to the Wireless Experimental Centre at Delhi, India.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 52,
"text": "In early 1942, a six-month crash course in Japanese, for 20 undergraduates from Oxford and Cambridge, was started by the Inter-Services Special Intelligence School in Bedford, in a building across from the main Post Office. This course was repeated every six months until war's end. Most of those completing these courses worked on decoding Japanese naval messages in Hut 7, under John Tiltman.",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 53,
"text": "By mid-1945, well over 100 personnel were involved with this operation, which co-operated closely with the FECB and the US Signal intelligence Service at Arlington Hall, Virginia. In 1999, Michael Smith wrote that: \"Only now are the British codebreakers (like John Tiltman, Hugh Foss, and Eric Nave) beginning to receive the recognition they deserve for breaking Japanese codes and cyphers\".",
"title": "Work on specific countries' signals"
},
{
"paragraph_id": 54,
"text": "After the War, the secrecy imposed on Bletchley staff remained in force, so that most relatives never knew more than that a child, spouse, or parent had done some kind of secret war work. Churchill referred to the Bletchley staff as \"the geese that laid the golden eggs and never cackled\". That said, occasional mentions of the work performed at Bletchley Park slipped the censor's net and appeared in print.",
"title": "Postwar"
},
{
"paragraph_id": 55,
"text": "With the publication of F. W. Winterbotham's The Ultra Secret (1974) public discussion of Bletchley Park's work finally became possible, although even today some former staff still consider themselves bound to silence.",
"title": "Postwar"
},
{
"paragraph_id": 56,
"text": "Professor Brian Randell was researching the history of computer science in Britain in 1975-76 for a conference on the history of computing held at the Los Alamos National Laboratory, New Mexico on 10–15 June 1976, and received permission to present a paper on wartime development of the COLOSSI at the Post Office Research Station, Dollis Hill. (In October 1975 the British Government had released a series of captioned photographs from the Public Record Office.) The interest in the “revelations” in his paper resulted in a special evening meeting when Randell and Cooombs answered further questions. Coombs later wrote that \"no member of our team could ever forget the fellowship, the sense of purpose and, above all, the breathless excitement of those days\". In 1977 Randell published an article \"The First Electronic Computer\" in several journals.",
"title": "Postwar"
},
{
"paragraph_id": 57,
"text": "In July 2009 the British government announced that Bletchley personnel would be recognised with a commemorative badge.",
"title": "Postwar"
},
{
"paragraph_id": 58,
"text": "After the war, the site passed through a succession of hands and saw a number of uses, including as a teacher-training college and local GPO headquarters. By 1991, the site was nearly empty and the buildings were at risk of demolition for redevelopment.",
"title": "Postwar"
},
{
"paragraph_id": 59,
"text": "In February 1992, the Milton Keynes Borough Council declared most of the Park a conservation area, and the Bletchley Park Trust was formed to maintain the site as a museum. The site opened to visitors in 1993, and was formally inaugurated by the Duke of Kent as Chief Patron in July 1994. In 1999 the land owners, the Property Advisors to the Civil Estate and BT, granted a lease to the Trust giving it control over most of the site.",
"title": "Postwar"
},
{
"paragraph_id": 60,
"text": "June 2014 saw the completion of an £8 million restoration project by museum design specialist, Event Communications, which was marked by a visit from Catherine, Duchess of Cambridge. The Duchess' paternal grandmother, Valerie, and Valerie's twin sister, Mary (née Glassborow), both worked at Bletchley Park during the war. The twin sisters worked as Foreign Office Civilians in Hut 6, where they managed the interception of enemy and neutral diplomatic signals for decryption. Valerie married Catherine's grandfather, Captain Peter Middleton. A memorial at Bletchley Park commemorates Mary and Valerie Middleton's work as code-breakers.",
"title": "Heritage attraction"
},
{
"paragraph_id": 61,
"text": "The Bletchley Park Learning Department offers educational group visits with active learning activities for schools and universities. Visits can be booked in advance during term time, where students can engage with the history of Bletchley Park and understand its wider relevance for computer history and national security. Their workshops cover introductions to codebreaking, cyber security and the story of Enigma and Lorenz.",
"title": "Heritage attraction"
},
{
"paragraph_id": 62,
"text": "In October 2005, American billionaire Sidney Frank donated £500,000 to Bletchley Park Trust to fund a new Science Centre dedicated to Alan Turing. Simon Greenish joined as Director in 2006 to lead the fund-raising effort in a post he held until 2012 when Iain Standen took over the leadership role. In July 2008, a letter to The Times from more than a hundred academics condemned the neglect of the site. In September 2008, PGP, IBM, and other technology firms announced a fund-raising campaign to repair the facility. On 6 November 2008 it was announced that English Heritage would donate £300,000 to help maintain the buildings at Bletchley Park, and that they were in discussions regarding the donation of a further £600,000.",
"title": "Funding"
},
{
"paragraph_id": 63,
"text": "In October 2011, the Bletchley Park Trust received a £4.6m Heritage Lottery Fund grant to be used \"to complete the restoration of the site, and to tell its story to the highest modern standards\" on the condition that £1.7m of 'match funding' is raised by the Bletchley Park Trust. Just weeks later, Google contributed £550k and by June 2012 the trust had successfully raised £2.4m to unlock the grants to restore Huts 3 and 6, as well as develop its exhibition centre in Block C.",
"title": "Funding"
},
{
"paragraph_id": 64,
"text": "Additional income is raised by renting Block H to the National Museum of Computing, and some office space in various parts of the park to private firms.",
"title": "Funding"
},
{
"paragraph_id": 65,
"text": "Due to the COVID-19 pandemic the Trust expected to lose more than £2m in 2020 and be required to cut a third of its workforce. Former MP John Leech asked tech giants Amazon, Apple, Google, Facebook and Microsoft to donate £400,000 each to secure the future of the Trust. Leech had led the successful campaign to pardon Alan Turing and implement Turing's Law.",
"title": "Funding"
},
{
"paragraph_id": 66,
"text": "The National Museum of Computing is housed in Block H, which is rented from the Bletchley Park Trust. Its Colossus and Tunny galleries tell an important part of allied breaking of German codes during World War II. There is a working reconstruction of a Bombe and a rebuilt Colossus computer which was used on the high-level Lorenz cipher, codenamed Tunny by the British.",
"title": "Other organisations sharing the campus"
},
{
"paragraph_id": 67,
"text": "The museum, which opened in 2007, is an independent voluntary organisation that is governed by its own board of trustees. Its aim is \"To collect and restore computer systems particularly those developed in Britain and to enable people to explore that collection for inspiration, learning and enjoyment.\" Through its many exhibits, the museum displays the story of computing through the mainframes of the 1960s and 1970s, and the rise of personal computing in the 1980s. It has a policy of having as many of the exhibits as possible in full working order.",
"title": "Other organisations sharing the campus"
},
{
"paragraph_id": 68,
"text": "This consists of serviced office accommodation housed in Bletchley Park's Blocks A and E, and the upper floors of the Mansion. Its aim is to foster the growth and development of dynamic knowledge-based start-ups and other businesses.",
"title": "Other organisations sharing the campus"
},
{
"paragraph_id": 69,
"text": "In April 2020 Bletchley Park Capital Partners, a private company run by Tim Reynolds, Deputy Chairman of the National Museum of Computing, announced plans to sell off the freehold to part of the site containing former Block G for commercial development. Offers of between £4m and £6m were reportedly being sought for the 3 acre plot, for which planning permission for employment purposes was granted in 2005. Previously, the construction of a National College of Cyber Security for students aged from 16 to 19 years old had been envisaged on the site, to be housed in Block G after renovation with funds supplied by the Bletchley Park Science and Innovation Centre.",
"title": "Other organisations sharing the campus"
},
{
"paragraph_id": 70,
"text": "The Radio Society of Great Britain's National Radio Centre (including a library, radio station, museum and bookshop) are in a newly constructed building close to the main Bletchley Park entrance.",
"title": "Other organisations sharing the campus"
},
{
"paragraph_id": 71,
"text": "Not until July 2009 did the British government fully acknowledge the contribution of the many people working for the Government Code and Cypher School ('G C & C S') at Bletchley. Only then was a commemorative medal struck to be presented to those involved. The gilded medal bears the inscription G C & C S 1939-1945 Bletchley Park and its Outstations.",
"title": "Final recognition"
},
{
"paragraph_id": 72,
"text": "Bletchley Park is opposite Bletchley railway station. It is close to junctions 13 and 14 of the M1, about 50 miles (80 km) northwest of London.",
"title": "Location"
},
{
"paragraph_id": 73,
"text": "---",
"title": "See also"
}
] | Bletchley Park is an English country house and estate in Bletchley, Milton Keynes (Buckinghamshire) that became the principal centre of Allied code-breaking during the Second World War. The mansion was constructed during the years following 1883 for the financier and politician Sir Herbert Leon in the Victorian Gothic, Tudor, and Dutch Baroque styles, on the site of older buildings of the same name. During World War II, the estate housed the Government Code and Cypher School (GC&CS), which regularly penetrated the secret communications of the Axis Powers – most importantly the German Enigma and Lorenz ciphers. The GC&CS team of codebreakers included Alan Turing, Harry Golombek, Gordon Welchman, Hugh Alexander, Bill Tutte, and Stuart Milner-Barry. According to the official historian of British Intelligence, the "Ultra" intelligence produced at Bletchley shortened the war by two to four years, and without it the outcome of the war would have been uncertain. The team at Bletchley Park devised automatic machinery to help with decryption, culminating in the development of Colossus, the world's first programmable digital electronic computer. Codebreaking operations at Bletchley Park came to an end in 1946 and all information about the wartime operations was classified until the mid-1970s. After the war it had various uses including as a teacher-training college and local GPO headquarters. By 1990 the huts in which the codebreakers worked were being considered for demolition and redevelopment. The Bletchley Park Trust was formed in February 1992 to save large portions of the site from development. More recently, Bletchley Park has been open to the public, featuring interpretive exhibits and huts that have been rebuilt to appear as they did during their wartime operations. It receives hundreds of thousands of visitors annually. The separate National Museum of Computing, which includes a working replica Bombe machine and a rebuilt Colossus computer, is housed in Block H on the site. | 2001-08-30T17:01:52Z | 2023-12-30T15:14:30Z | [
"Template:Inflation",
"Template:Authority control",
"Template:Nowrap",
"Template:Location map",
"Template:Refbegin",
"Template:YouTube",
"Template:Cite book",
"Template:Use dmy dates",
"Template:Efn",
"Template:See also",
"Template:Main",
"Template:Snd",
"Template:EnigmaSeries",
"Template:Cite journal",
"Template:ISBN",
"Template:Cite web",
"Template:Cite news",
"Template:Dead link",
"Template:Use British English",
"Template:Convert",
"Template:Ndash",
"Template:Harvnb",
"Template:Webarchive",
"Template:Commons category",
"Template:Short description",
"Template:Sfn",
"Template:Notelist",
"Template:Reflist",
"Template:Annotated link",
"Template:Citation",
"Template:Cbignore",
"Template:Cite magazine",
"Template:Infobox museum",
"Template:Clear left",
"Template:Refend"
] | https://en.wikipedia.org/wiki/Bletchley_Park |
4,041 | Bede | Bede (/biːd/; Old English: Bēda [ˈbeːdɑ]; 672/3 – 26 May 735), also known as Saint Bede, The Venerable Bede, and Bede the Venerable (Latin: Beda Venerabilis), was an English monk, author, and scholar. He was one of the greatest teachers and writers of the Early Middle Ages, and his most famous work, Ecclesiastical History of the English People, gained him the title "The Father of English History". He served at the monastery of St Peter and its companion monastery of St Paul in the Kingdom of Northumbria of the Angles.
Born on lands belonging to the twin monastery of Monkwearmouth–Jarrow in present-day Tyne and Wear, England, Bede was sent to Monkwearmouth at the age of seven and later joined Abbot Ceolfrith at Jarrow. Both of them survived a plague that struck in 686 and killed a majority of the population there. While Bede spent most of his life in the monastery, he travelled to several abbeys and monasteries across the British Isles, even visiting the archbishop of York and King Ceolwulf of Northumbria.
His ecumenical writings were extensive and included a number of Biblical commentaries and other theological works of exegetical erudition. Another important area of study for Bede was the academic discipline of computus, otherwise known to his contemporaries as the science of calculating calendar dates. One of the more important dates Bede tried to compute was Easter, an effort that was mired in controversy. He also helped popularize the practice of dating forward from the birth of Christ (Anno Domini—in the year of our Lord), a practice which eventually became commonplace in medieval Europe. He is considered by many historians to be the most important scholar of antiquity for the period between the death of Pope Gregory I in 604 and the coronation of Charlemagne in 800.
In 1899, Pope Leo XIII declared him a Doctor of the Church. He is the only native of Great Britain to achieve this designation. Bede was moreover a skilled linguist and translator, and his work made the Latin and Greek writings of the early Church Fathers much more accessible to his fellow Anglo-Saxons, which contributed significantly to English Christianity. Bede's monastery had access to an impressive library which included works by Eusebius, Orosius, and many others.
Almost everything that is known of Bede's life is contained in the last chapter of his Ecclesiastical History of the English People, a history of the church in England. It was completed in about 731, and Bede implies that he was then in his fifty-ninth year, which would give a birth date in 672 or 673. A minor source of information is the letter by his disciple Cuthbert (not to be confused with the saint, Cuthbert, who is mentioned in Bede's work) which relates Bede's death. Bede, in the Historia, gives his birthplace as "on the lands of this monastery". He is referring to the twinned monasteries of Monkwearmouth and Jarrow, in modern-day Wearside and Tyneside respectively. There is also a tradition that he was born at Monkton, two miles from the site where the monastery at Jarrow was later built. Bede says nothing of his origins, but his connections with men of noble ancestry suggest that his own family was well-to-do. Bede's first abbot was Benedict Biscop, and the names "Biscop" and "Beda" both appear in a list of the kings of Lindsey from around 800, further suggesting that Bede came from a noble family.
Bede's name reflects West Saxon Bīeda (Anglian Bēda). It is an Old English short name formed on the root of bēodan "to bid, command". The name also occurs in the Anglo-Saxon Chronicle, s.a. 501, as Bieda, one of the sons of the Saxon founder of Portsmouth. The Liber Vitae of Durham Cathedral names two priests with this name, one of whom is presumably Bede himself. Some manuscripts of the Life of Cuthbert, one of Bede's works, mention that Cuthbert's own priest was named Bede; it is possible that this priest is the other name listed in the Liber Vitae.
At the age of seven, Bede was sent as a puer oblatus to the monastery of Monkwearmouth by his family to be educated by Benedict Biscop and later by Ceolfrith. Bede does not say whether it was already intended at that point that he would be a monk. It was fairly common in Ireland at this time for young boys, particularly those of noble birth, to be fostered out as an oblate; the practice was also likely to have been common among the Germanic peoples in England. Monkwearmouth's sister monastery at Jarrow was founded by Ceolfrith in 682, and Bede probably transferred to Jarrow with Ceolfrith that year.
The dedication stone for the church has survived (it was still extant as of 1969); it is dated 23 April 685, and as Bede would have been required to assist with menial tasks in his day-to-day life, it is possible that he helped to build the original church. In 686, plague broke out at Jarrow. The Life of Ceolfrith, written in about 710, records that only two surviving monks were capable of singing the full offices; one was Ceolfrith and the other a young boy, who according to the anonymous writer had been taught by Ceolfrith. The two managed to do the entire service of the liturgy until others could be trained. The young boy was almost certainly Bede, who would have been about 14.
When Bede was about 17 years old, Adomnán, the abbot of Iona Abbey, visited Monkwearmouth and Jarrow. Bede would probably have met the abbot during this visit, and it may be that Adomnán sparked his interest in the Easter dating controversy. In about 692, in his nineteenth year, Bede was ordained a deacon by his diocesan bishop, John, who was bishop of Hexham. The canonical age for the ordination of a deacon was 25; Bede's early ordination may mean that his abilities were considered exceptional, but it is also possible that the minimum age requirement was often disregarded. There might have been minor orders ranking below a deacon, but there is no record of whether Bede held any of these offices. In Bede's thirtieth year (about 702), he became a priest, with the ordination again performed by Bishop John.
In about 701 Bede wrote his first works, the De Arte Metrica and De Schematibus et Tropis; both were intended for use in the classroom. He continued to write for the rest of his life, eventually completing over 60 books, most of which have survived. Not all his output can be easily dated, and Bede may have worked on some texts over a period of many years. His last surviving work is a letter to Ecgbert of York, a former student, written in 734. A 6th-century Greek and Latin manuscript of Acts of the Apostles that is believed to have been used by Bede survives and is now in the Bodleian Library at the University of Oxford. It is known as the Codex Laudianus.
Bede may have worked on some of the Latin Bibles that were copied at Jarrow, one of which, the Codex Amiatinus, is now held by the Laurentian Library in Florence. Bede was a teacher as well as a writer; he enjoyed music and was said to be accomplished as a singer and as a reciter of poetry in the vernacular. It is possible that he suffered a speech impediment, but this depends on a phrase in the introduction to his verse life of St Cuthbert. Translations of this phrase differ, and it is uncertain whether Bede intended to say that he was cured of a speech problem, or merely that he was inspired by the saint's works.
In 708, some monks at Hexham accused Bede of having committed heresy in his work De Temporibus. The standard theological view of world history at the time was known as the Six Ages of the World; in his book, Bede calculated the age of the world for himself, rather than accepting the authority of Isidore of Seville, and came to the conclusion that Christ had been born 3,952 years after the creation of the world, rather than the figure of over 5,000 years that was commonly accepted by theologians. The accusation was made in front of Wilfrid, the bishop of Hexham, who was present at a feast when some drunken monks voiced it. Wilfrid did not respond to the accusation, but a monk present relayed the episode to Bede, who replied within a few days to the monk, writing a letter setting forth his defence and asking that the letter also be read to Wilfrid. Bede had another brush with Wilfrid, for the historian says that he met Wilfrid sometime between 706 and 709 and discussed Æthelthryth, the abbess of Ely. Wilfrid had been present at the exhumation of her body in 695, and Bede questioned the bishop about the exact circumstances of the body and asked for more details of her life, as Wilfrid had been her advisor.
In 733, Bede travelled to York to visit Ecgbert, who was then bishop of York. The See of York was elevated to an archbishopric in 735, and it is likely that Bede and Ecgbert discussed the proposal for the elevation during his visit. Bede hoped to visit Ecgbert again in 734 but was too ill to make the journey. Bede also travelled to the monastery of Lindisfarne and at some point visited the otherwise unknown monastery of a monk named Wicthed, a visit that is mentioned in a letter to that monk. Because of his widespread correspondence with others throughout the British Isles, and because many of the letters imply that Bede had met his correspondents, it is likely that Bede travelled to some other places, although nothing further about timing or locations can be guessed.
It seems certain that he did not visit Rome, however, as he did not mention it in the autobiographical chapter of his Historia Ecclesiastica. Nothhelm, a correspondent of Bede's who assisted him by finding documents for him in Rome, is known to have visited Bede, though the date cannot be determined beyond the fact that it was after Nothhelm's visit to Rome. Except for a few visits to other monasteries, his life was spent in a round of prayer, observance of the monastic discipline and study of the Sacred Scriptures. He was considered the most learned man of his time.
Bede died on the Feast of the Ascension, Thursday, 26 May 735, on the floor of his cell, singing "Glory be to the Father and to the Son and to the Holy Spirit" and was buried at Jarrow. Cuthbert, a disciple of Bede's, wrote a letter to a Cuthwin (of whom nothing else is known), describing Bede's last days and his death. According to Cuthbert, Bede fell ill, "with frequent attacks of breathlessness but almost without pain", before Easter. On the Tuesday, two days before Bede died, his breathing became worse and his feet swelled. He continued to dictate to a scribe, however, and despite spending the night awake in prayer he dictated again the following day.
At three o'clock, according to Cuthbert, he asked for a box of his to be brought and distributed among the priests of the monastery "a few treasures" of his: "some pepper, and napkins, and some incense". That night he dictated a final sentence to the scribe, a boy named Wilberht, and died soon afterwards. The account of Cuthbert does not make entirely clear whether Bede died before midnight or after. However, by the reckoning of Bede's time, passage from the old day to the new occurred at sunset, not midnight, and Cuthbert is clear that he died after sunset. Thus, while his box was brought at three o'clock Wednesday afternoon of 25 May, by the time of the final dictation it was considered 26 May, although it might still have been 25 May in modern usage.
Cuthbert's letter also relates a five-line poem in the vernacular that Bede composed on his deathbed, known as "Bede's Death Song". It is the most widely copied Old English poem and appears in 45 manuscripts, but its attribution to Bede is not certain: not all manuscripts name Bede as the author, and the ones that do are of later origin than those that do not. Bede's remains may have been transferred to Durham Cathedral in the 11th century; his tomb there was looted in 1541, but the contents were probably re-interred in the Galilee chapel at the cathedral.
One further oddity in his writings is that in one of his works, the Commentary on the Seven Catholic Epistles, he writes in a manner that gives the impression he was married. The section in question is the only one in that work that is written in the first person. Bede says: "Prayers are hindered by the conjugal duty because as often as I perform what is due to my wife I am not able to pray." Another passage, in the Commentary on Luke, also mentions a wife in the first person: "Formerly I possessed a wife in the lustful passion of desire and now I possess her in honourable sanctification and true love of Christ." The historian Benedicta Ward argues that these passages are Bede employing a rhetorical device.
Bede wrote scientific, historical and theological works, reflecting the range of his writings from music and metrics to exegetical Scripture commentaries. He knew patristic literature, as well as Pliny the Elder, Virgil, Lucretius, Ovid, Horace and other classical writers. He knew some Greek. Bede's scriptural commentaries employed the allegorical method of interpretation, and his history includes accounts of miracles, which to modern historians has seemed at odds with his critical approach to the materials in his history. Modern studies have shown the important role such concepts played in the world-view of Early Medieval scholars. Although Bede is mainly studied as a historian now, in his time his works on grammar, chronology, and biblical studies were as important as his historical and hagiographical works. The non-historical works contributed greatly to the Carolingian renaissance. He has been credited with writing a penitential, though his authorship of this work is disputed.
Bede's best-known work is the Historia ecclesiastica gentis Anglorum, or An Ecclesiastical History of the English People, completed in about 731. Bede was aided in writing this book by Albinus, abbot of St Augustine's Abbey, Canterbury. The first of the five books begins with some geographical background and then sketches the history of England, beginning with Caesar's invasion in 55 BC. A brief account of Christianity in Roman Britain, including the martyrdom of St Alban, is followed by the story of Augustine's mission to England in 597, which brought Christianity to the Anglo-Saxons.
The second book begins with the death of Gregory the Great in 604 and follows the further progress of Christianity in Kent and the first attempts to evangelise Northumbria. These ended in disaster when Penda, the pagan king of Mercia, killed the newly Christian Edwin of Northumbria at the Battle of Hatfield Chase in about 632. The setback was temporary, and the third book recounts the growth of Christianity in Northumbria under kings Oswald of Northumbria and Oswy. The climax of the third book is the account of the Council of Whitby, traditionally seen as a major turning point in English history. The fourth book begins with the consecration of Theodore as Archbishop of Canterbury and recounts Wilfrid's efforts to bring Christianity to the Kingdom of Sussex.
The fifth book brings the story up to Bede's day and includes an account of missionary work in Frisia and of the conflict with the British church over the correct dating of Easter. Bede wrote a preface for the work, in which he dedicates it to Ceolwulf, king of Northumbria. The preface mentions that Ceolwulf received an earlier draft of the book; presumably Ceolwulf knew enough Latin to understand it, and he may even have been able to read it. The preface makes it clear that Ceolwulf had requested the earlier copy, and Bede had asked for Ceolwulf's approval; this correspondence with the king indicates that Bede's monastery had connections among the Northumbrian nobility.
The monastery at Wearmouth-Jarrow had an excellent library. Both Benedict Biscop and Ceolfrith had acquired books from the Continent, and in Bede's day the monastery was a renowned centre of learning. It has been estimated that there were about 200 books in the monastic library.
For the period prior to Augustine's arrival in 597, Bede drew on earlier writers, including Solinus. He had access to two works of Eusebius: the Historia Ecclesiastica, and also the Chronicon, though he had neither in the original Greek; instead he had a Latin translation of the Historia, by Rufinus, and Jerome's translation of the Chronicon. He also knew Orosius's Adversus Paganos, and Gregory of Tours' Historia Francorum, both Christian histories, as well as the work of Eutropius, a pagan historian. He used Constantius's Life of Germanus as a source for Germanus's visits to Britain.
Bede's account of the Anglo-Saxon settlement of Britain is drawn largely from Gildas's De Excidio et Conquestu Britanniae. Bede would also have been familiar with more recent accounts such as Stephen of Ripon's Life of Wilfrid, and the anonymous Life of Gregory the Great and Life of Cuthbert. He also drew on Josephus's Antiquities, and the works of Cassiodorus, and there was a copy of the Liber Pontificalis in Bede's monastery. Bede quotes from several classical authors, including Cicero, Plautus, and Terence, but he may have had access to their work via a Latin grammar rather than directly. However, it is clear he was familiar with the works of Virgil and with Pliny the Elder's Natural History, and his monastery also owned copies of the works of Dionysius Exiguus.
He probably drew his account of Alban from a life of that saint which has not survived. He acknowledges two other lives of saints directly; one is a life of Fursa, and the other of Æthelburh; the latter no longer survives. He also had access to a life of Ceolfrith. Some of Bede's material came from oral traditions, including a description of the physical appearance of Paulinus of York, who had died nearly 90 years before Bede's Historia Ecclesiastica was written.
Bede had correspondents who supplied him with material. Albinus, the abbot of the monastery in Canterbury, provided much information about the church in Kent, and with the assistance of Nothhelm, at that time a priest in London, obtained copies of Gregory the Great's correspondence from Rome relating to Augustine's mission. Almost all of Bede's information regarding Augustine is taken from these letters. Bede acknowledged his correspondents in the preface to the Historia Ecclesiastica; he was in contact with Bishop Daniel of Winchester for information about the history of the church in Wessex, and also wrote to the monastery at Lastingham for information about Cedd and Chad. Bede also mentions an Abbot Esi as a source for the affairs of the East Anglian church, and Bishop Cynibert for information about Lindsey.
The historian Walter Goffart argues that Bede based the structure of the Historia on three works, using them as the framework around which its three main sections were built. For the early part of the work, up until the Gregorian mission, Goffart feels that Bede used De excidio. The second section, detailing the Gregorian mission of Augustine of Canterbury, was framed on the Life of Gregory the Great written at Whitby. The last section, detailing events after the Gregorian mission, Goffart feels was modelled on the Life of Wilfrid. Most of Bede's informants for the period after Augustine's mission came from the eastern part of Britain, leaving significant gaps in the knowledge of the western areas, which were those areas likely to have a native Briton presence.
Bede's stylistic models included some of the same authors from whom he drew the material for the earlier parts of his history. His introduction imitates the work of Orosius, and his title is an echo of Eusebius's Historia Ecclesiastica. Bede also followed Eusebius in taking the Acts of the Apostles as the model for the overall work: where Eusebius used the Acts as the theme for his description of the development of the church, Bede made it the model for his history of the Anglo-Saxon church. Bede quoted his sources at length in his narrative, as Eusebius had done. Bede also appears to have taken quotes directly from his correspondents at times. For example, he almost always uses the terms "Australes" and "Occidentales" for the South and West Saxons respectively, but in a passage in the first book he uses "Meridiani" and "Occidui" instead, as perhaps his informant had done. At the end of the work, Bede adds a brief autobiographical note; this was an idea taken from Gregory of Tours' earlier History of the Franks.
Bede's work as a hagiographer and his detailed attention to dating were both useful preparations for the task of writing the Historia Ecclesiastica. His interest in computus, the science of calculating the date of Easter, was also useful in the account he gives of the controversy between the British and Anglo-Saxon church over the correct method of obtaining the Easter date.
Bede is described by Michael Lapidge as "without question the most accomplished Latinist produced in these islands in the Anglo-Saxon period". His Latin has been praised for its clarity, but his style in the Historia Ecclesiastica is not simple. He knew rhetoric and often used figures of speech and rhetorical forms which cannot easily be reproduced in translation, depending as they often do on the connotations of the Latin words. However, unlike contemporaries such as Aldhelm, whose Latin is full of difficulties, Bede's own text is easy to read. In the words of Charles Plummer, one of the best-known editors of the Historia Ecclesiastica, Bede's Latin is "clear and limpid ... it is very seldom that we have to pause to think of the meaning of a sentence ... Alcuin rightly praises Bede for his unpretending style."
Bede's primary intention in writing the Historia Ecclesiastica was to show the growth of the united church throughout England. The native Britons, whose Christian church survived the departure of the Romans, earn Bede's ire for refusing to help convert the Anglo-Saxons; by the end of the Historia the English, and their church, are dominant over the Britons. This goal, of showing the movement towards unity, explains Bede's animosity towards the British method of calculating Easter: much of the Historia is devoted to a history of the dispute, including the final resolution at the Synod of Whitby in 664. Bede is also concerned to show the unity of the English, despite the disparate kingdoms that still existed when he was writing. He also wants to instruct the reader by spiritual example and to entertain, and to the latter end he adds stories about many of the places and people about which he wrote.
N. J. Higham argues that Bede designed his work to promote his reform agenda to Ceolwulf, the Northumbrian king. Bede painted a highly optimistic picture of the current situation in the Church, as opposed to the more pessimistic picture found in his private letters.
Bede's extensive use of miracles can prove difficult for readers who consider him a more or less reliable historian but do not accept the possibility of miracles. Yet his accounts of events and of miracles alike reflect the same integrity and regard for accuracy and truth, expressed in terms both of historical fact and of a continuing tradition of Christian faith. Bede, like Gregory the Great whom Bede quotes on the subject in the Historia, felt that faith brought about by miracles was a stepping stone to a higher, truer faith, and that as a result miracles had their place in a work designed to instruct.
Bede is somewhat reticent about the career of Wilfrid, a contemporary and one of the most prominent clerics of his day. This may be because Wilfrid's opulent lifestyle was uncongenial to Bede's monastic mind; it may also be that the events of Wilfrid's life, divisive and controversial as they were, simply did not fit with Bede's theme of the progression to a unified and harmonious church.
Bede's account of the early migrations of the Angles and Saxons to England omits any mention of a movement of those peoples across the English Channel from Britain to Brittany described by Procopius, who was writing in the sixth century. Frank Stenton describes this omission as "a scholar's dislike of the indefinite"; traditional material that could not be dated or used for Bede's didactic purposes had no interest for him.
Bede was a Northumbrian, and this tinged his work with a local bias. The sources to which he had access gave him less information about the west of England than for other areas. He says relatively little about the achievements of Mercia and Wessex, omitting, for example, any mention of Boniface, a West Saxon missionary to the continent of some renown and of whom Bede had almost certainly heard, though Bede does discuss Northumbrian missionaries to the continent. He is also parsimonious in his praise for Aldhelm, a West Saxon who had done much to convert the native Britons to the Roman form of Christianity. He lists seven kings of the Anglo-Saxons whom he regards as having held imperium, or overlordship; only one king of Wessex, Ceawlin, is listed as Bretwalda, and none from Mercia, though elsewhere he acknowledges the secular power several of the Mercians held. The historian Robin Fleming states that, because Northumbria had been diminished by Mercian power, Bede was so hostile to Mercia that he consulted no Mercian informants and included no stories about its saints.
Bede relates the story of Augustine's mission from Rome, and tells how the British clergy refused to assist Augustine in the conversion of the Anglo-Saxons. This, combined with Gildas's negative assessment of the British church at the time of the Anglo-Saxon invasions, led Bede to a very critical view of the native church. However, Bede ignores the fact that at the time of Augustine's mission, the history between the two was one of warfare and conquest, which, in the words of Barbara Yorke, would have naturally "curbed any missionary impulses towards the Anglo-Saxons from the British clergy."
At the time Bede wrote the Historia Ecclesiastica, there were two common ways of referring to dates. One was to use indictions, which were 15-year cycles, counting from 312 AD. There were three different varieties of indiction, each starting on a different day of the year. The other approach was to use regnal years—the reigning Roman emperor, for example, or the ruler of whichever kingdom was under discussion. This meant that in discussing conflicts between kingdoms, the date would have to be given in the regnal years of all the kings involved. Bede used both these approaches on occasion but adopted a third method as his main approach to dating: the Anno Domini method invented by Dionysius Exiguus. Although Bede did not invent this method, his adoption of it and his promulgation of it in De Temporum Ratione, his work on chronology, is the main reason it is now so widely used. Bede's Easter table, contained in De Temporum Ratione, was developed from Dionysius Exiguus' Easter table.
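The indiction arithmetic is easy to reproduce in modern terms. The following Python sketch, a modern illustration rather than anything found in Bede, converts an AD year to its indiction number, assuming the common convention that the first cycle began in AD 312/313 and ignoring the three variant starting days mentioned above:

```python
def indiction(year_ad: int) -> int:
    """Return the indiction (1-15) of an AD year, assuming the
    conventional epoch in which AD 313 falls in indiction 1."""
    return (year_ad + 2) % 15 + 1

# Example: under this convention, the year of the Synod of Whitby,
# AD 664, falls in indiction 7.
print(indiction(664))  # -> 7
```

The limitation Bede faced is visible here: a cycle number alone does not identify a year, which is one reason a fixed era such as Anno Domini proved so much more convenient.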
The Historia Ecclesiastica was copied often in the Middle Ages, and about 160 manuscripts containing it survive. About half of those are located on the European continent, rather than in the British Isles. Most of the 8th- and 9th-century texts of Bede's Historia come from the northern parts of the Carolingian Empire. This total does not include manuscripts with only a part of the work, of which another 100 or so survive. It was printed for the first time between 1474 and 1482, probably at Strasbourg.
Modern historians have studied the Historia extensively, and several editions have been produced. For many years, early Anglo-Saxon history was essentially a retelling of the Historia, but recent scholarship has focused as much on what Bede did not write as on what he did. The belief that the Historia was the culmination of Bede's works, the aim of all his scholarship, was common among historians in the past but is no longer accepted by most scholars.
Modern historians and editors of Bede have been lavish in their praise of his achievement in the Historia Ecclesiastica. Stenton regards it as one of the "small class of books which transcend all but the most fundamental conditions of time and place", and regards its quality as dependent on Bede's "astonishing power of co-ordinating the fragments of information which came to him through tradition, the relation of friends, or documentary evidence ... In an age where little was attempted beyond the registration of fact, he had reached the conception of history." Patrick Wormald describes him as "the first and greatest of England's historians".
The Historia Ecclesiastica has given Bede a high reputation, but his concerns were different from those of a modern writer of history. His focus on the history of the organisation of the English church, and on heresies and the efforts made to root them out, led him to exclude the secular history of kings and kingdoms except where a moral lesson could be drawn or where they illuminated events in the church. Besides the Anglo-Saxon Chronicle, the medieval writers William of Malmesbury, Henry of Huntingdon, and Geoffrey of Monmouth used his works as sources and inspirations. Early modern writers, such as Polydore Vergil and Matthew Parker, the Elizabethan Archbishop of Canterbury, also utilised the Historia, and his works were used by both Protestant and Catholic sides in the wars of religion.
Some historians have questioned the reliability of some of Bede's accounts. One historian, Charlotte Behr, argues that the Historia's account of the arrival of the Germanic invaders in Kent should not be considered to relate what actually happened, but rather to relate myths that were current in Kent during Bede's time.
It is likely that Bede's work, because it was so widely copied, discouraged others from writing histories and may even have led to the disappearance of manuscripts containing older historical works.
As Chapter 66 of his On the Reckoning of Time, in 725 Bede wrote the Greater Chronicle (chronica maiora), which sometimes circulated as a separate work. For recent events the Chronicle, like his Ecclesiastical History, relied upon Gildas, upon a version of the Liber Pontificalis current at least to the papacy of Pope Sergius I (687–701), and upon other sources. For earlier events he drew on Eusebius's Chronikoi Kanones. Unlike his other works, the Chronicle dates events from the creation of the world, the Anno Mundi era.
His other historical works included lives of the abbots of Wearmouth and Jarrow, as well as verse and prose lives of St Cuthbert, an adaptation of Paulinus of Nola's Life of St Felix, and a translation of the Greek Passion of St Anastasius. He also created a listing of saints, the Martyrology.
In his own time, Bede was as well known for his biblical commentaries and for his exegetical and other theological works as he was for his histories. The majority of his writings were of this type and covered both the Old Testament and the New Testament. Most survived the Middle Ages, but a few were lost. It was for his theological writings that he earned the title of Doctor Anglorum and was declared a saint.
Bede synthesised and transmitted the learning of his predecessors, and also made careful, judicious innovations in knowledge that had theological implications, such as recalculating the age of the Earth, for which he was censured before the accusations of heresy were put to rest; his views were eventually championed by Archbishop Ussher in the seventeenth century (see below). In order to do this, he learned Greek and attempted to learn Hebrew. He spent time reading and rereading both the Old and the New Testaments. He mentions that he studied from a text of Jerome's Vulgate, which itself had been translated from the Hebrew text.
He also studied both the Latin and the Greek Fathers of the Church. In the monastic library at Jarrow were numerous books by theologians, including works by Basil, Cassian, John Chrysostom, Isidore of Seville, Origen, Gregory of Nazianzus, Augustine of Hippo, Jerome, Pope Gregory I, Ambrose of Milan, Cassiodorus, and Cyprian. He used these, in conjunction with the Biblical texts themselves, to write his commentaries and other theological works.
He had a Latin translation by Evagrius of Athanasius's Life of Antony and a copy of Sulpicius Severus' Life of St Martin. He also used lesser known writers, such as Fulgentius, Julian of Eclanum, Tyconius, and Prosper of Aquitaine. Bede was the first to refer to Jerome, Augustine, Pope Gregory and Ambrose as the four Latin Fathers of the Church. It is clear from Bede's own comments that he felt his calling was to explain to his students and readers the theology and thoughts of the Church Fathers.
Bede also wrote homilies, works written to explain theology used in worship services. He wrote homilies on the major Christian seasons such as Advent, Lent, or Easter, as well as on other subjects such as anniversaries of significant events.
Both types of Bede's theological works circulated widely in the Middle Ages. Several of his biblical commentaries were incorporated into the Glossa Ordinaria, an 11th-century collection of biblical commentaries. Some of Bede's homilies were collected by Paul the Deacon, and they were used in that form in the Monastic Office. Boniface used Bede's homilies in his missionary efforts on the continent.
Bede sometimes included in his theological books an acknowledgement of the predecessors on whose works he drew. In two cases he left instructions that his marginal notes, which gave the details of his sources, should be preserved by the copyist, and he may have originally added marginal comments about his sources to others of his works. Where he does not specify, it is still possible to identify books to which he must have had access by quotations that he uses. A full catalogue of the library available to Bede in the monastery cannot be reconstructed, but it is possible to tell, for example, that Bede was very familiar with the works of Virgil.
There is little evidence that he had access to any other of the pagan Latin writers—he quotes many of these writers, but the quotes are almost always found in the Latin grammars that were common in his day, one or more of which would certainly have been at the monastery. Another difficulty is that manuscripts of early writers were often incomplete: it is apparent that Bede had access to Pliny's Encyclopaedia, for example, but it seems that the version he had was missing book xviii, since he did not quote from it in his De temporum ratione.
Bede's works included Commentary on Revelation, Commentary on the Catholic Epistles, Commentary on Acts, Reconsideration on the Books of Acts, On the Gospel of Mark, On the Gospel of Luke, and Homilies on the Gospels. At the time of his death he was working on a translation of the Gospel of John into English, a task that occupied the last 40 days of his life. When the last passage had been translated he said: "All is finished." The works dealing with the Old Testament included Commentary on Samuel, Commentary on Genesis, Commentaries on Ezra and Nehemiah, On the Temple, On the Tabernacle, Commentaries on Tobit, Commentaries on Proverbs, Commentaries on the Song of Songs, and Commentaries on the Canticle of Habakkuk. The works on Ezra, the tabernacle and the temple were especially influenced by Gregory the Great's writings.
De temporibus, or On Time, written in about 703, provides an introduction to the principles of Easter computus. This was based on parts of Isidore of Seville's Etymologies, and Bede also included a chronology of the world which was derived from Eusebius, with some revisions based on Jerome's translation of the Bible. In about 723, Bede wrote a longer work on the same subject, On the Reckoning of Time, which was influential throughout the Middle Ages. He also wrote several shorter letters and essays discussing specific aspects of computus.
On the Reckoning of Time (De temporum ratione) included an introduction to the traditional ancient and medieval view of the cosmos, including an explanation of how the spherical Earth influenced the changing length of daylight and of how the seasonal motion of the Sun and Moon influenced the changing appearance of the new moon at evening twilight. Bede also records the effect of the Moon on tides. He shows that the twice-daily timing of tides is related to the Moon and that the lunar monthly cycle of spring and neap tides is also related to the Moon's position. He goes on to note that the times of tides vary along the same coast and that the water movements cause low tide at one place when there is high tide elsewhere. Since the focus of his book was the computus, Bede gave instructions for computing the date of Easter from the date of the Paschal full moon, for calculating the motion of the Sun and Moon through the zodiac, and for many other calculations related to the calendar. He also gives some information about the months of the Anglo-Saxon calendar.
Codices of Bede's Easter table are normally found together with his De temporum ratione. The table, an exact extension of Dionysius Exiguus' Paschal table covering the interval AD 532–1063, contains a 532-year Paschal cycle based on the so-called classical Alexandrian 19-year lunar cycle, a close variant of Bishop Theophilus' 19-year lunar cycle proposed by Annianus and adopted by Bishop Cyril of Alexandria around AD 425. The ultimate predecessor of this Metonic 19-year lunar cycle, similar in principle though different in detail, is the one devised by Anatolius around AD 260.
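In modern terms the reckoning embodied in these tables can be expressed compactly. The following Python sketch uses the standard algorithm for Julian-calendar Easter (in the form given by Jean Meeus), a reconstruction of the Dionysian computus rather than Bede's own tabular method, and illustrates the 532-year periodicity:

```python
def julian_easter(year: int) -> tuple[int, int]:
    """Return (month, day) of Easter in the Julian calendar,
    following the Dionysian/Alexandrian reckoning."""
    a = year % 4                      # position in the 4-year leap-year cycle
    b = year % 7                      # weekday drift of the solar year
    c = year % 19                     # position in the 19-year lunar cycle
    d = (19 * c + 15) % 30            # days from 21 March to the Paschal full moon
    e = (2 * a + 4 * b - d + 34) % 7  # days from the full moon to the next Sunday
    month, day = divmod(d + e + 114, 31)
    return month, day + 1             # month 3 = March, month 4 = April

# Because 532 = 4 x 7 x 19, every residue above repeats after 532 years,
# so the whole sequence of Easter dates has the period of Bede's cycle:
assert julian_easter(664) == julian_easter(664 + 532)
print(julian_easter(664))  # -> (4, 21), i.e. 21 April in the Julian calendar
```

The assertion makes the structural point of the 532-year cycle explicit: since the leap-year, weekday and lunar cycles all divide 532, a single table of that length suffices forever.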
For calendric purposes, Bede made a new calculation of the age of the world since the creation, which he dated to 3952 BC. Because of his innovations in computing the age of the world, he was accused of heresy at the table of Bishop Wilfrid, his chronology being contrary to accepted calculations. Once informed of the accusations of these "lewd rustics," Bede refuted them in his Letter to Plegwin.
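The relation between this era and modern BC/AD reckoning is simple arithmetic. A minimal sketch, assuming Bede's creation date of 3952 BC and the usual convention that 1 BC is followed directly by AD 1 (there is no year zero):

```python
CREATION_BC = 3952  # Bede's date for the creation

def anno_mundi_to_label(am: int) -> str:
    """Convert a year in Bede's Anno Mundi era to a BC/AD label."""
    if am <= CREATION_BC:
        return f"{CREATION_BC + 1 - am} BC"
    return f"AD {am - CREATION_BC}"

print(anno_mundi_to_label(1))     # -> "3952 BC", the year of creation
print(anno_mundi_to_label(3952))  # -> "1 BC"; by Bede's reckoning Christ
                                  #    was born 3,952 years after creation
```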
In addition to these works on astronomical timekeeping, he also wrote De natura rerum, or On the Nature of Things, modelled in part after the work of the same title by Isidore of Seville. His works were so influential that late in the ninth century Notker the Stammerer, a monk of the Monastery of St Gall in Switzerland, wrote that "God, the orderer of natures, who raised the Sun from the East on the fourth day of Creation, in the sixth day of the world has made Bede rise from the West as a new Sun to illuminate the whole Earth".
Bede wrote some works designed to help teach grammar in the abbey school. One of these was De arte metrica, a discussion of the composition of Latin verse, drawing on previous grammarians' work. It was based on Donatus's De pedibus and Servius's De finalibus and used examples from Christian poets as well as Virgil. It became a standard text for the teaching of Latin verse during the next few centuries. Bede dedicated this work to Cuthbert, apparently a student, for he is named "beloved son" in the dedication, and Bede says "I have laboured to educate you in divine letters and ecclesiastical statutes." De orthographia is a work on orthography, designed to help a medieval reader of Latin with unfamiliar abbreviations and words from classical Latin works. Although it could serve as a textbook, it appears to have been mainly intended as a reference work. The date of composition for both of these works is unknown.
De schematibus et tropis sacrae scripturae discusses the Bible's use of rhetoric. Bede was familiar with pagan authors such as Virgil, but it was not considered appropriate to teach biblical grammar from such texts, and Bede argues for the superiority of Christian texts in understanding Christian literature. Similarly, his text on poetic metre uses only Christian poetry for examples.
A number of poems have been attributed to Bede. His poetic output has been systematically surveyed and edited by Michael Lapidge, who concluded that the following works belong to Bede: the Versus de die iudicii ("verses on the day of Judgement", found complete in 33 manuscripts and fragmentarily in 10); the metrical Vita Sancti Cudbercti ("Life of St Cuthbert"); and two collections of verse mentioned in the Historia ecclesiastica V.24.2. Bede names the first of these collections as "librum epigrammatum heroico metro siue elegiaco" ("a book of epigrams in the heroic or elegiac metre"), and much of its content has been reconstructed by Lapidge from scattered attestations under the title Liber epigrammatum. The second is named as "liber hymnorum diuerso metro siue rythmo" ("a book of hymns, diverse in metre or rhythm"); this has been reconstructed by Lapidge as containing ten liturgical hymns, one paraliturgical hymn (for the Feast of St Æthelthryth), and four other hymn-like compositions.
According to his disciple Cuthbert, Bede was doctus in nostris carminibus ("learned in our songs"). Cuthbert's letter on Bede's death, the Epistola Cuthberti de obitu Bedae, moreover, is commonly understood to indicate that Bede composed a five-line vernacular poem known to modern scholars as Bede's Death Song:
And he used to repeat that sentence from St Paul "It is a fearful thing to fall into the hands of the living God," and many other verses of Scripture, urging us thereby to awake from the slumber of the soul by thinking in good time of our last hour. And in our own language—for he was familiar with English poetry—speaking of the soul's dread departure from the body:
As Opland notes, however, it is not entirely clear that Cuthbert is attributing this text to Bede: most manuscripts of the latter do not use a finite verb to describe Bede's presentation of the song, and the theme was relatively common in Old English and Anglo-Latin literature. The fact that Cuthbert's description places the performance of the Old English poem in the context of a series of quoted passages from Sacred Scripture might be taken as evidence simply that Bede also cited analogous vernacular texts.
On the other hand, the inclusion of the Old English text of the poem in Cuthbert's Latin letter, the observation that Bede "was learned in our song," and the fact that Bede composed a Latin poem on the same subject all point to the possibility of his having written it. By citing the poem directly, Cuthbert seems to imply that its particular wording was somehow important, either since it was a vernacular poem endorsed by a scholar who evidently frowned upon secular entertainment or because it is a direct quotation of Bede's last original composition.
There is no evidence of a cult being paid to Bede in England in the 8th century. One reason for this may be that he died on the feast day of Augustine of Canterbury. Later, when he was venerated in England, he was either commemorated after Augustine on 26 May, or his feast was moved to 27 May. He was, however, venerated outside England, mainly through the efforts of Boniface and Alcuin, both of whom promoted the cult on the continent. Boniface wrote repeatedly back to England during his missionary efforts, requesting copies of Bede's theological works.
Alcuin, who was taught at the school set up in York by Bede's pupil Ecgbert, praised Bede as an example for monks to follow and was instrumental in disseminating Bede's works to all of Alcuin's friends. Bede's cult became prominent in England during the 10th-century revival of monasticism and by the 14th century had spread to many of the cathedrals of England. Wulfstan, Bishop of Worcester was a particular devotee of Bede's, dedicating a church to him in 1062, which was Wulfstan's first undertaking after his consecration as bishop.
His body was 'translated' (the ecclesiastical term for the relocation of relics) from Jarrow to Durham Cathedral around 1020, where it was placed in the same tomb as St Cuthbert. In 1370 Bede's remains were moved to a shrine in the Galilee Chapel at Durham Cathedral. The shrine was destroyed during the English Reformation, but the bones were reburied in the chapel. In 1831 the bones were dug up and then reburied in a new tomb, which is still there. Other relics were claimed by York, Glastonbury and Fulda.
His scholarship and importance to Catholicism were recognised in 1899 when the Vatican declared him a Doctor of the Church. He is the only Englishman named a Doctor of the Church. He is also the only Englishman in Dante's Paradise (Paradiso X.130), mentioned among theologians and doctors of the church in the same canto as Isidore of Seville and the Scot Richard of St Victor.
His feast day was included in the General Roman Calendar in 1899, for celebration on 27 May rather than on his date of death, 26 May, which was then the feast day of St Augustine of Canterbury. He is venerated in the Catholic Church, in the Church of England and in the Episcopal Church (United States) on 25 May, and in the Eastern Orthodox Church, with a feast day on 27 May (Βεδέα του Ομολογητού).
Bede became known as Venerable Bede (Latin: Beda Venerabilis) by the 9th century because of his holiness, but this was not linked to consideration for sainthood by the Catholic Church. According to one legend, the epithet was miraculously supplied by angels, thus completing his unfinished epitaph. It is first attested in connection with Bede in the 9th century, when he was grouped with others who were called "venerable" at two ecclesiastical councils held at Aachen in 816 and 836. Paul the Deacon then referred to him as venerable consistently. By the 11th and 12th centuries, it had become commonplace.
Bede's reputation as a historian, based mostly on the Historia Ecclesiastica, remains strong. Thomas Carlyle called him "the greatest historical writer since Herodotus". Walter Goffart says of Bede that he "holds a privileged and unrivalled place among first historians of Christian Europe". He is patron of Beda College in Rome which prepares older men for the Roman Catholic priesthood. His life and work have been celebrated with the annual Jarrow Lecture, held at St Paul's Church, Jarrow, since 1958.
Bede has been described as a progressive scholar, who made Latin and Greek teachings accessible to his fellow Anglo-Saxons.
Jarrow Hall (formerly Bede's World), in Jarrow, is a museum that celebrates the history of Bede and other parts of English heritage, on the site where he lived.
Bede Metro station, part of the Tyne and Wear Metro light rail network, is named after him. | [
{
"paragraph_id": 0,
"text": "Bede (/biːd/; Old English: Bēda [ˈbeːdɑ]; 672/3 – 26 May 735), also known as Saint Bede, The Venerable Bede, and Bede the Venerable (Latin: Beda Venerabilis), was an English monk and an author and scholar. He was one of the greatest teachers and writers during the Early Middle Ages, and his most famous work, Ecclesiastical History of the English People, gained him the title \"The Father of English History\". He served at the monastery of St Peter and its companion monastery of St Paul in the Kingdom of Northumbria of the Angles.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Born on lands belonging to the twin monastery of Monkwearmouth–Jarrow in present-day Tyne and Wear, England, Bede was sent to Monkwearmouth at the age of seven and later joined Abbot Ceolfrith at Jarrow. Both of them survived a plague that struck in 686 and killed a majority of the population there. While Bede spent most of his life in the monastery, he travelled to several abbeys and monasteries across the British Isles, even visiting the archbishop of York and King Ceolwulf of Northumbria.",
"title": ""
},
{
"paragraph_id": 2,
"text": "His ecumenical writings were extensive and included a number of Biblical commentaries and other theological works of exegetical erudition. Another important area of study for Bede was the academic discipline of computus, otherwise known to his contemporaries as the science of calculating calendar dates. One of the more important dates Bede tried to compute was Easter, an effort that was mired in controversy. He also helped popularize the practice of dating forward from the birth of Christ (Anno Domini—in the year of our Lord), a practice which eventually became commonplace in medieval Europe. He is considered by many historians to be the most important scholar of antiquity for the period between the death of Pope Gregory I in 604 and the coronation of Charlemagne in 800.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In 1899, Pope Leo XIII declared him a Doctor of the Church. He is the only native of Great Britain to achieve this designation. Bede was moreover a skilled linguist and translator, and his work made the Latin and Greek writings of the early Church Fathers much more accessible to his fellow Anglo-Saxons, which contributed significantly to English Christianity. Bede's monastery had access to an impressive library which included works by Eusebius, Orosius, and many others.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Almost everything that is known of Bede's life is contained in the last chapter of his Ecclesiastical History of the English People, a history of the church in England. It was completed in about 731, and Bede implies that he was then in his fifty-ninth year, which would give a birth date in 672 or 673. A minor source of information is the letter by his disciple Cuthbert (not to be confused with the saint, Cuthbert, who is mentioned in Bede's work) which relates Bede's death. Bede, in the Historia, gives his birthplace as \"on the lands of this monastery\". He is referring to the twinned monasteries of Monkwearmouth and Jarrow, in modern-day Wearside and Tyneside respectively. There is also a tradition that he was born at Monkton, two miles from the site where the monastery at Jarrow was later built. Bede says nothing of his origins, but his connections with men of noble ancestry suggest that his own family was well-to-do. Bede's first abbot was Benedict Biscop, and the names \"Biscop\" and \"Beda\" both appear in a list of the kings of Lindsey from around 800, further suggesting that Bede came from a noble family.",
"title": "Life"
},
{
"paragraph_id": 5,
"text": "Bede's name reflects West Saxon Bīeda (Anglian Bēda). It is an Old English short name formed on the root of bēodan \"to bid, command\". The name also occurs in the Anglo-Saxon Chronicle, s.a. 501, as Bieda, one of the sons of the Saxon founder of Portsmouth. The Liber Vitae of Durham Cathedral names two priests with this name, one of whom is presumably Bede himself. Some manuscripts of the Life of Cuthbert, one of Bede's works, mention that Cuthbert's own priest was named Bede; it is possible that this priest is the other name listed in the Liber Vitae.",
"title": "Life"
},
{
"paragraph_id": 6,
"text": "At the age of seven, Bede was sent as a puer oblatus to the monastery of Monkwearmouth by his family to be educated by Benedict Biscop and later by Ceolfrith. Bede does not say whether it was already intended at that point that he would be a monk. It was fairly common in Ireland at this time for young boys, particularly those of noble birth, to be fostered out as an oblate; the practice was also likely to have been common among the Germanic peoples in England. Monkwearmouth's sister monastery at Jarrow was founded by Ceolfrith in 682, and Bede probably transferred to Jarrow with Ceolfrith that year.",
"title": "Life"
},
{
"paragraph_id": 7,
"text": "The dedication stone for the church has survived as of 1969; it is dated 23 April 685, and as Bede would have been required to assist with menial tasks in his day-to-day life it is possible that he helped in building the original church. In 686, plague broke out at Jarrow. The Life of Ceolfrith, written in about 710, records that only two surviving monks were capable of singing the full offices; one was Ceolfrith and the other a young boy, who according to the anonymous writer had been taught by Ceolfrith. The two managed to do the entire service of the liturgy until others could be trained. The young boy was almost certainly Bede, who would have been about 14.",
"title": "Life"
},
{
"paragraph_id": 8,
"text": "When Bede was about 17 years old, Adomnán, the abbot of Iona Abbey, visited Monkwearmouth and Jarrow. Bede would probably have met the abbot during this visit, and it may be that Adomnán sparked Bede's interest in the Easter dating controversy. In about 692, in Bede's nineteenth year, Bede was ordained a deacon by his diocesan bishop, John, who was bishop of Hexham. The canonical age for the ordination of a deacon was 25; Bede's early ordination may mean that his abilities were considered exceptional, but it is also possible that the minimum age requirement was often disregarded. There might have been minor orders ranking below a deacon; but there is no record of whether Bede held any of these offices. In Bede's thirtieth year (about 702), he became a priest, with the ordination again performed by Bishop John.",
"title": "Life"
},
{
"paragraph_id": 9,
"text": "In about 701 Bede wrote his first works, the De Arte Metrica and De Schematibus et Tropis; both were intended for use in the classroom. He continued to write for the rest of his life, eventually completing over 60 books, most of which have survived. Not all his output can be easily dated, and Bede may have worked on some texts over a period of many years. His last surviving work is a letter to Ecgbert of York, a former student, written in 734. A 6th-century Greek and Latin manuscript of Acts of the Apostles that is believed to have been used by Bede survives and is now in the Bodleian Library at University of Oxford. It is known as the Codex Laudianus.",
"title": "Life"
},
{
"paragraph_id": 10,
"text": "Bede may have worked on some of the Latin Bibles that were copied at Jarrow, one of which, the Codex Amiatinus, is now held by the Laurentian Library in Florence. Bede was a teacher as well as a writer; he enjoyed music and was said to be accomplished as a singer and as a reciter of poetry in the vernacular. It is possible that he suffered a speech impediment, but this depends on a phrase in the introduction to his verse life of St Cuthbert. Translations of this phrase differ, and it is uncertain whether Bede intended to say that he was cured of a speech problem, or merely that he was inspired by the saint's works.",
"title": "Life"
},
{
"paragraph_id": 11,
"text": "In 708, some monks at Hexham accused Bede of having committed heresy in his work De Temporibus. The standard theological view of world history at the time was known as the Six Ages of the World; in his book, Bede calculated the age of the world for himself, rather than accepting the authority of Isidore of Seville, and came to the conclusion that Christ had been born 3,952 years after the creation of the world, rather than the figure of over 5,000 years that was commonly accepted by theologians. The accusation occurred in front of the bishop of Hexham, Wilfrid, who was present at a feast when some drunken monks made the accusation. Wilfrid did not respond to the accusation, but a monk present relayed the episode to Bede, who replied within a few days to the monk, writing a letter setting forth his defence and asking that the letter also be read to Wilfrid. Bede had another brush with Wilfrid, for the historian says that he met Wilfrid sometime between 706 and 709 and discussed Æthelthryth, the abbess of Ely. Wilfrid had been present at the exhumation of her body in 695, and Bede questioned the bishop about the exact circumstances of the body and asked for more details of her life, as Wilfrid had been her advisor.",
"title": "Life"
},
{
"paragraph_id": 12,
"text": "In 733, Bede travelled to York to visit Ecgbert, who was then bishop of York. The See of York was elevated to an archbishopric in 735, and it is likely that Bede and Ecgbert discussed the proposal for the elevation during his visit. Bede hoped to visit Ecgbert again in 734 but was too ill to make the journey. Bede also travelled to the monastery of Lindisfarne and at some point visited the otherwise unknown monastery of a monk named Wicthed, a visit that is mentioned in a letter to that monk. Because of his widespread correspondence with others throughout the British Isles, and because many of the letters imply that Bede had met his correspondents, it is likely that Bede travelled to some other places, although nothing further about timing or locations can be guessed.",
"title": "Life"
},
{
"paragraph_id": 13,
"text": "It seems certain that he did not visit Rome, however, as he did not mention it in the autobiographical chapter of his Historia Ecclesiastica. Nothhelm, a correspondent of Bede's who assisted him by finding documents for him in Rome, is known to have visited Bede, though the date cannot be determined beyond the fact that it was after Nothhelm's visit to Rome. Except for a few visits to other monasteries, his life was spent in a round of prayer, observance of the monastic discipline and study of the Sacred Scriptures. He was considered the most learned man of his time.",
"title": "Life"
},
{
"paragraph_id": 14,
"text": "Bede died on the Feast of the Ascension, Thursday, 26 May 735, on the floor of his cell, singing \"Glory be to the Father and to the Son and to the Holy Spirit\" and was buried at Jarrow. Cuthbert, a disciple of Bede's, wrote a letter to a Cuthwin (of whom nothing else is known), describing Bede's last days and his death. According to Cuthbert, Bede fell ill, \"with frequent attacks of breathlessness but almost without pain\", before Easter. On the Tuesday, two days before Bede died, his breathing became worse and his feet swelled. He continued to dictate to a scribe, however, and despite spending the night awake in prayer he dictated again the following day.",
"title": "Life"
},
{
"paragraph_id": 15,
"text": "At three o'clock, according to Cuthbert, he asked for a box of his to be brought and distributed among the priests of the monastery \"a few treasures\" of his: \"some pepper, and napkins, and some incense\". That night he dictated a final sentence to the scribe, a boy named Wilberht, and died soon afterwards. The account of Cuthbert does not make entirely clear whether Bede died before midnight or after. However, by the reckoning of Bede's time, passage from the old day to the new occurred at sunset, not midnight, and Cuthbert is clear that he died after sunset. Thus, while his box was brought at three o'clock Wednesday afternoon of 25 May, by the time of the final dictation it was considered 26 May, although it might still have been 25 May in modern usage.",
"title": "Life"
},
{
"paragraph_id": 16,
"text": "Cuthbert's letter also relates a five-line poem in the vernacular that Bede composed on his deathbed, known as \"Bede's Death Song\". It is the most-widely copied Old English poem and appears in 45 manuscripts, but its attribution to Bede is not certain—not all manuscripts name Bede as the author, and the ones that do are of later origin than those that do not. Bede's remains may have been transferred to Durham Cathedral in the 11th century; his tomb there was looted in 1541, but the contents were probably re-interred in the Galilee chapel at the cathedral.",
"title": "Life"
},
{
"paragraph_id": 17,
"text": "One further oddity in his writings is that in one of his works, the Commentary on the Seven Catholic Epistles, he writes in a manner that gives the impression he was married. The section in question is the only one in that work that is written in first-person view. Bede says: \"Prayers are hindered by the conjugal duty because as often as I perform what is due to my wife I am not able to pray.\" Another passage, in the Commentary on Luke, also mentions a wife in the first person: \"Formerly I possessed a wife in the lustful passion of desire and now I possess her in honourable sanctification and true love of Christ.\" The historian Benedicta Ward argues that these passages are Bede employing a rhetorical device.",
"title": "Life"
},
{
"paragraph_id": 18,
"text": "Bede wrote scientific, historical and theological works, reflecting the range of his writings from music and metrics to exegetical Scripture commentaries. He knew patristic literature, as well as Pliny the Elder, Virgil, Lucretius, Ovid, Horace and other classical writers. He knew some Greek. Bede's scriptural commentaries employed the allegorical method of interpretation, and his history includes accounts of miracles, which to modern historians has seemed at odds with his critical approach to the materials in his history. Modern studies have shown the important role such concepts played in the world-view of Early Medieval scholars. Although Bede is mainly studied as a historian now, in his time his works on grammar, chronology, and biblical studies were as important as his historical and hagiographical works. The non-historical works contributed greatly to the Carolingian renaissance. He has been credited with writing a penitential, though his authorship of this work is disputed.",
"title": "Works"
},
{
"paragraph_id": 19,
"text": "Bede's best-known work is the Historia ecclesiastica gentis Anglorum, or An Ecclesiastical History of the English People, completed in about 731. Bede was aided in writing this book by Albinus, abbot of St Augustine's Abbey, Canterbury. The first of the five books begins with some geographical background and then sketches the history of England, beginning with Caesar's invasion in 55 BC. A brief account of Christianity in Roman Britain, including the martyrdom of St Alban, is followed by the story of Augustine's mission to England in 597, which brought Christianity to the Anglo-Saxons.",
"title": "Works"
},
{
"paragraph_id": 20,
"text": "The second book begins with the death of Gregory the Great in 604 and follows the further progress of Christianity in Kent and the first attempts to evangelise Northumbria. These ended in disaster when Penda, the pagan king of Mercia, killed the newly Christian Edwin of Northumbria at the Battle of Hatfield Chase in about 632. The setback was temporary, and the third book recounts the growth of Christianity in Northumbria under kings Oswald of Northumbria and Oswy. The climax of the third book is the account of the Council of Whitby, traditionally seen as a major turning point in English history. The fourth book begins with the consecration of Theodore as Archbishop of Canterbury and recounts Wilfrid's efforts to bring Christianity to the Kingdom of Sussex.",
"title": "Works"
},
{
"paragraph_id": 21,
"text": "The fifth book brings the story up to Bede's day and includes an account of missionary work in Frisia and of the conflict with the British church over the correct dating of Easter. Bede wrote a preface for the work, in which he dedicates it to Ceolwulf, king of Northumbria. The preface mentions that Ceolwulf received an earlier draft of the book; presumably Ceolwulf knew enough Latin to understand it, and he may even have been able to read it. The preface makes it clear that Ceolwulf had requested the earlier copy, and Bede had asked for Ceolwulf's approval; this correspondence with the king indicates that Bede's monastery had connections among the Northumbrian nobility.",
"title": "Works"
},
{
"paragraph_id": 22,
"text": "The monastery at Wearmouth-Jarrow had an excellent library. Both Benedict Biscop and Ceolfrith had acquired books from the Continent, and in Bede's day the monastery was a renowned centre of learning. It has been estimated that there were about 200 books in the monastic library.",
"title": "Works"
},
{
"paragraph_id": 23,
"text": "For the period prior to Augustine's arrival in 597, Bede drew on earlier writers, including Solinus. He had access to two works of Eusebius: the Historia Ecclesiastica, and also the Chronicon, though he had neither in the original Greek; instead he had a Latin translation of the Historia, by Rufinus, and Jerome's translation of the Chronicon. He also knew Orosius's Adversus Paganus, and Gregory of Tours' Historia Francorum, both Christian histories, as well as the work of Eutropius, a pagan historian. He used Constantius's Life of Germanus as a source for Germanus's visits to Britain.",
"title": "Works"
},
{
"paragraph_id": 24,
"text": "Bede's account of the Anglo-Saxon settlement of Britain is drawn largely from Gildas's De Excidio et Conquestu Britanniae. Bede would also have been familiar with more recent accounts such as Stephen of Ripon's Life of Wilfrid, and anonymous Life of Gregory the Great and Life of Cuthbert. He also drew on Josephus's Antiquities, and the works of Cassiodorus, and there was a copy of the Liber Pontificalis in Bede's monastery. Bede quotes from several classical authors, including Cicero, Plautus, and Terence, but he may have had access to their work via a Latin grammar rather than directly. However, it is clear he was familiar with the works of Virgil and with Pliny the Elder's Natural History, and his monastery also owned copies of the works of Dionysius Exiguus.",
"title": "Works"
},
{
"paragraph_id": 25,
"text": "He probably drew his account of Alban from a life of that saint which has not survived. He acknowledges two other lives of saints directly; one is a life of Fursa, and the other of Æthelburh; the latter no longer survives. He also had access to a life of Ceolfrith. Some of Bede's material came from oral traditions, including a description of the physical appearance of Paulinus of York, who had died nearly 90 years before Bede's Historia Ecclesiastica was written.",
"title": "Works"
},
{
"paragraph_id": 26,
"text": "Bede had correspondents who supplied him with material. Albinus, the abbot of the monastery in Canterbury, provided much information about the church in Kent, and with the assistance of Nothhelm, at that time a priest in London, obtained copies of Gregory the Great's correspondence from Rome relating to Augustine's mission. Almost all of Bede's information regarding Augustine is taken from these letters. Bede acknowledged his correspondents in the preface to the Historia Ecclesiastica; he was in contact with Bishop Daniel of Winchester, for information about the history of the church in Wessex and also wrote to the monastery at Lastingham for information about Cedd and Chad. Bede also mentions an Abbot Esi as a source for the affairs of the East Anglian church, and Bishop Cynibert for information about Lindsey.",
"title": "Works"
},
{
"paragraph_id": 27,
"text": "The historian Walter Goffart argues that Bede based the structure of the Historia on three works, using them as the framework around which the three main sections of the work were structured. For the early part of the work, up until the Gregorian mission, Goffart feels that Bede used De excidio. The second section, detailing the Gregorian mission of Augustine of Canterbury was framed on Life of Gregory the Great written at Whitby. The last section, detailing events after the Gregorian mission, Goffart feels was modelled on Life of Wilfrid. Most of Bede's informants for information after Augustine's mission came from the eastern part of Britain, leaving significant gaps in the knowledge of the western areas, which were those areas likely to have a native Briton presence.",
"title": "Works"
},
{
"paragraph_id": 28,
"text": "Bede's stylistic models included some of the same authors from whom he drew the material for the earlier parts of his history. His introduction imitates the work of Orosius, and his title is an echo of Eusebius's Historia Ecclesiastica. Bede also followed Eusebius in taking the Acts of the Apostles as the model for the overall work: where Eusebius used the Acts as the theme for his description of the development of the church, Bede made it the model for his history of the Anglo-Saxon church. Bede quoted his sources at length in his narrative, as Eusebius had done. Bede also appears to have taken quotes directly from his correspondents at times. For example, he almost always uses the terms \"Australes\" and \"Occidentales\" for the South and West Saxons respectively, but in a passage in the first book he uses \"Meridiani\" and \"Occidui\" instead, as perhaps his informant had done. At the end of the work, Bede adds a brief autobiographical note; this was an idea taken from Gregory of Tours' earlier History of the Franks.",
"title": "Works"
},
{
"paragraph_id": 29,
"text": "Bede's work as a hagiographer and his detailed attention to dating were both useful preparations for the task of writing the Historia Ecclesiastica. His interest in computus, the science of calculating the date of Easter, was also useful in the account he gives of the controversy between the British and Anglo-Saxon church over the correct method of obtaining the Easter date.",
"title": "Works"
},
{
"paragraph_id": 30,
"text": "Bede is described by Michael Lapidge as \"without question the most accomplished Latinist produced in these islands in the Anglo-Saxon period\". His Latin has been praised for its clarity, but his style in the Historia Ecclesiastica is not simple. He knew rhetoric and often used figures of speech and rhetorical forms which cannot easily be reproduced in translation, depending as they often do on the connotations of the Latin words. However, unlike contemporaries such as Aldhelm, whose Latin is full of difficulties, Bede's own text is easy to read. In the words of Charles Plummer, one of the best-known editors of the Historia Ecclesiastica, Bede's Latin is \"clear and limpid ... it is very seldom that we have to pause to think of the meaning of a sentence ... Alcuin rightly praises Bede for his unpretending style.\"",
"title": "Works"
},
{
"paragraph_id": 31,
"text": "Bede's primary intention in writing the Historia Ecclesiastica was to show the growth of the united church throughout England. The native Britons, whose Christian church survived the departure of the Romans, earn Bede's ire for refusing to help convert the Anglo-Saxons; by the end of the Historia the English, and their church, are dominant over the Britons. This goal, of showing the movement towards unity, explains Bede's animosity towards the British method of calculating Easter: much of the Historia is devoted to a history of the dispute, including the final resolution at the Synod of Whitby in 664. Bede is also concerned to show the unity of the English, despite the disparate kingdoms that still existed when he was writing. He also wants to instruct the reader by spiritual example and to entertain, and to the latter end he adds stories about many of the places and people about which he wrote.",
"title": "Works"
},
{
"paragraph_id": 32,
"text": "N. J. Higham argues that Bede designed his work to promote his reform agenda to Ceolwulf, the Northumbrian king. Bede painted a highly optimistic picture of the current situation in the Church, as opposed to the more pessimistic picture found in his private letters.",
"title": "Works"
},
{
"paragraph_id": 33,
"text": "Bede's extensive use of miracles can prove difficult for readers who consider him a more or less reliable historian but do not accept the possibility of miracles. Yet both reflect an inseparable integrity and regard for accuracy and truth, expressed in terms both of historical events and of a tradition of Christian faith that continues. Bede, like Gregory the Great whom Bede quotes on the subject in the Historia, felt that faith brought about by miracles was a stepping stone to a higher, truer faith, and that as a result miracles had their place in a work designed to instruct.",
"title": "Works"
},
{
"paragraph_id": 34,
"text": "Bede is somewhat reticent about the career of Wilfrid, a contemporary and one of the most prominent clerics of his day. This may be because Wilfrid's opulent lifestyle was uncongenial to Bede's monastic mind; it may also be that the events of Wilfrid's life, divisive and controversial as they were, simply did not fit with Bede's theme of the progression to a unified and harmonious church.",
"title": "Works"
},
{
"paragraph_id": 35,
"text": "Bede's account of the early migrations of the Angles and Saxons to England omits any mention of a movement of those peoples across the English Channel from Britain to Brittany described by Procopius, who was writing in the sixth century. Frank Stenton describes this omission as \"a scholar's dislike of the indefinite\"; traditional material that could not be dated or used for Bede's didactic purposes had no interest for him.",
"title": "Works"
},
{
"paragraph_id": 36,
"text": "Bede was a Northumbrian, and this tinged his work with a local bias. The sources to which he had access gave him less information about the west of England than for other areas. He says relatively little about the achievements of Mercia and Wessex, omitting, for example, any mention of Boniface, a West Saxon missionary to the continent of some renown and of whom Bede had almost certainly heard, though Bede does discuss Northumbrian missionaries to the continent. He is also parsimonious in his praise for Aldhelm, a West Saxon who had done much to convert the native Britons to the Roman form of Christianity. He lists seven kings of the Anglo-Saxons whom he regards as having held imperium, or overlordship; only one king of Wessex, Ceawlin, is listed as Bretwalda, and none from Mercia, though elsewhere he acknowledges the secular power several of the Mercians held. Historian Robin Fleming states that he was so hostile to Mercia because Northumbria had been diminished by Mercian power that he consulted no Mercian informants and included no stories about its saints.",
"title": "Works"
},
{
"paragraph_id": 37,
"text": "Bede relates the story of Augustine's mission from Rome, and tells how the British clergy refused to assist Augustine in the conversion of the Anglo-Saxons. This, combined with Gildas's negative assessment of the British church at the time of the Anglo-Saxon invasions, led Bede to a very critical view of the native church. However, Bede ignores the fact that at the time of Augustine's mission, the history between the two was one of warfare and conquest, which, in the words of Barbara Yorke, would have naturally \"curbed any missionary impulses towards the Anglo-Saxons from the British clergy.\"",
"title": "Works"
},
{
"paragraph_id": 38,
"text": "At the time Bede wrote the Historia Ecclesiastica, there were two common ways of referring to dates. One was to use indictions, which were 15-year cycles, counting from 312 AD. There were three different varieties of indiction, each starting on a different day of the year. The other approach was to use regnal years—the reigning Roman emperor, for example, or the ruler of whichever kingdom was under discussion. This meant that in discussing conflicts between kingdoms, the date would have to be given in the regnal years of all the kings involved. Bede used both these approaches on occasion but adopted a third method as his main approach to dating: the Anno Domini method invented by Dionysius Exiguus. Although Bede did not invent this method, his adoption of it and his promulgation of it in De Temporum Ratione, his work on chronology, is the main reason it is now so widely used. Bede's Easter table, contained in De Temporum Ratione, was developed from Dionysius Exiguus' Easter table.",
"title": "Works"
},
{
"paragraph_id": 39,
"text": "The Historia Ecclesiastica was copied often in the Middle Ages, and about 160 manuscripts containing it survive. About half of those are located on the European continent, rather than in the British Isles. Most of the 8th- and 9th-century texts of Bede's Historia come from the northern parts of the Carolingian Empire. This total does not include manuscripts with only a part of the work, of which another 100 or so survive. It was printed for the first time between 1474 and 1482, probably at Strasbourg.",
"title": "Works"
},
{
"paragraph_id": 40,
"text": "Modern historians have studied the Historia extensively, and several editions have been produced. For many years, early Anglo-Saxon history was essentially a retelling of the Historia, but recent scholarship has focused as much on what Bede did not write as what he did. The belief that the Historia was the culmination of Bede's works, the aim of all his scholarship, was a belief common among historians in the past but is no longer accepted by most scholars.",
"title": "Works"
},
{
"paragraph_id": 41,
"text": "Modern historians and editors of Bede have been lavish in their praise of his achievement in the Historia Ecclesiastica. Stenton regards it as one of the \"small class of books which transcend all but the most fundamental conditions of time and place\", and regards its quality as dependent on Bede's \"astonishing power of co-ordinating the fragments of information which came to him through tradition, the relation of friends, or documentary evidence ... In an age where little was attempted beyond the registration of fact, he had reached the conception of history.\" Patrick Wormald describes him as \"the first and greatest of England's historians\".",
"title": "Works"
},
{
"paragraph_id": 42,
"text": "The Historia Ecclesiastica has given Bede a high reputation, but his concerns were different from those of a modern writer of history. His focus on the history of the organisation of the English church, and on heresies and the efforts made to root them out, led him to exclude the secular history of kings and kingdoms except where a moral lesson could be drawn or where they illuminated events in the church. Besides the Anglo-Saxon Chronicle, the medieval writers William of Malmesbury, Henry of Huntingdon, and Geoffrey of Monmouth used his works as sources and inspirations. Early modern writers, such as Polydore Vergil and Matthew Parker, the Elizabethan Archbishop of Canterbury, also utilised the Historia, and his works were used by both Protestant and Catholic sides in the wars of religion.",
"title": "Works"
},
{
"paragraph_id": 43,
"text": "Some historians have questioned the reliability of some of Bede's accounts. One historian, Charlotte Behr, thinks that the Historia's account of the arrival of the Germanic invaders in Kent should not be considered to relate what actually happened, but rather relates myths that were current in Kent during Bede's time.",
"title": "Works"
},
{
"paragraph_id": 44,
"text": "It is likely that Bede's work, because it was so widely copied, discouraged others from writing histories and may even have led to the disappearance of manuscripts containing older historical works.",
"title": "Works"
},
{
"paragraph_id": 45,
"text": "As Chapter 66 of his On the Reckoning of Time, in 725 Bede wrote the Greater Chronicle (chronica maiora), which sometimes circulated as a separate work. For recent events the Chronicle, like his Ecclesiastical History, relied upon Gildas, upon a version of the Liber Pontificalis current at least to the papacy of Pope Sergius I (687–701), and other sources. For earlier events he drew on Eusebius's Chronikoi Kanones. The dating of events in the Chronicle is inconsistent with his other works, using the era of creation, the Anno Mundi.",
"title": "Works"
},
{
"paragraph_id": 46,
"text": "His other historical works included lives of the abbots of Wearmouth and Jarrow, as well as verse and prose lives of St Cuthbert, an adaptation of Paulinus of Nola's Life of St Felix, and a translation of the Greek Passion of St Anastasius. He also created a listing of saints, the Martyrology.",
"title": "Works"
},
{
"paragraph_id": 47,
"text": "In his own time, Bede was as well known for his biblical commentaries, and for his exegetical and other theological works. The majority of his writings were of this type and covered the Old Testament and the New Testament. Most survived the Middle Ages, but a few were lost. It was for his theological writings that he earned the title of Doctor Anglorum and why he was declared a saint.",
"title": "Works"
},
{
"paragraph_id": 48,
"text": "Bede synthesised and transmitted the learning from his predecessors, as well as made careful, judicious innovation in knowledge (such as recalculating the age of the Earth—for which he was censured before surviving the heresy accusations and eventually having his views championed by Archbishop Ussher in the sixteenth century—see below) that had theological implications. In order to do this, he learned Greek and attempted to learn Hebrew. He spent time reading and rereading both the Old and the New Testaments. He mentions that he studied from a text of Jerome's Vulgate, which itself was from the Hebrew text.",
"title": "Works"
},
{
"paragraph_id": 49,
"text": "He also studied both the Latin and the Greek Fathers of the Church. In the monastic library at Jarrow were numerous books by theologians, including works by Basil, Cassian, John Chrysostom, Isidore of Seville, Origen, Gregory of Nazianzus, Augustine of Hippo, Jerome, Pope Gregory I, Ambrose of Milan, Cassiodorus, and Cyprian. He used these, in conjunction with the Biblical texts themselves, to write his commentaries and other theological works.",
"title": "Works"
},
{
"paragraph_id": 50,
"text": "He had a Latin translation by Evagrius of Athanasius's Life of Antony and a copy of Sulpicius Severus' Life of St Martin. He also used lesser known writers, such as Fulgentius, Julian of Eclanum, Tyconius, and Prosper of Aquitaine. Bede was the first to refer to Jerome, Augustine, Pope Gregory and Ambrose as the four Latin Fathers of the Church. It is clear from Bede's own comments that he felt his calling was to explain to his students and readers the theology and thoughts of the Church Fathers.",
"title": "Works"
},
{
"paragraph_id": 51,
"text": "Bede also wrote homilies, works written to explain theology used in worship services. He wrote homilies on the major Christian seasons such as Advent, Lent, or Easter, as well as on other subjects such as anniversaries of significant events.",
"title": "Works"
},
{
"paragraph_id": 52,
"text": "Both types of Bede's theological works circulated widely in the Middle Ages. Several of his biblical commentaries were incorporated into the Glossa Ordinaria, an 11th-century collection of biblical commentaries. Some of Bede's homilies were collected by Paul the Deacon, and they were used in that form in the Monastic Office. Boniface used Bede's homilies in his missionary efforts on the continent.",
"title": "Works"
},
{
"paragraph_id": 53,
"text": "Bede sometimes included in his theological books an acknowledgement of the predecessors on whose works he drew. In two cases he left instructions that his marginal notes, which gave the details of his sources, should be preserved by the copyist, and he may have originally added marginal comments about his sources to others of his works. Where he does not specify, it is still possible to identify books to which he must have had access by quotations that he uses. A full catalogue of the library available to Bede in the monastery cannot be reconstructed, but it is possible to tell, for example, that Bede was very familiar with the works of Virgil.",
"title": "Works"
},
{
"paragraph_id": 54,
"text": "There is little evidence that he had access to any other of the pagan Latin writers—he quotes many of these writers, but the quotes are almost always found in the Latin grammars that were common in his day, one or more of which would certainly have been at the monastery. Another difficulty is that manuscripts of early writers were often incomplete: it is apparent that Bede had access to Pliny's Encyclopaedia, for example, but it seems that the version he had was missing book xviii, since he did not quote from it in his De temporum ratione.",
"title": "Works"
},
{
"paragraph_id": 55,
"text": "Bede's works included Commentary on Revelation, Commentary on the Catholic Epistles, Commentary on Acts, Reconsideration on the Books of Acts, On the Gospel of Mark, On the Gospel of Luke, and Homilies on the Gospels. At the time of his death he was working on a translation of the Gospel of John into English. He did this for the last 40 days of his life. When the last passage had been translated he said: \"All is finished.\" The works dealing with the Old Testament included Commentary on Samuel, Commentary on Genesis, Commentaries on Ezra and Nehemiah, On the Temple, On the Tabernacle, Commentaries on Tobit, Commentaries on Proverbs, Commentaries on the Song of Songs, Commentaries on the Canticle of Habakkuk, The works on Ezra, the tabernacle and the temple were especially influenced by Gregory the Great's writings.",
"title": "Works"
},
{
"paragraph_id": 56,
"text": "De temporibus, or On Time, written in about 703, provides an introduction to the principles of Easter computus. This was based on parts of Isidore of Seville's Etymologies, and Bede also included a chronology of the world which was derived from Eusebius, with some revisions based on Jerome's translation of the Bible. In about 723, Bede wrote a longer work on the same subject, On the Reckoning of Time, which was influential throughout the Middle Ages. He also wrote several shorter letters and essays discussing specific aspects of computus.",
"title": "Works"
},
{
"paragraph_id": 57,
"text": "On the Reckoning of Time (De temporum ratione) included an introduction to the traditional ancient and medieval view of the cosmos, including an explanation of how the spherical Earth influenced the changing length of daylight, of how the seasonal motion of the Sun and Moon influenced the changing appearance of the new moon at evening twilight. Bede also records the effect of the moon on tides. He shows that the twice-daily timing of tides is related to the Moon and that the lunar monthly cycle of spring and neap tides is also related to the Moon's position. He goes on to note that the times of tides vary along the same coast and that the water movements cause low tide at one place when there is high tide elsewhere. Since the focus of his book was the computus, Bede gave instructions for computing the date of Easter from the date of the Paschal full moon, for calculating the motion of the Sun and Moon through the zodiac, and for many other calculations related to the calendar. He gives some information about the months of the Anglo-Saxon calendar.",
"title": "Works"
},
{
"paragraph_id": 58,
"text": "Any codex of Bede's Easter table is normally found together with a codex of his De temporum ratione. His Easter table, being an exact extension of Dionysius Exiguus' Paschal table and covering the time interval AD 532–1063, contains a 532-year Paschal cycle based on the so-called classical Alexandrian 19-year lunar cycle, being the close variant of bishop Theophilus' 19-year lunar cycle proposed by Annianus and adopted by bishop Cyril of Alexandria around AD 425. The ultimate similar (but rather different) predecessor of this Metonic 19-year lunar cycle is the one invented by Anatolius around AD 260.",
"title": "Works"
},
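A short aside on the arithmetic behind the 532-year Paschal cycle described above: the paragraph names the 19-year lunar cycle, and in the Julian calendar it combines with the 28-year solar (weekday and leap-year) cycle, a standard computus fact that is assumed here rather than taken from the text; dates of Easter can only repeat after the least common multiple of the two periods. A minimal Python sketch, not any historical algorithm of Bede's:

from math import gcd

# 19-year (Metonic) lunar cycle, named in the paragraph above.
lunar_cycle = 19
# 28-year Julian solar cycle (7 weekdays x 4-year leap pattern);
# assumed background knowledge, not stated in the text.
solar_cycle = 28

# Easter dates repeat only when both cycles realign: lcm(19, 28) = 532.
paschal_cycle = lunar_cycle * solar_cycle // gcd(lunar_cycle, solar_cycle)
assert paschal_cycle == 532

# Bede's table covers AD 532-1063 inclusive, i.e. exactly one full cycle.
assert 1063 - 532 + 1 == paschal_cycle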
{
"paragraph_id": 59,
"text": "For calendric purposes, Bede made a new calculation of the age of the world since the creation, which he dated as 3952 BC. Because of his innovations in computing the age of the world, he was accused of heresy at the table of Bishop Wilfrid, his chronology being contrary to accepted calculations. Once informed of the accusations of these \"lewd rustics,\" Bede refuted them in his Letter to Plegwin.",
"title": "Works"
},
{
"paragraph_id": 60,
"text": "In addition to these works on astronomical timekeeping, he also wrote De natura rerum, or On the Nature of Things, modelled in part after the work of the same title by Isidore of Seville. His works were so influential that late in the ninth century Notker the Stammerer, a monk of the Monastery of St Gall in Switzerland, wrote that \"God, the orderer of natures, who raised the Sun from the East on the fourth day of Creation, in the sixth day of the world has made Bede rise from the West as a new Sun to illuminate the whole Earth\".",
"title": "Works"
},
{
"paragraph_id": 61,
"text": "Bede wrote some works designed to help teach grammar in the abbey school. One of these was De arte metrica, a discussion of the composition of Latin verse, drawing on previous grammarians' work. It was based on Donatus's De pedibus and Servius's De finalibus and used examples from Christian poets as well as Virgil. It became a standard text for the teaching of Latin verse during the next few centuries. Bede dedicated this work to Cuthbert, apparently a student, for he is named \"beloved son\" in the dedication, and Bede says \"I have laboured to educate you in divine letters and ecclesiastical statutes.\" De orthographia is a work on orthography, designed to help a medieval reader of Latin with unfamiliar abbreviations and words from classical Latin works. Although it could serve as a textbook, it appears to have been mainly intended as a reference work. The date of composition for both of these works is unknown.",
"title": "Works"
},
{
"paragraph_id": 62,
"text": "De schematibus et tropis sacrae scripturae discusses the Bible's use of rhetoric. Bede was familiar with pagan authors such as Virgil, but it was not considered appropriate to teach biblical grammar from such texts, and Bede argues for the superiority of Christian texts in understanding Christian literature. Similarly, his text on poetic metre uses only Christian poetry for examples.",
"title": "Works"
},
{
"paragraph_id": 63,
"text": "A number of poems have been attributed to Bede. His poetic output has been systematically surveyed and edited by Michael Lapidge, who concluded that the following works belong to Bede: the Versus de die iudicii (\"verses on the day of Judgement\", found complete in 33 manuscripts and fragmentarily in 10); the metrical Vita Sancti Cudbercti (\"Life of St Cuthbert\"); and two collections of verse mentioned in the Historia ecclesiastica V.24.2. Bede names the first of these collections as \"librum epigrammatum heroico metro siue elegiaco\" (\"a book of epigrams in the heroic or elegiac metre\"), and much of its content has been reconstructed by Lapidge from scattered attestations under the title Liber epigrammatum. The second is named as \"liber hymnorum diuerso metro siue rythmo\" (\"a book of hymns, diverse in metre or rhythm\"); this has been reconstructed by Lapidge as containing ten liturgical hymns, one paraliturgical hymn (for the Feast of St Æthelthryth), and four other hymn-like compositions.",
"title": "Works"
},
{
"paragraph_id": 64,
"text": "According to his disciple Cuthbert, Bede was doctus in nostris carminibus (\"learned in our songs\"). Cuthbert's letter on Bede's death, the Epistola Cuthberti de obitu Bedae, moreover, commonly is understood to indicate that Bede composed a five-line vernacular poem known to modern scholars as Bede's Death Song",
"title": "Works"
},
{
"paragraph_id": 65,
"text": "And he used to repeat that sentence from St Paul \"It is a fearful thing to fall into the hands of the living God,\" and many other verses of Scripture, urging us thereby to awake from the slumber of the soul by thinking in good time of our last hour. And in our own language—for he was familiar with English poetry—speaking of the soul's dread departure from the body:",
"title": "Works"
},
{
"paragraph_id": 66,
"text": "As Opland notes, however, it is not entirely clear that Cuthbert is attributing this text to Bede: most manuscripts of the latter do not use a finite verb to describe Bede's presentation of the song, and the theme was relatively common in Old English and Anglo-Latin literature. The fact that Cuthbert's description places the performance of the Old English poem in the context of a series of quoted passages from Sacred Scripture might be taken as evidence simply that Bede also cited analogous vernacular texts.",
"title": "Works"
},
{
"paragraph_id": 67,
"text": "On the other hand, the inclusion of the Old English text of the poem in Cuthbert's Latin letter, the observation that Bede \"was learned in our song,\" and the fact that Bede composed a Latin poem on the same subject all point to the possibility of his having written it. By citing the poem directly, Cuthbert seems to imply that its particular wording was somehow important, either since it was a vernacular poem endorsed by a scholar who evidently frowned upon secular entertainment or because it is a direct quotation of Bede's last original composition.",
"title": "Works"
},
{
"paragraph_id": 68,
"text": "There is no evidence for cult being paid to Bede in England in the 8th century. One reason for this may be that he died on the feast day of Augustine of Canterbury. Later, when he was venerated in England, he was either commemorated after Augustine on 26 May, or his feast was moved to 27 May. However, he was venerated outside England, mainly through the efforts of Boniface and Alcuin, both of whom promoted the cult on the continent. Boniface wrote repeatedly back to England during his missionary efforts, requesting copies of Bede's theological works.",
"title": "Veneration"
},
{
"paragraph_id": 69,
"text": "Alcuin, who was taught at the school set up in York by Bede's pupil Ecgbert, praised Bede as an example for monks to follow and was instrumental in disseminating Bede's works to all of Alcuin's friends. Bede's cult became prominent in England during the 10th-century revival of monasticism and by the 14th century had spread to many of the cathedrals of England. Wulfstan, Bishop of Worcester was a particular devotee of Bede's, dedicating a church to him in 1062, which was Wulfstan's first undertaking after his consecration as bishop.",
"title": "Veneration"
},
{
"paragraph_id": 70,
"text": "His body was 'translated' (the ecclesiastical term for relocation of relics) from Jarrow to Durham Cathedral around 1020, where it was placed in the same tomb with St Cuthbert. Later Bede's remains were moved to a shrine in the Galilee Chapel at Durham Cathedral in 1370. The shrine was destroyed during the English Reformation, but the bones were reburied in the chapel. In 1831 the bones were dug up and then reburied in a new tomb, which is still there. Other relics were claimed by York, Glastonbury and Fulda.",
"title": "Veneration"
},
{
"paragraph_id": 71,
"text": "His scholarship and importance to Catholicism were recognised in 1899 when the Vatican declared him a Doctor of the Church. He is the only Englishman named a Doctor of the Church. He is also the only Englishman in Dante's Paradise (Paradiso X.130), mentioned among theologians and doctors of the church in the same canto as Isidore of Seville and the Scot Richard of St Victor.",
"title": "Veneration"
},
{
"paragraph_id": 72,
"text": "His feast day was included in the General Roman Calendar in 1899, for celebration on 27 May rather than on his date of death, 26 May, which was then the feast day of St Augustine of Canterbury. He is venerated in the Catholic Church, in the Church of England and in the Episcopal Church (United States) on 25 May, and in the Eastern Orthodox Church, with a feast day on 27 May (Βεδέα του Ομολογητού).",
"title": "Veneration"
},
{
"paragraph_id": 73,
"text": "Bede became known as Venerable Bede (Latin: Beda Venerabilis) by the 9th century because of his holiness, but this was not linked to consideration for sainthood by the Catholic Church. According to a legend, the epithet was miraculously supplied by angels, thus completing his unfinished epitaph. It is first utilised in connection with Bede in the 9th century, where Bede was grouped with others who were called \"venerable\" at two ecclesiastical councils held at Aachen in 816 and 836. Paul the Deacon then referred to him as venerable consistently. By the 11th and 12th century, it had become commonplace.",
"title": "Veneration"
},
{
"paragraph_id": 74,
"text": "Bede's reputation as a historian, based mostly on the Historia Ecclesiastica, remains strong. Thomas Carlyle called him \"the greatest historical writer since Herodotus\". Walter Goffart says of Bede that he \"holds a privileged and unrivalled place among first historians of Christian Europe\". He is patron of Beda College in Rome which prepares older men for the Roman Catholic priesthood. His life and work have been celebrated with the annual Jarrow Lecture, held at St Paul's Church, Jarrow, since 1958.",
"title": "Veneration"
},
{
"paragraph_id": 75,
"text": "Bede has been described as a progressive scholar, who made Latin and Greek teachings accessible to his fellow Anglo-Saxons.",
"title": "Veneration"
},
{
"paragraph_id": 76,
"text": "Jarrow Hall (formerly Bede's World), in Jarrow, is a museum that celebrates the history of Bede and other parts of English heritage, on the site where he lived.",
"title": "Veneration"
},
{
"paragraph_id": 77,
"text": "Bede Metro station, part of the Tyne and Wear Metro light rail network, is named after him.",
"title": "Veneration"
}
] | Bede, also known as Saint Bede, The Venerable Bede, and Bede the Venerable, was an English monk and an author and scholar. He was one of the greatest teachers and writers during the Early Middle Ages, and his most famous work, Ecclesiastical History of the English People, gained him the title "The Father of English History". He served at the monastery of St Peter and its companion monastery of St Paul in the Kingdom of Northumbria of the Angles. Born on lands belonging to the twin monastery of Monkwearmouth–Jarrow in present-day Tyne and Wear, England, Bede was sent to Monkwearmouth at the age of seven and later joined Abbot Ceolfrith at Jarrow. Both of them survived a plague that struck in 686 and killed a majority of the population there. While Bede spent most of his life in the monastery, he travelled to several abbeys and monasteries across the British Isles, even visiting the archbishop of York and King Ceolwulf of Northumbria. His ecumenical writings were extensive and included a number of Biblical commentaries and other theological works of exegetical erudition. Another important area of study for Bede was the academic discipline of computus, otherwise known to his contemporaries as the science of calculating calendar dates. One of the more important dates Bede tried to compute was Easter, an effort that was mired in controversy. He also helped popularize the practice of dating forward from the birth of Christ, a practice which eventually became commonplace in medieval Europe. He is considered by many historians to be the most important scholar of antiquity for the period between the death of Pope Gregory I in 604 and the coronation of Charlemagne in 800. In 1899, Pope Leo XIII declared him a Doctor of the Church. He is the only native of Great Britain to achieve this designation. Bede was moreover a skilled linguist and translator, and his work made the Latin and Greek writings of the early Church Fathers much more accessible to his fellow Anglo-Saxons, which contributed significantly to English Christianity. Bede's monastery had access to an impressive library which included works by Eusebius, Orosius, and many others. | 2001-08-13T13:44:22Z | 2023-12-29T15:47:49Z | [
"Template:Redirect",
"Template:Use British English",
"Template:Sfn",
"Template:Library resources box",
"Template:Bots",
"Template:Efn",
"Template:Blockquote",
"Template:Cite web",
"Template:Internet Archive author",
"Template:Authority control",
"Template:Short description",
"Template:Infobox saint",
"Template:IPA-ang",
"Template:Spnd",
"Template:Sic",
"Template:Notelist",
"Template:DNB Cite",
"Template:Cite ODNB",
"Template:Use dmy dates",
"Template:Portal-inline",
"Template:Harvnb",
"Template:Cite encyclopedia",
"Template:Sister project links",
"Template:IPAc-en",
"Template:Circa",
"Template:Reflist",
"Template:Cite book",
"Template:Refbegin",
"Template:Subject bar",
"Template:Good article",
"Template:As of",
"Template:Main",
"Template:Lang",
"Template:Refend",
"Template:Librivox author",
"Template:Navboxes",
"Template:Lang-ang",
"Template:ISBN",
"Template:Cite news",
"Template:Gutenberg author",
"Template:Bede",
"Template:Lang-la",
"Template:Cite journal"
] | https://en.wikipedia.org/wiki/Bede |
4,045 | Bubble tea | Bubble tea (also known as pearl milk tea, bubble milk tea, tapioca milk tea, boba tea, or boba; Chinese: 珍珠奶茶; pinyin: zhēnzhū nǎichá, 波霸奶茶; bōbà nǎichá) is a tea-based drink that originated in Taiwan in the early 1980s. Taiwanese immigrants brought it to the United States in the 1990s, initially in California through regions including Los Angeles County, but the drink has also spread to other countries where there is a large East Asian diaspora population.
Bubble tea most commonly consists of tea accompanied by chewy tapioca balls ("boba" or "pearls"), but it can be made with other toppings as well, such as grass jelly, aloe vera, red bean, and popping boba. It has many varieties and flavors, but the two most popular varieties are pearl black milk tea and pearl green milk tea ("pearl" for the tapioca balls at the bottom).
Bubble teas fall under two categories: teas without milk and milk teas. Both varieties come with a choice of black, green, or oolong tea as the base. Milk teas usually include powdered or fresh milk, but may also use condensed milk, almond milk, soy milk, or coconut milk.
The oldest known bubble tea drink consisted of a mixture of hot Taiwanese black tea, tapioca pearls (Chinese: 粉圓; pinyin: fěn yuán; Pe̍h-ōe-jī: hún-îⁿ), condensed milk, and syrup (Chinese: 糖漿; pinyin: táng jiāng) or honey. Nowadays, bubble tea is most commonly served cold. The tapioca pearls that give bubble tea its name were originally made from the starch of the cassava, a tropical shrub known for its starchy roots which was introduced to Taiwan from South America during Japanese colonial rule. Larger pearls (Chinese: 波霸/黑珍珠; pinyin: bō bà/hēi zhēn zhū) quickly replaced these.
Today, there are some cafés that specialize in bubble tea production. While some cafés may serve bubble tea in a glass, most Taiwanese bubble tea shops serve the drink in a plastic cup and use a machine to seal the top of the cup with heated plastic cellophane. The method allows the tea to be shaken in the serving cup and makes it spill-free until a person is ready to drink it. The cellophane is then pierced with an oversized straw, now referred to as a boba straw, which is larger than a typical drinking straw to allow the toppings to pass through.
Due to its popularity, bubble tea has inspired a variety of bubble tea flavored snacks, such as bubble tea ice cream and bubble tea candy. The market size of bubble tea was valued at $2.4 billion in 2022 and is projected to reach $4.3 billion by the end of 2027. Some of the largest global bubble tea chains include Chatime, CoCo Fresh Tea & Juice and Gong Cha.
Bubble tea comes in many variations, which usually use black tea, green tea, oolong tea, and sometimes white tea as the base. Another variation, yuenyeung (Chinese: 鴛鴦; named after the mandarin duck), originated in Hong Kong and consists of black tea, coffee, and milk.
Other varieties of the drink include blended tea drinks. These variations are often either blended using ice cream, or are smoothies that contain both tea and fruit. Boba ice cream bars have also been produced.
There are many popular flavours of bubble tea, such as taro, mango, coffee, and coconut. Flavouring ingredients such as syrups or powders determine the flavour and usually the colour of the bubble tea, while other ingredients such as tea, milk and boba form the base.
Tapioca pearls (boba) are the most common ingredient, although there are other ways to make the chewy spheres found in bubble tea. The pearls vary in color according to the ingredients mixed in with the tapioca. Most pearls are black from brown sugar.
Jelly comes in different shapes: small cubes, stars, or rectangular strips, and flavors such as coconut jelly, konjac, lychee, grass jelly, mango, coffee and green tea. Azuki bean or mung bean paste, typical toppings for Taiwanese shaved ice desserts, give bubble tea an added subtle flavor as well as texture. Aloe, egg pudding (custard), and sago also can be found in many bubble tea shops. Popping boba, or spheres that have fruit juices or syrups inside them, are another popular bubble tea topping. Flavors include mango, strawberry, coconut, kiwi and honey melon.
Some shops offer milk or cheese foam on top of the drink, giving the drink a consistency similar to that of whipped cream, and a saltier flavor profile. One shop described the effect of the cheese foam as "neutraliz[ing] the bitterness of the tea...and as you drink it you taste the returning sweetness of the tea".
Bubble tea shops often give customers the option of choosing the amount of ice or sugar in their drink. Sugar and ice levels are usually specified ordinally (e.g. no ice, less ice, normal ice, more ice), corresponding to 25% increments (0%, 25%, 50%, 75%, 100%).
In Southeast Asia, bubble tea is usually packaged in a plastic takeaway cup, sealed with plastic or a rounded cap. New entrants into the market have attempted to distinguish their products by packaging them in bottles and other shapes. Some have used sealed plastic bags. Nevertheless, the plastic takeaway cup with a sealed cap is still the most common packaging method.
The traditional way of bubble tea preparation is to mix the ingredients (sugar, powders and other flavorings) together using a bubble tea shaker cup, by hand.
Many present-day bubble tea shops use a bubble tea shaker machine. This eliminates the need for humans to shake the bubble tea by hand. It also reduces staffing needs as multiple cups of bubble tea may be prepared by a single barista.
One bubble tea shop in Taiwan, named Jhu Dong Auto Tea, makes bubble tea entirely without manual work. All stages of the bubble tea sales process, from ordering, to making, to collection, are fully automated.
Milk and sugar have been added to tea in Taiwan since the Dutch colonization of Taiwan in 1624–1662.
There are two competing stories for the discovery of bubble tea. One is associated with the Chun Shui Tang tea room (春水堂人文茶館) in Taichung. Its founder, Liu Han-Chieh, began serving Chinese tea cold after he observed coffee was served cold in Japan while on a visit in the 1980s. The new style of serving tea propelled his business, and multiple chains serving this tea were established. The company's product development manager, Lin Hsiu Hui, said she created the first bubble tea in 1988 when she poured tapioca balls into her tea during a staff meeting and encouraged others to drink it. The beverage was well received by everyone at the meeting, leading to its inclusion on the menu. It ultimately became the franchise's top-selling product.
Another claim for the invention of bubble tea comes from the Hanlin Tea Room (翰林茶館) in Tainan. It claims that bubble tea was invented in 1986 when teahouse owner Tu Tsong-he was inspired by white tapioca balls he saw in the local market of Ah-bó-liâu (鴨母寮, or Yāmǔliáo in Mandarin). He later made tea using these traditional Taiwanese snacks. This resulted in what is known as "pearl tea".
On 29 January 2023, Google celebrated Bubble Tea with a doodle.
In the 1990s, bubble tea spread all over East and Southeast Asia with ever-growing popularity. In regions like Hong Kong, Mainland China, Japan, Vietnam, and Singapore, the bubble tea trend expanded rapidly among young people. In some popular shops, people would line up for more than thirty minutes to get a cup of the drink. In recent years, the popularity of bubble tea has gone beyond the beverage itself, with boba lovers inventing various bubble tea-flavored foods, including ice cream, pizza, toast, sushi, and ramen.
In Taiwan, bubble tea has become not just a beverage, but an enduring icon of the culture and food history for the nation. In 2020, the date April 30 was officially declared as National Bubble Tea Day in Taiwan. That same year, the image of bubble tea was proposed as an alternative cover design for Taiwan's passport. According to Al Jazeera, bubble tea has become synonymous with Taiwan and is an important symbol of Taiwanese identity both domestically and internationally. Bubble tea is used to represent Taiwan in the context of the Milk Tea Alliance.
Hong Kong is famous for its traditional Hong Kong-style milk tea, which is made with brewed black tea and evaporated milk. While milk tea has long become integrated into people's daily life, the expansion of Taiwanese bubble tea chains, including Tiger Sugar, Youiccha, and Xing Fu Tang, into Hong Kong created a new wave for “boba tea”.
Since the idea of adding tapioca pearls into milk tea was introduced into China in the 1990s, bubble tea has increased in popularity. A 2020 estimate put consumption of bubble tea in recent years at five times that of coffee. According to data from QianZhen Industry Research Institute, the value of the tea-related beverage market in China reached 53.7 billion yuan (about $7.63 billion) in 2018. In 2019, annual sales from bubble tea shops reached as high as 140.5 billion RMB (roughly US$20 billion). While bubble tea chains from Taiwan (e.g., Gong Cha and Coco) are still popular, local brands, like Yi Dian Dian, Nayuki, and Hey Tea, now dominate the market.
In China, young people's growing obsession with bubble tea has shaped the way they interact socially. Buying someone a cup of bubble tea has become a new way of informally thanking someone. It is also a favored topic among friends and on social media.
Bubble tea first entered Japan by the late 1990s, but it failed to leave a lasting impression on the public. It was not until the 2010s that the bubble tea trend finally swept Japan. Shops from Taiwan, Korea, and China, as well as local brands, began to pop up in cities, and bubble tea has remained one of the hottest trends since then. Bubble tea has become so commonplace among teenagers that teenage girls in Japan invented slang for it: tapiru (タピる). The word is short for drinking tapioca tea in Japanese, and it won first place in a survey of "Japanese slang for middle school girls" in 2018. A bubble tea theme park opened for a limited time in 2019 in Harajuku, Tokyo.
Known locally in Chinese as 泡泡茶 (Pinyin: pào pào chá), bubble tea is loved by many in Singapore. The drink was sold in Singapore as early as 1992 and became phenomenally popular among young people in 2001. This soon ended because of the intense competition and price wars among shops. As a result, most bubble tea shops closed and bubble tea lost its popularity by 2003. When Taiwanese chains like Koi and Gong Cha came to Singapore in 2007 and 2009, the beverage experienced only short resurgences in popularity. In 2018, the interest in bubble tea rose again at an unprecedented speed in Singapore, as new brands like The Alley and Tiger Sugar entered the market; social media also played an important role in driving this renaissance of bubble tea.
In the 1990s, Taiwanese immigrants began to introduce bubble tea in Taiwanese restaurants in California. Some of the first stand-alone bubble tea shops can be traced to a food court in Arcadia, in Southern California, and Fantasia Coffee & Tea in Cupertino, in Northern California. Chains like Tapioca Express, Quickly, Lollicup and Q-Cup emerged in the late 1990s and early 2000s, bringing the Taiwanese bubble tea trend to the US. Within the Asian American community, bubble tea is commonly known by its colloquial name "boba".
As the beverage gained popularity in the US, it gradually became more than a drink: it became a marker of cultural identity for Asian Americans. This phenomenon was referred to as “boba life” by Chinese-American brothers Andrew and David Fung in their music video, “Bobalife,” released in 2013. Boba symbolizes a subculture with which Asian Americans, as a social minority, could identify, and “boba life” is a reflection of their desire for both cultural and political recognition. The word is also used disparagingly in the term boba liberal.
Other regions with large concentrations of bubble tea restaurants in the United States are the Northeast and Southwest. This is reflected in the coffeehouse-style teahouse chains that originate from these regions, such as Boba Tea Company from Albuquerque, New Mexico, No. 1 Boba Tea in Las Vegas, Nevada, and Kung Fu Tea from New York City. Albuquerque and Las Vegas have large concentrations of boba tea restaurants, as the drink is especially popular among the Hispano, Navajo, Pueblo, and other Native American, Hispanic and Latino American communities in the Southwest.
A massive shipping and supply chain crisis on the U.S. West coast, coupled with the obstruction of the Suez Canal in March 2021, caused a shortage of tapioca pearls for bubble tea shops in the U.S. and Canada. Most of the tapioca consumed in the U.S. is imported from Asia, since the critical ingredient, tapioca starch, is mostly grown in Asia.
TikTok trends and the Korean Wave also fueled the popularity of bubble tea in the United States.
Individual bubble tea shops began to appear in Australia in the 1990s, along with other regional drinks like Eis Cendol. Chains of stores were established as early as 2002, when the Bubble Cup franchise opened its first store in Melbourne. Although originally associated with the rapid growth of immigration from Asia and the vast cohort of tertiary students from Asia, bubble tea has become popular across many communities in Melbourne and Sydney. Many suburban shopping centres have a branch of a bubble tea franchise.
The first bubble tea shop in Mauritius opened in late 2012, and since then there have been bubble tea shops in most shopping malls on the island. Bubble tea shops have become a popular place for teenagers to hang out.
In July 2019, Singapore's Mount Alvernia Hospital warned against the high sugar content of bubble tea, since the drink had become extremely popular in Singapore. While it acknowledged the benefits of drinking green tea and black tea in reducing the risk of cardiovascular disease, diabetes, arthritis and cancer, the hospital cautioned that the addition of other ingredients like non-dairy creamer and toppings could raise the fat and sugar content of the tea and increase the risk of chronic diseases. Non-dairy creamer is a milk substitute that contains trans fat in the form of hydrogenated palm oil. The hospital warned that this oil has been strongly correlated with an increased risk of heart disease and stroke.
The other concern about bubble tea is its high calorie content, partially attributed to the high-carbohydrate tapioca pearls (or 珍珠 zhēn zhū), which can account for up to half of the calorie count in a 500 ml serving of bubble tea. | [
{
"paragraph_id": 0,
"text": "Bubble tea (also known as pearl milk tea, bubble milk tea, tapioca milk tea, boba tea, or boba; Chinese: 珍珠奶茶; pinyin: zhēnzhū nǎichá, 波霸奶茶; bōbà nǎichá) is a tea-based drink that originated in Taiwan in the early 1980s. Taiwanese immigrants brought it to the United States in the 1990s, initially in California through regions including Los Angeles County, but the drink has also spread to other countries where there is a large East Asian diaspora population.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Bubble tea most commonly consists of tea accompanied by chewy tapioca balls (\"boba\" or \"pearls\"), but it can be made with other toppings as well, such as grass jelly, aloe vera, red bean, and popping boba. It has many varieties and flavors, but the two most popular varieties are pearl black milk tea and pearl green milk tea (\"pearl\" for the tapioca balls at the bottom).",
"title": ""
},
{
"paragraph_id": 2,
"text": "Bubble teas fall under two categories: teas without milk and milk teas. Both varieties come with a choice of black, green, or oolong tea as the base. Milk teas usually include powdered or fresh milk, but may also use condensed milk, almond milk, soy milk, or coconut milk.",
"title": "Description"
},
{
"paragraph_id": 3,
"text": "The oldest known bubble tea drink consisted of a mixture of hot Taiwanese black tea, tapioca pearls (Chinese: 粉圓; pinyin: fěn yuán; Pe̍h-ōe-jī: hún-îⁿ), condensed milk, and syrup (Chinese: 糖漿; pinyin: táng jiāng) or honey. Nowadays, bubble tea is most commonly served cold. The tapioca pearls that give bubble tea its name were originally made from the starch of the cassava, a tropical shrub known for its starchy roots which was introduced to Taiwan from South America during Japanese colonial rule. Larger pearls (Chinese: 波霸/黑珍珠; pinyin: bō bà/hēi zhēn zhū) quickly replaced these.",
"title": "Description"
},
{
"paragraph_id": 4,
"text": "Today, there are some cafés that specialize in bubble tea production. While some cafés may serve bubble tea in a glass, most Taiwanese bubble tea shops serve the drink in a plastic cup and use a machine to seal the top of the cup with heated plastic cellophane. The method allows the tea to be shaken in the serving cup and makes it spill-free until a person is ready to drink it. The cellophane is then pierced with an oversized straw, now referred to as a boba straw, which is larger than a typical drinking straw to allow the toppings to pass through.",
"title": "Description"
},
{
"paragraph_id": 5,
"text": "Due to its popularity, bubble tea has inspired a variety of bubble tea flavored snacks, such as bubble tea ice cream and bubble tea candy. The market size of bubble tea was valued at $2.4 billion in 2022 and is projected to reach $4.3 billion by the end of 2027. Some of the largest global bubble tea chains include Chatime, CoCo Fresh Tea & Juice and Gong Cha.",
"title": "Description"
},
{
"paragraph_id": 6,
"text": "Bubble tea comes in many variations which usually consist of black tea, green tea, oolong tea, and sometimes white tea. Another variation, yuenyeung, (Chinese: 鴛鴦, named after the Mandarin duck) originated in Hong Kong and consists of black tea, coffee, and milk.",
"title": "Description"
},
{
"paragraph_id": 7,
"text": "Other varieties of the drink include blended tea drinks. These variations are often either blended using ice cream, or are smoothies that contain both tea and fruit. Boba ice cream bars have also been produced.",
"title": "Description"
},
{
"paragraph_id": 8,
"text": "There are many popular flavours of bubble tea, such as taro, mango, coffee, and coconut. Flavouring ingredients such as a syrup or powder determines the flavour and usually the colour of the bubble tea, while other ingredients such as tea, milk and boba are the basis.",
"title": "Description"
},
{
"paragraph_id": 9,
"text": "Tapioca pearls (boba) are the most common ingredient, although there are other ways to make the chewy spheres found in bubble tea. The pearls vary in color according to the ingredients mixed in with the tapioca. Most pearls are black from brown sugar.",
"title": "Description"
},
{
"paragraph_id": 10,
"text": "Jelly comes in different shapes: small cubes, stars, or rectangular strips, and flavors such as coconut jelly, konjac, lychee, grass jelly, mango, coffee and green tea. Azuki bean or mung bean paste, typical toppings for Taiwanese shaved ice desserts, give bubble tea an added subtle flavor as well as texture. Aloe, egg pudding (custard), and sago also can be found in many bubble tea shops. Popping boba, or spheres that have fruit juices or syrups inside them, are another popular bubble tea topping. Flavors include mango, strawberry, coconut, kiwi and honey melon.",
"title": "Description"
},
{
"paragraph_id": 11,
"text": "Some shops offer milk or cheese foam on top of the drink, giving the drink a consistency similar to that of whipped cream, and a saltier flavor profile. One shop described the effect of the cheese foam as \"neutraliz[ing] the bitterness of the tea...and as you drink it you taste the returning sweetness of the tea\".",
"title": "Description"
},
{
"paragraph_id": 12,
"text": "Bubble tea shops often give customers the option of choosing the amount of ice or sugar in their drink. Sugar and ice levels are usually specified ordinally (e.g. no ice, less ice, normal ice, more ice), corresponding to quarterly intervals (0%, 25%, 50%, 75%, 100%).",
"title": "Description"
},
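As a small illustration of the ordinal-to-percentage scheme in the paragraph above, the 25% increments can be validated programmatically. A minimal Python sketch; the order_drink helper and its parameters are hypothetical, invented for illustration, and not any shop's actual ordering system:

# Allowed levels per the 25% increments described above: 0, 25, 50, 75, 100.
VALID_LEVELS = range(0, 101, 25)

def order_drink(sugar: int, ice: int) -> str:
    # Hypothetical order helper: checks both levels against the scheme.
    if sugar not in VALID_LEVELS or ice not in VALID_LEVELS:
        raise ValueError("levels must be 0, 25, 50, 75 or 100 percent")
    return f"bubble tea, {sugar}% sugar, {ice}% ice"

print(order_drink(50, 25))  # bubble tea, 50% sugar, 25% ice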
{
"paragraph_id": 13,
"text": "In Southeast Asia, bubble tea is usually packaged in a plastic takeaway cup, sealed with plastic or a rounded cap. New entrants into the market have attempted to distinguish their products by packaging it in bottles and other shapes. Some have used sealed plastic bags. Nevertheless, the plastic takeaway cup with a sealed cap is still the most common packaging method.",
"title": "Description"
},
{
"paragraph_id": 14,
"text": "The traditional way of bubble tea preparation is to mix the ingredients (sugar, powders and other flavorings) together using a bubble tea shaker cup, by hand.",
"title": "Description"
},
{
"paragraph_id": 15,
"text": "Many present-day bubble tea shops use a bubble tea shaker machine. This eliminates the need for humans to shake the bubble tea by hand. It also reduces staffing needs as multiple cups of bubble tea may be prepared by a single barista.",
"title": "Description"
},
{
"paragraph_id": 16,
"text": "One bubble tea shop in Taiwan, named Jhu Dong Auto Tea, makes bubble tea entirely without manual work. All stages of the bubble tea sales process, from ordering, to making, to collection, are fully automated.",
"title": "Description"
},
{
"paragraph_id": 17,
"text": "Milk and sugar have been added to tea in Taiwan since the Dutch colonization of Taiwan in 1624–1662.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "There are two competing stories for the discovery of bubble tea. One is associated with the Chun Shui Tang tea room (春水堂人文茶館) in Taichung. Its founder, Liu Han-Chieh, began serving Chinese tea cold after he observed coffee was served cold in Japan while on a visit in the 1980s. The new style of serving tea propelled his business, and multiple chains serving this tea were established. The company's product development manager, Lin Hsiu Hui, said she created the first bubble tea in 1988 when she poured tapioca balls into her tea during a staff meeting and encouraged others to drink it. The beverage was well received by everyone at the meeting, leading to its inclusion on the menu. It ultimately became the franchise's top-selling product.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Another claim for the invention of bubble tea comes from the Hanlin Tea Room (翰林茶館) in Tainan. It claims that bubble tea was invented in 1986 when teahouse owner Tu Tsong-he was inspired by white tapioca balls he saw in the local market of Ah-bó-liâu (鴨母寮, or Yāmǔliáo in Mandarin). He later made tea using these traditional Taiwanese snacks. This resulted in what is known as \"pearl tea\".",
"title": "History"
},
{
"paragraph_id": 20,
"text": "On 29 January 2023, Google celebrated Bubble Tea with a doodle.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In the 1990s, bubble tea spread all over East and Southeast Asia with its ever-growing popularity. In regions like Hong Kong, Mainland China, Japan, Vietnam, and Singapore, the bubble tea trend expanded rapidly among young people. In some popular shops, people would line up for more than thirty minutes to get a cup of the drink. In recent years, the popularity of bubble tea has gone beyond the beverage itself, with boba lovers inventing various bubble tea flavored-foods, including ice cream, pizza, toast, sushi, and ramen.",
"title": "Popularity"
},
{
"paragraph_id": 22,
"text": "In Taiwan, bubble tea has become not just a beverage, but an enduring icon of the culture and food history for the nation. In 2020, the date April 30 was officially declared as National Bubble Tea Day in Taiwan. That same year, the image of bubble tea was proposed as an alternative cover design for Taiwan's passport. According to Al Jazeera, bubble tea has become synonymous with Taiwan and is an important symbol of Taiwanese identity both domestically and internationally. Bubble tea is used to represent Taiwan in the context of the Milk Tea Alliance.",
"title": "Popularity"
},
{
"paragraph_id": 23,
"text": "Hong Kong is famous for its traditional Hong Kong-style milk tea, which is made with brewed black tea and evaporated milk. While milk tea has long become integrated into people's daily life, the expansion of Taiwanese bubble tea chains, including Tiger Sugar, Youiccha, and Xing Fu Tang, into Hong Kong created a new wave for “boba tea”.",
"title": "Popularity"
},
{
"paragraph_id": 24,
"text": "Since the idea of adding tapioca pearls into milk tea was introduced into China in the 1990s, bubble tea has increased in popularity. In 2020 it was estimated that the consumption of bubble tea was 5 times that of coffee in recent years. According to data from QianZhen Industry Research Institute, the value of the tea-related beverage market in China reached 53.7 billion yuan (about $7.63 billion) in 2018. In 2019, annual sales from bubble tea shops reached as high as 140.5 billion RMB (roughly US$20 billion). While bubble tea chains from Taiwan (e.g., Gong Cha and Coco) are still popular, more local brands, like Yi Dian Dian, Nayuki, Hey Tea, etc., are now dominating the market.",
"title": "Popularity"
},
{
"paragraph_id": 25,
"text": "In China, young people's growing obsession with bubble tea shaped their way of social interaction. Buying someone a cup of bubble tea has become a new way of informally thanking someone. It is also a favored topic among friends and on social media.",
"title": "Popularity"
},
{
"paragraph_id": 26,
"text": "Bubble tea first entered Japan by the late 1990s, but it failed to leave a lasting impression on the public markets. It was not until the 2010s when the bubble tea trend finally swept Japan. Shops from Taiwan, Korea, and China, as well as local brands, began to pop up in cities, and bubble tea has remained one of the hottest trends since then. Bubble tea has become so commonplace among teenagers that teenage girls in Japan invented slang for it: tapiru (タピる). The word is short for drinking tapioca tea in Japanese, and it won first place in a survey of \"Japanese slang for middle school girls\" in 2018. A bubble tea theme park was open for a limited time in 2019 in Harajuku, Tokyo.",
"title": "Popularity"
},
{
"paragraph_id": 27,
"text": "Known locally in Chinese as 泡泡茶 (Pinyin: pào pào chá), bubble tea is loved by many in Singapore. The drink was sold in Singapore as early as 1992 and became phenomenally popular among young people in 2001. This soon ended because of the intense competition and price wars among shops. As a result, most bubble tea shops closed and bubble tea lost its popularity by 2003. When Taiwanese chains like Koi and Gong Cha came to Singapore in 2007 and 2009, the beverage experienced only short resurgences in popularity. In 2018, the interest in bubble tea rose again at an unprecedented speed in Singapore, as new brands like The Alley and Tiger Sugar entered the market; social media also played an important role in driving this renaissance of bubble tea.",
"title": "Popularity"
},
{
"paragraph_id": 28,
"text": "In the 1990s, Taiwanese immigrants began to introduce bubble tea in Taiwanese restaurants in California. Some of the first stand-alone bubble tea shops can be traced to a food court in Arcadia, in Southern California, and Fantasia Coffee & Tea in Cupertino, in Northern California. Chains like Tapioca Express, Quickly, Lollicup and Q-Cup emerged in the late 1990s and early 2000s, bringing the Taiwanese bubble tea trend to the US. Within the Asian American community, bubble tea is commonly known under its colloquial term \"boba\".",
"title": "Popularity"
},
{
"paragraph_id": 29,
"text": "As the beverage gained popularity in the US, it gradually became more than a drink, but a cultural identity for Asian Americans. This phenomenon was referred to as “boba life” by Chinese-American brothers Andrew and David Fung in their music video, “Bobalife,” released in 2013. Boba symbolizes a subculture that Asian Americans as social minorities could define themselves as, and “boba life” is a reflection of their desire for both cultural and political recognition. It is also used disparagingly in the term boba liberal.",
"title": "Popularity"
},
{
"paragraph_id": 30,
"text": "Other regions with large concentrations of bubble tea restaurants in the United States are the Northeast and Southwest. This is reflected in the coffeehouse-style teahouse chains that originate from the regions, such as Boba Tea Company from Albuquerque, New Mexico, No. 1 Boba Tea in Las Vegas, Nevada, and Kung Fu Tea from New York City. Albuquerque and Las Vegas have a large concentrations of boba tea restaurants, as the drink is popular especially among the Hispano, Navajo, Pueblo, and other Native American, Hispanic and Latino American communities in the Southwest.",
"title": "Popularity"
},
{
"paragraph_id": 31,
"text": "A massive shipping and supply chain crisis on the U.S. West coast, coupled with the obstruction of the Suez Canal in March 2021, caused a shortage of tapioca pearls for bubble tea shops in the U.S. and Canada. Most of the tapioca consumed in the U.S. is imported from Asia, since the critical ingredient, tapioca starch, is mostly grown in Asia.",
"title": "Popularity"
},
{
"paragraph_id": 32,
"text": "TikTok trends and the Korean Wave also fueled the popularity of bubble tea in the United States.",
"title": "Popularity"
},
{
"paragraph_id": 33,
"text": "Individual bubble tea shops began to appear in Australia in the 1990s, along with other regional drinks like Eis Cendol. Chains of stores were established as early as 2002, when the Bubble Cup franchise opened its first store in Melbourne. Although originally associated with the rapid growth of immigration from Asia and the vast tertiary student cohort from Asia, in Melbourne and Sydney bubble tea has become popular across many communities. Many suburban shopping centres have a branch of a bubble tea franchise.",
"title": "Popularity"
},
{
"paragraph_id": 34,
"text": "The first bubble tea shop in Mauritius opened in late 2012, and since then there have been bubble tea shops in most shopping malls on the island. Bubble tea shops have become a popular place for teenagers to hang out.",
"title": "Popularity"
},
{
"paragraph_id": 35,
"text": "In July 2019, Singapore's Mount Alvernia Hospital warned against the high sugar content of bubble tea since the drink had become extremely popular in Singapore. While it acknowledged the benefits of drinking green tea and black tea in reducing risk of cardiovascular disease, diabetes, arthritis and cancer, respectively, the hospital cautions that the addition of other ingredients like non-dairy creamer and toppings in the tea could raise the fat and sugar content of the tea and increase the risk of chronic diseases. Non-dairy creamer is a milk substitute that contains trans fat in the form of hydrogenated palm oil. The hospital warned that this oil has been strongly correlated with an increased risk of heart disease and stroke.",
"title": "Potential health concerns"
},
{
"paragraph_id": 36,
"text": "The other concern about bubble tea is its high calorie content, partially attributed to the high-carbohydrate tapioca pearls (or 珍珠 zhēn zhū), which can make up to half the calorie-count in a 500 ml serving of bubble tea.",
"title": "Potential health concerns"
}
] | Bubble tea is a tea-based drink that originated in Taiwan in the early 1980s. Taiwanese immigrants brought it to the United States in the 1990s, initially in California through regions including Los Angeles County, but the drink has also spread to other countries where there is a large East Asian diaspora population. Bubble tea most commonly consists of tea accompanied by chewy tapioca balls, but it can be made with other toppings as well, such as grass jelly, aloe vera, red bean, and popping boba. It has many varieties and flavors, but the two most popular varieties are pearl black milk tea and pearl green milk tea. | 2001-11-04T23:52:27Z | 2023-12-30T00:02:39Z | [
"Template:Pp",
"Template:Infobox food",
"Template:Commons category-inline",
"Template:Teas",
"Template:Short description",
"Template:Citation needed",
"Template:Annotated link",
"Template:Cite magazine",
"Template:Cbignore",
"Template:Taiwanese cuisine",
"Template:Authority control",
"Template:Zh",
"Template:Reflist",
"Template:Cite news",
"Template:Cite thesis",
"Template:Portal bar",
"Template:Portal",
"Template:Ill",
"Template:Cite journal",
"Template:Cite web",
"Template:Cite book",
"Template:Use dmy dates"
] | https://en.wikipedia.org/wiki/Bubble_tea |
4,049 | Battle of Blenheim | The Battle of Blenheim (German: Zweite Schlacht bei Höchstädt; French: Bataille de Höchstädt; Dutch: Slag bij Blenheim), fought on 13 August [O.S. 2 August] 1704, was a major battle of the War of the Spanish Succession. The overwhelming Allied victory ensured the safety of Vienna from the Franco-Bavarian army, thus preventing the collapse of the reconstituted Grand Alliance.
Louis XIV of France sought to knock the Holy Roman Emperor, Leopold, out of the war by seizing Vienna, the Habsburg capital, and to gain a favourable peace settlement. The dangers to Vienna were considerable: Maximilian II Emanuel, Elector of Bavaria, and Marshal Ferdinand de Marsin's forces in Bavaria threatened from the west, and Marshal Louis Joseph de Bourbon, duc de Vendôme's large army in northern Italy posed a serious danger with a potential offensive through the Brenner Pass. Vienna was also under pressure from Rákóczi's Hungarian revolt from its eastern approaches. Realising the danger, the Duke of Marlborough resolved to alleviate the peril to Vienna by marching his forces south from Bedburg to help maintain Emperor Leopold within the Grand Alliance.
A combination of deception and skilled administration – designed to conceal his true destination from friend and foe alike – enabled Marlborough to march 400 km (250 mi) unhindered from the Low Countries to the River Danube in five weeks. After securing Donauwörth on the Danube, Marlborough sought to engage Maximilian's and Marsin's army before Marshal Camille d'Hostun, duc de Tallard, could bring reinforcements through the Black Forest. The Franco-Bavarian commanders proved reluctant to fight until their numbers were deemed sufficient, and Marlborough failed in his attempts to force an engagement. When Tallard arrived to bolster Maximilian's army, and Prince Eugene of Savoy arrived with reinforcements for the Allies, the two armies finally met on the banks of the Danube in and around the small village of Blindheim, from which the English "Blenheim" is derived.
Blenheim was one of the battles that altered the course of the war, which until then had been favouring the French and Spanish Bourbons. Although the battle did not win the war, it prevented a potentially devastating loss for the Grand Alliance and shifted the war's momentum, ending French plans of knocking Emperor Leopold out of the war. The French suffered catastrophic casualties in the battle, including their commander-in-chief, Tallard, who was taken captive to England. Before the 1704 campaign ended, the Allies had taken Landau, and the towns of Trier and Trarbach on the Moselle in preparation for the following year's campaign into France itself. This offensive never materialised, for the Grand Alliance's army had to depart the Moselle to defend Liège from a French counter-offensive. The war continued for another decade before ending in 1714.
By 1704, the War of the Spanish Succession was in its fourth year. The previous year had been one of successes for France and her allies, most particularly on the Danube, where Marshal Claude-Louis-Hector de Villars and Maximilian II Emanuel, Elector of Bavaria, had created a direct threat to Vienna, the Habsburg capital. Vienna had been saved by dissension between the two commanders, leading to Villars being replaced by the less dynamic Marshal Ferdinand de Marsin. Nevertheless, the threat was still real: Rákóczi's Hungarian revolt was threatening the Empire's eastern approaches, and Marshal Louis Joseph, Duke of Vendôme's forces threatened an invasion from northern Italy. In the courts of Versailles and Madrid, Vienna's fall was confidently anticipated, an event which would almost certainly have led to the collapse of the reconstituted Grand Alliance.
To isolate the Danube from any Allied intervention, Marshal François de Neufville, duc de Villeroi's 46,000 troops were expected to pin the 70,000 Dutch and British troops around Maastricht in the Low Countries, while General Robert Jean Antoine de Franquetot de Coigny protected Alsace against surprise with a further corps. The only forces immediately available for Vienna's defence were the imperial army under Margrave Louis William of Baden of 36,000 men stationed in the Lines of Stollhofen to watch Marshal Camille d'Hostun, duc de Tallard, at Strasbourg; and 10,000 men under Prince Eugene of Savoy south of Ulm.
Both the Imperial Austrian Ambassador in London, Count Wratislaw, and the Duke of Marlborough realised the implications of the situation on the Danube. The Dutch were against any adventurous military operation as far south as the Danube and would not permit any major weakening of the forces in the Spanish Netherlands. Marlborough, realising the only way to reinforce the Austrians was by the use of secrecy and guile, set out to deceive his Dutch allies by pretending to move his troops to the Moselle – a plan approved of by The Hague – but once there, he would slip the Dutch leash and link up with Austrian forces in southern Germany.
This does not mean that he proceeded entirely without consultation with the Dutch. Without them, the army's logistics system would have simply collapsed. Intensive consultations preceded the campaign and Anthonie Heinsius, the Dutch Grand Pensionary, was likely informed by Marlborough of his secret plan to link up with Austrian forces. Many other important Dutchmen, like Major-General Johan Wijnand van Goor, were in favour of helping the Emperor and participated in the campaign. The Dutch diplomat and field deputy Van Rechteren-Almelo also played an important role. He made sure that on their 450-kilometer-long march, the Allies would nowhere be denied passage by local rulers, nor would they need to look for provisions, horsefeed or new boots. He also saw to it that sufficient stopovers were arranged along the way to ensure that the Allies arrived at their destination in good condition. This was of paramount importance, for the success of the operation depended on a quick elimination of the Bavarian elector. However, it was not possible to make the logistical arrangements in advance that would have been indispensable to supply the Allied army south of the Danube. For this, the Allies should have had access to the free imperial cities of Ulm and Augsburg, but the Bavarian elector had taken these two cities. This could have become a problem for Marlborough had the Elector avoided a battle and instead entrenched himself south of the Danube. Had Villeroy then managed to take advantage of the weakening of Allied forces in the Netherlands by recapturing Liège and besieging Maastricht, it would have validated the concerns of his Dutch adversaries.
A scarlet caterpillar, upon which all eyes were at once fixed, began to crawl steadfastly day by day across the map of Europe, dragging the whole war with it. – Winston Churchill
Marlborough's march started on 19 May from Bedburg, 32 km (20 mi) northwest of Cologne. The army assembled by Marlborough's brother, General Charles Churchill, consisted of 66 squadrons of cavalry, 31 battalions of infantry and 38 guns and mortars, totalling 21,000 men, 16,000 of whom were British. This force was augmented en route, and by the time it reached the Danube it numbered 40,000 – 47 battalions and 88 squadrons. While Marlborough led this army south, the Dutch general, Henry Overkirk, Count of Nassau, maintained a defensive position in the Dutch Republic against the possibility of Villeroi mounting an attack. Marlborough had assured the Dutch that if the French were to launch an offensive he would return in good time, but he calculated that as he marched south, the French army would be drawn after him. In this assumption Marlborough proved correct: Villeroi shadowed Marlborough with 30,000 men in 60 squadrons and 42 battalions. Marlborough wrote to Godolphin: "I am very sensible that I take a great deal upon me, but should I act otherwise, the Empire would be undone ..."
In the meantime, the appointment of Henry Overkirk as Field Marshal caused significant controversy in the Dutch Republic. After the Earl of Athlone's death, the Dutch States General had put Overkirk in charge of the Dutch States Army, which led to much discontent among the other high-ranking Dutch generals. Ernst Wilhelm von Salisch, Daniël van Dopff and Menno van Coehoorn threatened to resign or go into the service of other countries, although all were eventually convinced to stay. The new infantry generals were also disgruntled – the Lord of Slangenburg because he had to serve under the less experienced Overkirk; and the Count of Noyelles because he had to follow the orders of the 'insupportable' Slangenburg. Then there was the major problem of the position of the Prince of Orange. The provinces of Friesland and Groningen demanded that their 17-year-old stadtholder be appointed supreme infantry general. This divided the parties so much that a second Grand Assembly, such as had existed in 1651, was considered. However, after pressure from the other provinces, Friesland and Groningen adjusted their demands and a compromise was found. The Prince of Orange would nominally be appointed infantry general, behind Slangenburg and Noyelles, but he would not actually exercise command until he turned 20.
While the Allies were making their preparations, the French were striving to maintain and re-supply Marsin. He had been operating with Maximilian II against Margrave Louis William, and was somewhat isolated from France: his only lines of communication lay through the rocky passes of the Black Forest. On 14 May, Tallard brought 8,000 reinforcements and vast supplies and munitions through the difficult terrain, whilst outmanoeuvring Johann Karl von Thüngen, the Imperial general who sought to block his path. Tallard then returned with his own force to the Rhine, once again side-stepping Thüngen's efforts to intercept him.
On 26 May, Marlborough reached Coblenz, where the Moselle meets the Rhine. If he intended an attack along the Moselle his army would now have to turn west; instead it crossed to the right bank of the Rhine, and was reinforced by 5,000 waiting Hanoverians and Prussians. The French realised that there would be no campaign on the Moselle. A second possible objective now occurred to them – an Allied incursion into Alsace and an attack on Strasbourg. Marlborough furthered this apprehension by constructing bridges across the Rhine at Philippsburg, a ruse that not only encouraged Villeroi to come to Tallard's aid in the defence of Alsace, but one that ensured the French plan to march on Vienna was delayed while they waited to see what Marlborough's army would do.
Encouraged by Marlborough's promise to return to the Netherlands if a French attack developed there – he could transfer his troops up the Rhine on barges at a rate of 130 km (80 mi) a day – the Dutch States General agreed to release the Danish contingent of seven battalions and 22 squadrons as reinforcements. Marlborough reached Ladenburg, in the plain of the Neckar and the Rhine, and there halted for three days to rest his cavalry and allow the guns and infantry to close up. On 6 June he arrived at Wiesloch, south of Heidelberg. The following day, the Allied army swung away from the Rhine towards the hills of the Swabian Jura and the Danube beyond. At last Marlborough's destination was established without doubt.
On 10 June, Marlborough met for the first time the President of the Imperial War Council, Prince Eugene – accompanied by Count Wratislaw – at the village of Mundelsheim, halfway between the Danube and the Rhine. By 13 June, the Imperial Field Commander, Margrave Louis William of Baden, had joined them in Großheppach. The three generals commanded a force of nearly 110,000 men. At this conference, it was decided that Prince Eugene would return with 28,000 men to the Lines of Stollhofen on the Rhine to watch Villeroi and Tallard and prevent them going to the aid of the Franco-Bavarian army on the Danube. Meanwhile, Marlborough's and Margrave Louis William's forces would combine, totalling 80,000 men, and march on the Danube to seek out Maximilian II and Marsin before they could be reinforced.
Knowing Marlborough's destination, Tallard and Villeroi met at Landau in the Palatinate on 13 June to construct a plan to save Bavaria. The rigidity of the French command system was such that any variations from the original plan had to be sanctioned by Versailles. The Count of Mérode-Westerloo, commander of the Flemish troops in Tallard's army, wrote "One thing is certain: we delayed our march from Alsace for far too long and quite inexplicably." Approval from King Louis arrived on 27 June: Tallard was to reinforce Marsin and Maximilian II on the Danube via the Black Forest, with 40 battalions and 50 squadrons; Villeroi was to pin down the Allies defending the Lines of Stollhofen, or, if the Allies should move all their forces to the Danube, he was to join with Tallard; Coigny with 8,000 men would protect Alsace. On 1 July Tallard's army of 35,000 re-crossed the Rhine at Kehl and began its march.
On 22 June, Marlborough's forces linked up with the Imperial forces at Launsheim, having covered 400 km (250 mi) in five weeks. Thanks to a carefully planned timetable, the effects of wear and tear had been kept to a minimum. Captain Parker described the march discipline: "As we marched through the country of our Allies, commissars were appointed to furnish us with all manner of necessaries for man and horse ... the soldiers had nothing to do but pitch their tents, boil kettles and lie down to rest." In response to Marlborough's manoeuvres, Maximilian and Marsin, conscious of their numerical disadvantage with only 40,000 men, moved their forces to the entrenched camp at Dillingen on the north bank of the Danube. Marlborough could not attack Dillingen because of a lack of siege guns – he had been unable to bring any from the Low Countries, and Margrave Louis William had failed to supply any, despite prior assurances that he would.
The Allies needed a base for provisions and a good river crossing. Consequently, on 2 July Marlborough stormed the fortress of Schellenberg on the heights above the town of Donauwörth. Count Jean d'Arco had been sent with 12,000 men from the Franco-Bavarian camp to hold the town and grassy hill, but after a fierce battle, with heavy casualties on both sides, Schellenberg fell. Donauwörth surrendered shortly afterwards. Maximilian, knowing his position at Dillingen was now untenable, withdrew behind the strong fortifications of Augsburg.
Tallard's march presented a dilemma for Prince Eugene. If the Allies were not to be outnumbered on the Danube, he realised that he had either to try to cut Tallard off before he could get there or to reinforce Marlborough. If he withdrew from the Rhine to the Danube, however, Villeroi might also move south to link up with Maximilian and Marsin. Prince Eugene compromised: leaving 12,000 troops behind to guard the Lines of Stollhofen, he marched off with the rest of his army to forestall Tallard.
Lacking in numbers, Prince Eugene could not seriously disrupt Tallard's march, but the French marshal's progress was proving slow. Tallard's force had suffered considerably more than Marlborough's troops on their march – many of his cavalry horses were suffering from glanders and the mountain passes were proving tough for the 2,000 wagonloads of provisions. Local German peasants, angry at French plundering, compounded Tallard's problems, leading Mérode-Westerloo to bemoan – "the enraged peasantry killed several thousand of our men before the army was clear of the Black Forest."
At Augsburg, Maximilian was informed on 14 July that Tallard was on his way through the Black Forest. This good news bolstered his policy of inaction, further encouraging him to wait for the reinforcements. This reluctance to fight induced Marlborough to undertake a controversial policy of spoliation in Bavaria, burning buildings and crops throughout the rich lands south of the Danube. This had two aims: firstly, to put pressure on Maximilian to fight or come to terms before Tallard arrived with reinforcements; and secondly, to ruin Bavaria as a base from which the French and Bavarian armies could attack Vienna, or pursue Marlborough into Franconia if, at some stage, he had to withdraw northwards. But this destruction, coupled with a protracted siege of the town of Rain from 9 to 16 July, caused Prince Eugene to lament "... since the Donauwörth action I cannot admire their performances", and later to conclude "If he has to go home without having achieved his objective, he will certainly be ruined."
Tallard, with 34,000 men, reached Ulm, joining with Maximilian and Marsin at Augsburg on 5 August, although Maximilian had dispersed his army in response to Marlborough's campaign of ravaging the region. Also on 5 August, Prince Eugene reached Höchstädt, riding that same night to meet with Marlborough at Schrobenhausen. Marlborough knew that another crossing point over the Danube was required in case Donauwörth fell to the enemy; so on 7 August, the first of Margrave Louis William's 15,000 Imperial troops left Marlborough's main force to besiege the heavily defended city of Ingolstadt, 32 km (20 mi) farther down the Danube, with the remainder following two days later.
With Prince Eugene's forces at Höchstädt on the north bank of the Danube, and Marlborough's at Rain on the south bank, Tallard and Maximilian debated their next move. Tallard preferred to bide his time, replenish supplies and allow Marlborough's Danube campaign to flounder in the colder autumn weather; Maximilian and Marsin, newly reinforced, were keen to push ahead. The French and Bavarian commanders eventually agreed to attack Prince Eugene's smaller force. On 9 August, the Franco-Bavarian forces began to cross to the north bank of the Danube. On 10 August, Prince Eugene sent an urgent dispatch reporting that he was falling back to Donauwörth. By a series of swift marches Marlborough concentrated his forces on Donauwörth and, by noon 11 August, the link-up was complete.
During 11 August, Tallard pushed forward from the river crossings at Dillingen. By 12 August, the Franco-Bavarian forces were encamped behind the small River Nebel near the village of Blenheim on the plain of Höchstädt. On the same day, Marlborough and Prince Eugene carried out a reconnaissance of the French position from the church spire at Tapfheim, and moved their combined forces to Münster – eight kilometres (five miles) from the French camp. A French reconnaissance force under Jacques Joseph Vipart, Marquis de Silly, went forward to probe the enemy, but was driven off by Allied troops who had deployed to cover the pioneers of the advancing army, labouring to bridge the numerous streams in the area and improve the passage leading westwards to Höchstädt. Marlborough quickly moved forward two brigades under the command of Lieutenant-General John Wilkes and Brigadier Archibald Rowe to secure the narrow strip of land between the Danube and the wooded Fuchsberg hill, at the Schwenningen defile. Tallard's army numbered 56,000 men and 90 guns; the army of the Grand Alliance, 52,000 men and 66 guns. Some Allied officers, acquainted with the enemy's superior numbers and aware of their strong defensive position, remonstrated with Marlborough about the hazards of attacking; but he was resolute – partly because the Dutch officer Willem Vleertman had scouted the marshy ground before them and reported that the land was perfectly suitable for the troops.
The battlefield stretched for nearly 6 km (3½ mi). The extreme right flank of the Franco-Bavarian army rested on the Danube; the undulating pine-covered hills of the Swabian Jura lay to their left. A small stream, the Nebel, fronted the French line; the ground on either side of it was marshy and only intermittently fordable. The French right rested on the village of Blenheim near where the Nebel flows into the Danube; the village itself was surrounded by hedges, fences, enclosed gardens, and meadows. Between Blenheim and the village of Oberglauheim to the north-west, the fields of wheat had been cut to stubble and were now ideal for the deployment of troops. From Oberglauheim to the next hamlet of Lutzingen, the terrain of ditches, thickets and brambles was potentially difficult ground for the attackers.
At 02:00 on 13 August, 40 Allied cavalry squadrons were sent forward, followed at 03:00, in eight columns, by the main Allied force pushing over the River Kessel. At about 06:00 they reached Schwenningen, three kilometres (two miles) from Blenheim. The British and German troops who had held Schwenningen through the night joined the march, making a ninth column on the left of the army. Marlborough and Prince Eugene made their final plans. The Allied commanders agreed that Marlborough would command 36,000 troops and attack Tallard's force of 33,000 on the left, including capturing the village of Blenheim, while Prince Eugene's 16,000 men would attack Maximilian and Marsin's combined forces of 23,000 troops on the right. If this attack was pressed hard, it was anticipated that Maximilian and Marsin would feel unable to send troops to aid Tallard on their right. Lieutenant-General John Cutts would attack Blenheim in concert with Prince Eugene's attack. With the French flanks busy, Marlborough could cross the Nebel and deliver the fatal blow to the French at their centre. The Allies would have to wait until Prince Eugene was in position before the general engagement could begin.
Tallard was not anticipating an Allied attack; he had been misled by intelligence gathered from prisoners taken by de Silly the previous day, and was reassured by his army's strong position. Tallard and his colleagues believed that Marlborough and Prince Eugene were about to retreat north-westwards towards Nördlingen. Tallard wrote a report to this effect to King Louis that morning. Signal guns were fired to bring in the foraging parties and pickets as the French and Bavarian troops drew into battle-order to face the unexpected threat.
At about 08:00 the French artillery on their right wing opened fire, answered by Colonel Holcroft Blood's batteries. The guns were heard by Prince Louis in his camp before Ingolstadt. An hour later Tallard, Maximilian, and Marsin climbed Blenheim's church tower to finalise their plans. It was settled that Maximilian and Marsin would hold the front from the hills to Oberglauheim, whilst Tallard would defend the ground between Oberglauheim and the Danube. The French commanders were divided as to how to utilise the Nebel. Tallard's preferred tactic was to lure the Allies across before unleashing his cavalry upon them. This was opposed by Marsin and Maximilian who felt it better to close their infantry right up to the stream itself, so that while the enemy was struggling in the marshes, they would be caught in crossfire from Blenheim and Oberglauheim. Tallard's approach was sound if all its parts were implemented, but in the event it allowed Marlborough to cross the Nebel without serious interference and fight the battle he had planned.
The Franco-Bavarian commanders deployed their forces. In the village of Lutzingen, Count Alessandro de Maffei positioned five Bavarian battalions with a great battery of 16 guns at the village's edge. In the woods to the left of Lutzingen, seven French battalions under César Armand, Marquis de Rozel moved into place. Between Lutzingen and Oberglauheim Maximilian placed 27 squadrons of cavalry: 14 Bavarian squadrons commanded by d'Arco, with 13 more in support nearby under Veit Heinrich Moritz, Freiherr von Wolframsdorf. To their right stood Marsin's 40 French squadrons and 12 battalions. The village of Oberglauheim was packed with 14 battalions commanded by Jean-Jules-Armand Colbert, Marquis de Blainville, including the effective Irish Brigade known as the "Wild Geese". Six batteries of guns were ranged alongside the village. On the right of these French and Bavarian positions, between Oberglauheim and Blenheim, Tallard deployed 64 French and Walloon squadrons, 16 of which were from Marsin, supported by nine French battalions standing near the Höchstädt road. In the cornfield next to Blenheim stood three battalions from the Regiment de Roi. Nine battalions occupied the village itself, commanded by Philippe, Marquis de Clérambault. Four battalions stood to the rear and a further eleven were in reserve. These battalions were supported by Count Gabriel d'Hautefeuille's twelve squadrons of dismounted dragoons. By 11:00 Tallard, Maximilian, and Marsin were in place. Many of the Allied generals were hesitant to attack such a strong position. The Earl of Orkney later said that, "had I been asked to give my opinion, I had been against it."
Prince Eugene was expected to be in position by 11:00, but progress was slow due to the difficult terrain and enemy fire. Cutts' column – which by 10:00 had expelled the enemy from two water mills on the Nebel – had already deployed by the river against Blenheim, enduring severe fire over the next three hours from a six-gun heavy battery posted near the village. The rest of Marlborough's army, waiting in their ranks on the forward slope, were also forced to bear the cannonade from the French artillery, suffering 2,000 casualties before the attack could even start. Meanwhile, engineers repaired a stone bridge across the Nebel, and constructed five additional bridges or causeways across the marsh between Blenheim and Oberglauheim. Marlborough's anxiety was finally allayed when, just past noon, Colonel William Cadogan reported that Prince Eugene's Prussian and Danish infantry were in place – the order for the general advance was given. At 13:00, Cutts was ordered to attack the village of Blenheim whilst Prince Eugene was requested to assault Lutzingen on the Allied right flank.
Cutts ordered Rowe's brigade to attack. The English infantry rose from the edge of the Nebel, and silently marched towards Blenheim, a distance of some 150 m (160 yd). James Ferguson's Scottish brigade supported Rowe's left, and moved towards the barricades between the village and the river, defended by Hautefeuille's dragoons. As the range closed to within 30 m (33 yd), the French fired a deadly volley. Rowe had ordered that there should be no firing from his men until he struck his sword upon the palisades, but as he stepped forward to give the signal, he fell mortally wounded. The survivors of the leading companies closed up the gaps in their ranks and rushed forward. Small parties penetrated the defences, but repeated French volleys forced the English back and inflicted heavy casualties. As the attack faltered, eight squadrons of elite Gens d'Armes, commanded by the veteran Swiss officer Béat Jacques II de Zurlauben, fell on the English troops, cutting at the exposed flank of Rowe's own regiment. Wilkes' Hessian brigade, nearby in the marshy grass at the water's edge, stood firm and repulsed the Gens d'Armes with steady fire, enabling the English and Hessians to re-order and launch another attack.
Although the Allies were again repulsed, these persistent attacks on Blenheim eventually bore fruit, panicking Clérambault into making the worst French error of the day. Without consulting Tallard, Clérambault ordered his reserve battalions into the village, upsetting the balance of the French position and nullifying the French numerical superiority. "The men were so crowded in upon one another", wrote Mérode-Westerloo, "that they couldn't even fire – let alone receive or carry out any orders". Marlborough, spotting this error, now countermanded Cutts' intention to launch a third attack, and ordered him simply to contain the enemy within Blenheim; no more than 5,000 Allied soldiers were able to pen in twice the number of French infantry and dragoons.
... Prince Eugene and the Imperial troops had been repulsed three times – driven right back to the woods – and had taken a real drubbing. – Mérode-Westerloo.
On the Allied right, Prince Eugene's Prussian and Danish forces were desperately fighting the numerically superior forces of Maximilian and Marsin. Leopold I, Prince of Anhalt-Dessau, led forward four brigades across the Nebel to assault the well-fortified position of Lutzingen. Here, the Nebel was less of an obstacle, but the great battery positioned on the edge of the village enjoyed a good field of fire across the open ground stretching to the hamlet of Schwennenbach. As soon as the infantry crossed the stream, they were struck by Maffei's infantry, and salvoes from the Bavarian guns positioned both in front of the village and in enfilade on the wood-line to the right. Despite heavy casualties the Prussians attempted to storm the great battery, whilst the Danes, under Count Jobst von Scholten, attempted to drive the French infantry out of the copses beyond the village.
With the infantry heavily engaged, Prince Eugene's cavalry picked its way across the Nebel. After an initial success, his first line of cavalry, under the Imperial General of Horse, Prince Maximilian of Hanover, was pressed by the second line of Marsin's cavalry and forced back across the Nebel in confusion. The exhausted French were unable to follow up their advantage, and both cavalry forces tried to regroup and reorder their ranks. Without cavalry support, and threatened with envelopment, the Prussian and Danish infantry were in turn forced to pull back across the Nebel. Panic gripped some of Prince Eugene's troops as they crossed the stream. Ten infantry colours were lost to the Bavarians, and hundreds of prisoners taken; it was only through the leadership of Prince Eugene and Prince Maximilian of Hanover that the Imperial infantry was prevented from abandoning the field.
After rallying his troops near Schwennenbach – well beyond their starting point – Prince Eugene prepared to launch a second attack, led by the second-line squadrons under the Duke of Württemberg-Teck. Yet again they were caught in the murderous crossfire from the artillery in Lutzingen and Oberglauheim, and were once again thrown back in disarray. The French and Bavarians were almost as disordered as their opponents, and they too were in need of inspiration from their commander, Maximilian, who was seen " ... riding up and down, and inspiring his men with fresh courage." Anhalt-Dessau's Danish and Prussian infantry attacked a second time but could not sustain the advance without proper support. Once again they fell back across the stream.
Whilst these events around Blenheim and Lutzingen were taking place, Marlborough was preparing to cross the Nebel. Hulsen's brigade of Hessians and Hanoverians and the Earl of Orkney's British brigade advanced across the stream, supported by dismounted British dragoons and ten British cavalry squadrons. This covering force allowed Charles Churchill's Dutch, British and German infantry and further cavalry units to advance and form up on the plain beyond. Marlborough arranged his infantry battalions in a novel manner, with gaps sufficient to allow the cavalry to move freely between them, and ordered the formation forward. Once again Zurlauben's Gens d'Armes charged, looking to rout Henry Lumley's English cavalry, which linked Cutts' column facing Blenheim with Churchill's infantry. As the elite French cavalry attacked, they were faced by five English squadrons under Colonel Francis Palmes. To the consternation of the French, the Gens d'Armes were pushed back in confusion and pursued well beyond the Maulweyer stream that flows through Blenheim. "What? Is it possible?" exclaimed Maximilian, "the gentlemen of France fleeing?" Palmes attempted to follow up his success but was repulsed by other French cavalry and musket fire from the edge of Blenheim.
Nevertheless, Tallard was alarmed by the repulse of the Gens d'Armes and urgently rode across the field to ask Marsin for reinforcements; but Marsin, hard pressed by Prince Eugene – whose second attack was in full flood – refused. As Tallard consulted with Marsin, more of his infantry were taken into Blenheim by Clérambault. Fatally, Tallard, although aware of the situation, did nothing to rectify it, leaving himself with just nine battalions of infantry near the Höchstädt road to oppose the massed enemy ranks in the centre. Zurlauben tried several more times to disrupt the Allies forming on Tallard's side of the stream. His front-line cavalry darted forward down the gentle slope towards the Nebel, but the attacks lacked co-ordination, and the Allied infantry's steady volleys disconcerted the French horsemen. During these skirmishes Zurlauben fell mortally wounded; he died two days later. By this stage it was just after 15:00.
The Danish cavalry, under Carl Rudolf, Duke of Württemberg-Neuenstadt, had made slow work of crossing the Nebel near Oberglauheim. Harassed by Marsin's infantry near the village, the Danes were driven back across the stream. Count Horn's Dutch infantry managed to push the French back from the water's edge, but it was apparent that before Marlborough could launch his main effort against Tallard, Oberglauheim would have to be secured.
Count Horn directed Anton Günther, Fürst von Holstein-Beck, to take the village, but his two Dutch brigades were cut down by the French and Irish troops, who captured and badly wounded Holstein-Beck during the action. The battle now hung in the balance. If Holstein-Beck's Dutch column were destroyed, the Allied army would be split in two: Prince Eugene's wing would be isolated from Marlborough's, passing the initiative to the Franco-Bavarian forces. Seeing the opportunity, Marsin ordered his cavalry to turn from facing Prince Eugene towards their right and the open flank of Churchill's infantry drawn up in front of Unterglau. Marlborough, who had crossed the Nebel on a makeshift bridge to take personal control, ordered Hulsen's Hanoverian battalions to support the Dutch infantry. A nine-gun artillery battery and a Dutch cavalry brigade under Averock were also called forward, but the cavalry soon came under pressure from Marsin's more numerous squadrons.
Marlborough now requested Prince Eugene to release Count Hendrick Fugger and his Imperial Cuirassier brigade to help repel the French cavalry thrust. Despite his own difficulties, Prince Eugene at once complied. Although the Nebel stream lay between Fugger's and Marsin's squadrons, the French were forced to change front to meet this new threat, thus preventing Marsin from striking at Marlborough's infantry. Fugger's cuirassiers charged and, striking at a favourable angle, threw back Marsin's squadrons in disorder. With support from Blood's batteries, the Hessian, Hanoverian and Dutch infantry – now commanded by Count Berensdorf – succeeded in pushing the French and Irish infantry back into Oberglauheim so that they could not again threaten Churchill's flank as he moved against Tallard. The French commander in the village, de Blainville, was numbered among the heavy casualties.
The [French] foot remained in the best order I ever saw, till they were cut to pieces almost in rank and file. – Lord Orkney.
By 16:00, with large parts of the Franco-Bavarian army besieged in Blenheim and Oberglauheim, the Allied centre of 81 squadrons (nine squadrons had been transferred from Cutts' column) supported by 18 battalions was firmly planted amidst the French line of 64 squadrons and nine battalions of raw recruits. There was now a pause in the battle: Marlborough wanted to attack simultaneously along the whole front, and Prince Eugene, after his second repulse, needed time to reorganise.
By just after 17:00 all was ready along the Allied front. Marlborough's two lines of cavalry had now moved to the front of his line of battle, with the two supporting lines of infantry behind them. Mérode-Westerloo attempted to extricate some French infantry crowded into Blenheim, but Clérambault ordered the troops back into the village. The French cavalry exerted themselves once more against the Allied first line – Lumley's English and Scots on the Allied left, and Reinhard Vincent Graf von Hompesch's Dutch and German squadrons on the Allied right. Tallard's squadrons, which lacked infantry support and were tiring, managed to push the Allied first line back onto its own supporting foot. With the battle still not won, Marlborough had to rebuke one of his cavalry officers who was attempting to leave the field – "Sir, you are under a mistake, the enemy lies that way ..." Marlborough commanded the second Allied line, under Cuno Josua von Bülow and Friedrich Johann von Bothmer, to move forward, and, driving through the centre, the Allies finally routed Tallard's tired cavalry. The Prussian Life Dragoons' Colonel, Ludwig von Blumenthal, and his second in command, Lieutenant Colonel von Hacke, fell next to each other, but the charge succeeded. With their cavalry in headlong flight, the remaining nine French infantry battalions fought with desperate valour, trying to form a square, but they were overwhelmed by Blood's close-range artillery and platoon fire. Mérode-Westerloo later wrote – "[They] died to a man where they stood, stationed right out in the open plain – supported by nobody."
The majority of Tallard's retreating troops headed for Höchstädt but most did not make the safety of the town, plunging instead into the Danube where over 3,000 French horsemen drowned; others were cut down by the pursuing Allied cavalry. The Marquis de Gruignan attempted a counter-attack, but he was brushed aside by the triumphant Allies. After a final rally behind his camp's tents, shouting entreaties to stand and fight, Tallard was caught up in the rout and swept towards Sonderheim. Surrounded by a squadron of Hessian troops, Tallard surrendered to Lieutenant Colonel de Boinenburg, the Prince of Hesse-Kassel's aide-de-camp, and was sent under escort to Marlborough. Marlborough welcomed the French commander – "I am very sorry that such a cruel misfortune should have fallen upon a soldier for whom I have the highest regard."
Meanwhile, the Allies had once again attacked the Bavarian stronghold at Lutzingen. Prince Eugene became exasperated with the performance of his Imperial cavalry whose third attack had failed: he had already shot two of his troopers to prevent a general flight. Then, declaring in disgust that he wished to "fight among brave men and not among cowards", Prince Eugene went into the attack with the Prussian and Danish infantry, as did Leopold I, waving a regimental colour to inspire his troops. This time the Prussians were able to storm the great Bavarian battery, and overwhelm the guns' crews. Beyond the village, Scholten's Danes defeated the French infantry in a desperate hand-to-hand bayonet struggle. When they saw that the centre had broken, Maximilian and Marsin decided the battle was lost; like the remnants of Tallard's army, they fled the battlefield, albeit in better order than Tallard's men. Attempts to organise an Allied force to prevent Marsin's withdrawal failed owing to the exhaustion of the cavalry, and the growing confusion in the field.
... our men fought in and through the fire ... until many on both sides were burned to death. – Private Deane, 1st Regiment Foot Guards.
Marlborough now turned his attention from the fleeing enemy and directed Churchill to detach more infantry to storm Blenheim. Orkney's infantry, Hamilton's English brigade and St Paul's Hanoverians moved across the trampled wheat to the cottages. Fierce hand-to-hand fighting gradually forced the French towards the village centre, in and around the walled churchyard which had been prepared for defence. Lord John Hay and Charles Ross's dismounted dragoons were also sent, but suffered under a counter-charge delivered by the regiments of Artois and Provence under the command of Colonel de la Silvière. Colonel Belville's Hanoverians were fed into the battle to steady the resolve of the dragoons, who attacked again. The Allied progress was slow and hard, and like the defenders, they suffered many casualties.
Many of the cottages were now burning, obscuring the field of fire and driving the defenders out of their positions. Hearing the din of battle in Blenheim, Tallard sent a message to Marlborough offering to order the garrison to withdraw from the field. "Inform Monsieur Tallard", replied Marlborough, "that, in the position in which he is now, he has no command." Nevertheless, as dusk came the Allied commander was anxious for a quick conclusion. The French infantry fought tenaciously to hold on to their position in Blenheim, but their commander was nowhere to be found. By now Blenheim was under assault from every side by three British generals: Cutts, Churchill, and Orkney. The French had repulsed every attack, but many had seen what had happened on the plain: their army was routed and they were cut off. Orkney, attacking from the rear, now tried a different tactic – "... it came into my head to beat parley", he later wrote, "which they accepted of and immediately their Brigadier de Nouville capitulated with me to be prisoner at discretion and lay down their arms." Threatened by Allied guns, other units followed their example. It was not until 21:00 that the Marquis de Blanzac, who had taken charge in Clérambault's absence, reluctantly accepted the inevitability of defeat; some 10,000 of France's best infantry laid down their arms.
During these events Marlborough was still in the saddle organising the pursuit of the broken enemy. Pausing for a moment, he scribbled on the back of an old tavern bill a note addressed to his wife, Sarah: "I have no time to say more but to beg you will give my duty to the Queen, and let her know her army has had a glorious victory."
French losses were immense: over 27,000 killed, wounded and captured. Moreover, the myth of French invincibility had been destroyed, and King Louis's hopes of a victorious early peace were over. Mérode-Westerloo summarised the case against Tallard's army:
The French lost this battle for a wide variety of reasons. For one thing they had too good an opinion of their own ability ... Another point was their faulty field dispositions, and in addition there was rampant indiscipline and inexperience displayed ... It took all these faults to lose so celebrated a battle.
It was a hard-fought contest: Prince Eugene observed that "I have not a squadron or battalion which did not charge four times at least."
Although the war dragged on for years, the Battle of Blenheim was probably its most decisive victory; Marlborough and Prince Eugene had saved the Habsburg Empire and thereby preserved the Grand Alliance from collapse. Munich, Augsburg, Ingolstadt, Ulm and the remaining territory of Bavaria soon fell to the Allies. By the Treaty of Ilbersheim, signed on 7 November, Bavaria was placed under Austrian military rule, allowing the Habsburgs to use its resources for the rest of the conflict.
The remnants of Maximilian and Marsin's wing limped back to Strasbourg, losing another 7,000 men through desertion. Despite being offered the chance to remain as ruler of Bavaria, under the strict terms of an alliance with Austria, Maximilian left his country and family in order to continue the war against the Allies from the Spanish Netherlands where he still held the post of governor-general. Tallard – who, unlike his subordinates, was not ransomed or exchanged – was taken to England and imprisoned in Nottingham until his release in 1711.
The 1704 campaign lasted longer than usual, for the Allies sought to extract the maximum advantage. Realising that France was too powerful to be forced to make peace by a single victory, Prince Eugene, Marlborough and Prince Louis met to plan their next moves. For the following year Marlborough proposed a campaign along the valley of the Moselle to carry the war deep into France. This required the capture of the major fortress of Landau, which guarded the Rhine, and the towns of Trier and Trarbach on the Moselle itself. Trier was taken on 27 October and Landau fell on 23 November to Prince Louis and Prince Eugene; with the fall of Trarbach on 20 December, the campaign season for 1704 came to an end. The planned offensive never materialised, as the Grand Alliance's army had to depart the Moselle to defend Liège from a French counter-offensive. The war raged on for another decade.
Marlborough returned to England on 14 December (O.S.) to the acclamation of Queen Anne and the country. In the first days of January, the 110 cavalry standards and 128 infantry colours that had been captured during the battle were borne in procession to Westminster Hall. In February 1705, Queen Anne, who had made Marlborough a duke in 1702, granted him the Park of Woodstock and promised a sum of £240,000 to build a suitable house as a gift from a grateful Crown in recognition of his victory; this resulted in the construction of Blenheim Palace. The British historian Sir Edward Shepherd Creasy considered Blenheim one of the pivotal battles in history, writing: "Had it not been for Blenheim, all Europe might at this day suffer under the effect of French conquests resembling those of Alexander in extent and those of the Romans in durability." The military historian John A. Lynn considers this claim unjustified, for King Louis never had such an objective; the campaign in Bavaria was intended only to bring a favourable peace settlement, not domination over Europe.
The Lake Poet Robert Southey criticised the Battle of Blenheim in his anti-war poem "After Blenheim", but later praised the victory as "the greatest victory which had ever done honour to British arms".
"title": "Battle"
},
{
"paragraph_id": 25,
"text": "At 02:00 on 13 August, 40 Allied cavalry squadrons were sent forward, followed at 03:00, in eight columns, by the main Allied force pushing over the River Kessel. At about 06:00 they reached Schwenningen, three kilometres (two miles) from Blenheim. The British and German troops who had held Schwenningen through the night joined the march, making a ninth column on the left of the army. Marlborough and Prince Eugene made their final plans. The Allied commanders agreed that Marlborough would command 36,000 troops and attack Tallard's force of 33,000 on the left, including capturing the village of Blenheim, while Prince Eugene's 16,000 men would attack Maximilian and Marsin's combined forces of 23,000 troops on the right. If this attack was pressed hard, it was anticipated that Maximilian and Marsin would feel unable to send troops to aid Tallard on their right. Lieutenant-General John Cutts would attack Blenheim in concert with Prince Eugene's attack. With the French flanks busy, Marlborough could cross the Nebel and deliver the fatal blow to the French at their centre. The Allies would have to wait until Prince Eugene was in position before the general engagement could begin.",
"title": "Battle"
},
{
"paragraph_id": 26,
"text": "Tallard was not anticipating an Allied attack; he had been deceived by intelligence gathered from prisoners taken by de Silly the previous day, and his army's strong position. Tallard and his colleagues believed that Marlborough and Prince Eugene were about to retreat north-westwards towards Nördlingen. Tallard wrote a report to this effect to King Louis that morning. Signal guns were fired to bring in the foraging parties and pickets as the French and Bavarian troops drew into battle-order to face the unexpected threat.",
"title": "Battle"
},
{
"paragraph_id": 27,
"text": "At about 08:00 the French artillery on their right wing opened fire, answered by Colonel Holcroft Blood's batteries. The guns were heard by Prince Louis in his camp before Ingolstadt. An hour later Tallard, Maximilian, and Marsin climbed Blenheim's church tower to finalise their plans. It was settled that Maximilian and Marsin would hold the front from the hills to Oberglauheim, whilst Tallard would defend the ground between Oberglauheim and the Danube. The French commanders were divided as to how to utilise the Nebel. Tallard's preferred tactic was to lure the Allies across before unleashing his cavalry upon them. This was opposed by Marsin and Maximilian who felt it better to close their infantry right up to the stream itself, so that while the enemy was struggling in the marshes, they would be caught in crossfire from Blenheim and Oberglauheim. Tallard's approach was sound if all its parts were implemented, but in the event it allowed Marlborough to cross the Nebel without serious interference and fight the battle he had planned.",
"title": "Battle"
},
{
"paragraph_id": 28,
"text": "The Franco-Bavarian commanders deployed their forces. In the village of Lutzingen, Count Alessandro de Maffei positioned five Bavarian battalions with a great battery of 16 guns at the village's edge. In the woods to the left of Lutzingen, seven French battalions under César Armand, Marquis de Rozel moved into place. Between Lutzingen and Oberglauheim Maximilian placed 27 squadrons of cavalry and 14 Bavarian squadrons commanded by d'Arco with 13 more in support nearby under Baron Veit Heinrich Moritz Freiherr von Wolframsdorf. To their right stood Marsin's 40 French squadrons and 12 battalions. The village of Oberglauheim was packed with 14 battalions commanded by Jean-Jules-Armand Colbert, Marquis de Blainville [fr], including the effective Irish Brigade known as the \"Wild Geese\". Six batteries of guns were ranged alongside the village. On the right of these French and Bavarian positions, between Oberglauheim and Blenheim, Tallard deployed 64 French and Walloon squadrons, 16 of which were from Marsin, supported by nine French battalions standing near the Höchstädt road. In the cornfield next to Blenheim stood three battalions from the Regiment de Roi. Nine battalions occupied the village itself, commanded by Philippe, Marquis de Clérambault. Four battalions stood to the rear and a further eleven were in reserve. These battalions were supported by Count Gabriel d'Hautefeuille's twelve squadrons of dismounted dragoons. By 11:00 Tallard, Maximilian, and Marsin were in place. Many of the Allied generals were hesitant to attack such a strong position. The Earl of Orkney later said that, \"had I been asked to give my opinion, I had been against it.\"",
"title": "Battle"
},
{
"paragraph_id": 29,
"text": "Prince Eugene was expected to be in position by 11:00, but due to the difficult terrain and enemy fire, progress was slow. Cutts' column – which by 10:00 had expelled the enemy from two water mills on the Nebel – had already deployed by the river against Blenheim, enduring over the next three hours severe fire from a six-gun heavy battery posted near the village. The rest of Marlborough's army, waiting in their ranks on the forward slope, were also forced to bear the cannonade from the French artillery, suffering 2,000 casualties before the attack could even start. Meanwhile, engineers repaired a stone bridge across the Nebel, and constructed five additional bridges or causeways across the marsh between Blenheim and Oberglauheim. Marlborough's anxiety was finally allayed when, just past noon, Colonel William Cadogan reported that Prince Eugene's Prussian and Danish infantry were in place – the order for the general advance was given. At 13:00, Cutts was ordered to attack the village of Blenheim whilst Prince Eugene was requested to assault Lutzingen on the Allied right flank.",
"title": "Battle"
},
{
"paragraph_id": 30,
"text": "Cutts ordered Rowe's brigade to attack. The English infantry rose from the edge of the Nebel, and silently marched towards Blenheim, a distance of some 150 m (160 yd). James Ferguson's Scottish brigade supported Rowe's left, and moved towards the barricades between the village and the river, defended by Hautefeuille's dragoons. As the range closed to within 30 m (30 yd), the French fired a deadly volley. Rowe had ordered that there should be no firing from his men until he struck his sword upon the palisades, but as he stepped forward to give the signal, he fell mortally wounded. The survivors of the leading companies closed up the gaps in their ranks and rushed forward. Small parties penetrated the defences, but repeated French volleys forced the English back and inflicted heavy casualties. As the attack faltered, eight squadrons of elite Gens d'Armes, commanded by the veteran Swiss officer, Béat Jacques II de Zurlauben [fr], fell on the English troops, cutting at the exposed flank of Rowe's own regiment. Wilkes' Hessian brigade, nearby in the marshy grass at the water's edge, stood firm and repulsed the Gens d'Armes with steady fire, enabling the English and Hessians to re-order and launch another attack.",
"title": "Battle"
},
{
"paragraph_id": 31,
"text": "Although the Allies were again repulsed, these persistent attacks on Blenheim eventually bore fruit, panicking Clérambault into making the worst French error of the day. Without consulting Tallard, Clérambault ordered his reserve battalions into the village, upsetting the balance of the French position and nullifying the French numerical superiority. \"The men were so crowded in upon one another\", wrote Mérode-Westerloo, \"that they couldn't even fire – let alone receive or carry out any orders\". Marlborough, spotting this error, now countermanded Cutts' intention to launch a third attack, and ordered him simply to contain the enemy within Blenheim; no more than 5,000 Allied soldiers were able to pen in twice the number of French infantry and dragoons.",
"title": "Battle"
},
{
"paragraph_id": 32,
"text": "... Prince Eugene and the Imperial troops had been repulsed three times – driven right back to the woods – and had taken a real drubbing. – Mérode-Westerloo.",
"title": "Battle"
},
{
"paragraph_id": 33,
"text": "On the Allied right, Prince Eugene's Prussian and Danish forces were desperately fighting the numerically superior forces of Maximilian and Marsin. Leopold I, Prince of Anhalt-Dessau led forward four brigades across the Nebel to assault the well-fortified position of Lutzingen. Here, the Nebel was less of an obstacle, but the great battery positioned on the edge of the village enjoyed a good field of fire across the open ground stretching to the hamlet of Schwennenbach. As soon as the infantry crossed the stream, they were struck by Maffei's infantry, and salvoes from the Bavarian guns positioned both in front of the village and in enfilade on the wood-line to the right. Despite heavy casualties the Prussians attempted to storm the great battery, whilst the Danes, under Count Jobst von Scholten, attempted to drive the French infantry out of the copses beyond the village.",
"title": "Battle"
},
{
"paragraph_id": 34,
"text": "With the infantry heavily engaged, Prince Eugene's cavalry picked its way across the Nebel. After an initial success, his first line of cavalry, under the Imperial General of Horse, Prince Maximilian of Hanover, were pressed by the second line of Marsin's cavalry and forced back across the Nebel in confusion. The exhausted French were unable to follow up their advantage, and both cavalry forces tried to regroup and reorder their ranks. Without cavalry support, and threatened with envelopment, the Prussian and Danish infantry were in turn forced to pull back across the Nebel. Panic gripped some of Prince Eugene's troops as they crossed the stream. Ten infantry colours were lost to the Bavarians, and hundreds of prisoners taken; it was only through the leadership of Prince Eugene and the Prince Maximilian of Hanover that the Imperial infantry was prevented from abandoning the field.",
"title": "Battle"
},
{
"paragraph_id": 35,
"text": "After rallying his troops near Schwennenbach – well beyond their starting point – Prince Eugene prepared to launch a second attack, led by the second-line squadrons under the Duke of Württemberg-Teck. Yet again they were caught in the murderous crossfire from the artillery in Lutzingen and Oberglauheim, and were once again thrown back in disarray. The French and Bavarians were almost as disordered as their opponents, and they too were in need of inspiration from their commander, Maximilian, who was seen \" ... riding up and down, and inspiring his men with fresh courage.\" Anhalt-Dessau's Danish and Prussian infantry attacked a second time but could not sustain the advance without proper support. Once again they fell back across the stream.",
"title": "Battle"
},
{
"paragraph_id": 36,
"text": "Whilst these events around Blenheim and Lutzingen were taking place, Marlborough was preparing to cross the Nebel. Hulsen's brigade of Hessians and Hanoverians and the earl of Orkney's British brigade advanced across the stream and were supported by dismounted British dragoons and ten British cavalry squadrons. This covering force allowed Charles Churchill's Dutch, British and German infantry and further cavalry units to advance and form up on the plain beyond. Marlborough arranged his infantry battalions in a novel manner with gaps sufficient to allow the cavalry to move freely between them. Marlborough ordered the formation forward. Once again Zurlauben's Gens d'Armes charged, looking to rout Henry Lumley's English cavalry who linked Cutts' column facing Blenheim with Churchill's infantry. As the elite French cavalry attacked, they were faced by five English squadrons under Colonel Francis Palmes. To the consternation of the French, the Gens d'Armes were pushed back in confusion and pursued well beyond the Maulweyer stream that flows through Blenheim. \"What? Is it possible?\" exclaimed Maximilian, \"the gentlemen of France fleeing?\" Palmes attempted to follow up his success but was repulsed by other French cavalry and musket fire from the edge of Blenheim.",
"title": "Battle"
},
{
"paragraph_id": 37,
"text": "Nevertheless, Tallard was alarmed by the repulse of the Gens d'Armes and urgently rode across the field to ask Marsin for reinforcements; but on the basis of being hard pressed by Prince Eugene – whose second attack was in full flood – Marsin refused. As Tallard consulted with Marsin, more of his infantry were taken into Blenheim by Clérambault. Fatally, Tallard, although aware of the situation, did nothing to rectify it, leaving him with just the nine battalions of infantry near the Höchstädt road to oppose the massed enemy ranks in the centre. Zurlauben tried several more times to disrupt the Allies forming on Tallard's side of the stream. His front-line cavalry darted forward down the gentle slope towards the Nebel, but the attacks lacked co-ordination, and the Allied infantry's steady volleys disconcerted the French horsemen. During these skirmishes Zurlauben fell mortally wounded; he died two days later. At this stage the time was just after 15:00.",
"title": "Battle"
},
{
"paragraph_id": 38,
"text": "The Danish cavalry, under Carl Rudolf, Duke of Württemberg-Neuenstadt, had made slow work of crossing the Nebel near Oberglauheim. Harassed by Marsin's infantry near the village, the Danes were driven back across the stream. Count Horn's Dutch infantry managed to push the French back from the water's edge, but it was apparent that before Marlborough could launch his main effort against Tallard, Oberglauheim would have to be secured.",
"title": "Battle"
},
{
"paragraph_id": 39,
"text": "Count Horn directed Anton Günther, Fürst von Holstein-Beck to take the village, but his two Dutch brigades were cut down by the French and Irish troops, capturing and badly wounding Holstein-Beck during the action. The battle was now in the balance. If Holstein-Beck's Dutch column were destroyed, the Allied army would be split in two: Prince Eugene's wing would be isolated from Marlborough's, passing the initiative to the Franco-Bavarian forces. Seeing the opportunity, Marsin ordered his cavalry to change from facing Prince Eugene, and turn towards their right and the open flank of Churchill's infantry drawn up in front of Unterglau. Marlborough, who had crossed the Nebel on a makeshift bridge to take personal control, ordered Hulsen's Hanoverian battalions to support the Dutch infantry. A nine-gun artillery battery and a Dutch cavalry brigade under Averock were also called forward, but the cavalry soon came under pressure from Marsin's more numerous squadrons.",
"title": "Battle"
},
{
"paragraph_id": 40,
"text": "Marlborough now requested Prince Eugene to release Count Hendrick Fugger and his Imperial Cuirassier brigade to help repel the French cavalry thrust. Despite his own difficulties, Prince Eugene at once complied. Although the Nebel stream lay between Fugger's and Marsin's squadrons, the French were forced to change front to meet this new threat, thus preventing Marsin from striking at Marlborough's infantry. Fugger's cuirassiers charged and, striking at a favourable angle, threw back Marsin's squadrons in disorder. With support from Blood's batteries, the Hessian, Hanoverian and Dutch infantry – now commanded by Count Berensdorf – succeeded in pushing the French and Irish infantry back into Oberglauheim so that they could not again threaten Churchill's flank as he moved against Tallard. The French commander in the village, de Blainville, was numbered among the heavy casualties.",
"title": "Battle"
},
{
"paragraph_id": 41,
"text": "The [French] foot remained in the best order I ever saw, till they were cut to pieces almost in rank and file. – Lord Orkney.",
"title": "Battle"
},
{
"paragraph_id": 42,
"text": "By 16:00, with large parts of the Franco-Bavarian army besieged in Blenheim and Oberglau, the Allied centre of 81 squadrons (nine squadrons had been transferred from Cutts' column) supported by 18 battalions was firmly planted amidst the French line of 64 squadrons and nine battalions of raw recruits. There was now a pause in the battle: Marlborough wanted to attack simultaneously along the whole front, and Prince Eugene, after his second repulse, needed time to reorganise.",
"title": "Battle"
},
{
"paragraph_id": 43,
"text": "By just after 17:00 all was ready along the Allied front. Marlborough's two lines of cavalry had now moved to the front of his line of battle, with the two supporting lines of infantry behind them. Mérode-Westerloo attempted to extricate some French infantry crowded into Blenheim, but Clérambault ordered the troops back into the village. The French cavalry exerted themselves once more against the Allied first line – Lumley's English and Scots on the Allied left, and Reinhard Vincent Graf von Hompesch's Dutch and German squadrons on the Allied right. Tallard's squadrons, which lacked infantry support and were tired, managed to push the Allied first line back to their infantry support. With the battle still not won, Marlborough had to rebuke one of his cavalry officers who was attempting to leave the field – \"Sir, you are under a mistake, the enemy lies that way ...\" Marlborough commanded the second Allied line, under Cuno Josua von Bülow [de] and Friedrich Johann von Bothmer [da], to move forward, and, driving through the centre, the Allies finally routed Tallard's tired cavalry. The Prussian Life Dragoons' Colonel, Ludwig von Blumenthal, and his second in command, Lieutenant Colonel von Hacke, fell next to each other, but the charge succeeded. With their cavalry in headlong flight, the remaining nine French infantry battalions fought with desperate valour, trying to form a square, but they were overwhelmed by Blood's close-range artillery and platoon fire. Mérode-Westerloo later wrote – \"[They] died to a man where they stood, stationed right out in the open plain – supported by nobody.\"",
"title": "Battle"
},
{
"paragraph_id": 44,
"text": "The majority of Tallard's retreating troops headed for Höchstädt but most did not make the safety of the town, plunging instead into the Danube where over 3,000 French horsemen drowned; others were cut down by the pursuing Allied cavalry. The Marquis de Gruignan attempted a counter-attack, but he was brushed aside by the triumphant Allies. After a final rally behind his camp's tents, shouting entreaties to stand and fight, Tallard was caught up in the rout and swept towards Sonderheim. Surrounded by a squadron of Hessian troops, Tallard surrendered to Lieutenant Colonel de Boinenburg, the Prince of Hesse-Kassel's aide-de-camp, and was sent under escort to Marlborough. Marlborough welcomed the French commander – \"I am very sorry that such a cruel misfortune should have fallen upon a soldier for whom I have the highest regard.\"",
"title": "Battle"
},
{
"paragraph_id": 45,
"text": "Meanwhile, the Allies had once again attacked the Bavarian stronghold at Lutzingen. Prince Eugene became exasperated with the performance of his Imperial cavalry whose third attack had failed: he had already shot two of his troopers to prevent a general flight. Then, declaring in disgust that he wished to \"fight among brave men and not among cowards\", Prince Eugene went into the attack with the Prussian and Danish infantry, as did Leopold I, waving a regimental colour to inspire his troops. This time the Prussians were able to storm the great Bavarian battery, and overwhelm the guns' crews. Beyond the village, Scholten's Danes defeated the French infantry in a desperate hand-to-hand bayonet struggle. When they saw that the centre had broken, Maximilian and Marsin decided the battle was lost; like the remnants of Tallard's army, they fled the battlefield, albeit in better order than Tallard's men. Attempts to organise an Allied force to prevent Marsin's withdrawal failed owing to the exhaustion of the cavalry, and the growing confusion in the field.",
"title": "Battle"
},
{
"paragraph_id": 46,
"text": "... our men fought in and through the fire ... until many on both sides were burned to death. – Private Deane, 1st Regiment Foot Guards.",
"title": "Battle"
},
{
"paragraph_id": 47,
"text": "Marlborough now turned his attention from the fleeing enemy to direct Churchill to detach more infantry to storm Blenheim. Orkney's infantry, Hamilton's English brigade and St Paul's Hanoverians moved across the trampled wheat to the cottages. Fierce hand-to-hand fighting gradually forced the French towards the village centre, in and around the walled churchyard which had been prepared for defence. Lord John Hay and Charles Ross's dismounted dragoons were also sent, but suffered under a counter-charge delivered by the regiments of Artois and Provence under command of Colonel de la Silvière. Colonel Belville's Hanoverians were fed into the battle to steady the resolve of the dragoons, who attacked again. The Allied progress was slow and hard, and like the defenders, they suffered many casualties.",
"title": "Battle"
},
{
"paragraph_id": 48,
"text": "Many of the cottages were now burning, obscuring the field of fire and driving the defenders out of their positions. Hearing the din of battle in Blenheim, Tallard sent a message to Marlborough offering to order the garrison to withdraw from the field. \"Inform Monsieur Tallard\", replied Marlborough, \"that, in the position in which he is now, he has no command.\" Nevertheless, as dusk came the Allied commander was anxious for a quick conclusion. The French infantry fought tenaciously to hold on to their position in Blenheim, but their commander was nowhere to be found. By now Blenheim was under assault from every side by three British generals: Cutts, Churchill, and Orkney. The French had repulsed every attack, but many had seen what had happened on the plain: their army was routed and they were cut off. Orkney, attacking from the rear, now tried a different tactic – \"... it came into my head to beat parley\", he later wrote, \"which they accepted of and immediately their Brigadier de Nouville capitulated with me to be prisoner at discretion and lay down their arms.\" Threatened by Allied guns, other units followed their example. It was not until 21:00 that the Marquis de Blanzac, who had taken charge in Clérambault's absence, reluctantly accepted the inevitability of defeat, and some 10,000 of France's best infantry had laid down their arms.",
"title": "Battle"
},
{
"paragraph_id": 49,
"text": "During these events Marlborough was still in the saddle organising the pursuit of the broken enemy. Pausing for a moment, he scribbled on the back of an old tavern bill a note addressed to his wife, Sarah: \"I have no time to say more but to beg you will give my duty to the Queen, and let her know her army has had a glorious victory.\"",
"title": "Battle"
},
{
"paragraph_id": 50,
"text": "French losses were immense, with over 27,000 killed, wounded and captured. Moreover the myth of French invincibility had been destroyed, and King Louis's hopes of a victorious early peace were over. Mérode-Westerloo summarised the case against Tallard's army:",
"title": "Aftermath"
},
{
"paragraph_id": 51,
"text": "The French lost this battle for a wide variety of reasons. For one thing they had too good an opinion of their own ability ... Another point was their faulty field dispositions, and in addition there was rampant indiscipline and inexperience displayed ... It took all these faults to lose so celebrated a battle.",
"title": "Aftermath"
},
{
"paragraph_id": 52,
"text": "It was a hard-fought contest: Prince Eugene observed that \"I have not a squadron or battalion which did not charge four times at least.\"",
"title": "Aftermath"
},
{
"paragraph_id": 53,
"text": "Although the war dragged on for years, the Battle of Blenheim was probably its most decisive victory; Marlborough and Prince Eugene had saved the Habsburg Empire and thereby preserved the Grand Alliance from collapse. Munich, Augsburg, Ingolstadt, Ulm and the remaining territory of Bavaria soon fell to the Allies. By the Treaty of Ilbersheim, signed on 7 November, Bavaria was placed under Austrian military rule, allowing the Habsburgs to use its resources for the rest of the conflict.",
"title": "Aftermath"
},
{
"paragraph_id": 54,
"text": "The remnants of Maximilian and Marsin's wing limped back to Strasbourg, losing another 7,000 men through desertion. Despite being offered the chance to remain as ruler of Bavaria, under the strict terms of an alliance with Austria, Maximilian left his country and family in order to continue the war against the Allies from the Spanish Netherlands where he still held the post of governor-general. Tallard – who, unlike his subordinates, was not ransomed or exchanged – was taken to England and imprisoned in Nottingham until his release in 1711.",
"title": "Aftermath"
},
{
"paragraph_id": 55,
"text": "The 1704 campaign lasted longer than usual, for the Allies sought to extract the maximum advantage. Realising that France was too powerful to be forced to make peace by a single victory, Prince Eugene, Marlborough and Prince Louis met to plan their next moves. For the following year Marlborough proposed a campaign along the valley of the Moselle to carry the war deep into France. This required the capture of the major fortress of Landau which guarded the Rhine, and the towns of Trier and Trarbach on the Moselle itself. Trier was taken on 27 October and Landau fell on 23 November to Prince Louis and Prince Eugene; with the fall of Trarbach on 20 December, the campaign season for 1704 came to an end. The planned offensive never materialised as the Grand Alliance's army had to depart the Moselle to defend Liège from a French counteroffensive. The war raged on for another decade.",
"title": "Aftermath"
},
{
"paragraph_id": 56,
"text": "Marlborough returned to England on 14 December (O.S) to the acclamation of Queen Anne and the country. In the first days of January, the 110 cavalry standards and 128 infantry colours that had been captured during the battle were borne in procession to Westminster Hall. In February 1705, Queen Anne, who had made Marlborough a duke in 1702, granted him the Park of Woodstock Palace and promised a sum of £240,000 to build a suitable house as a gift from a grateful Crown in recognition of his victory; this resulted in the construction of Blenheim Palace. The British historian Sir Edward Shepherd Creasy considered Blenheim one of the pivotal battles in history, writing: \"Had it not been for Blenheim, all Europe might at this day suffer under the effect of French conquests resembling those of Alexander in extent and those of the Romans in durability.\" The military historian John A. Lynn considers this claim unjustified, for King Louis never had such an objective; the campaign in Bavaria was intended only to bring a favourable peace settlement and not domination over Europe.",
"title": "Aftermath"
},
{
"paragraph_id": 57,
"text": "Lake poet Robert Southey criticised the Battle of Blenheim in his anti-war poem \"After Blenheim\", but later praised the victory as \"the greatest victory which had ever done honour to British arms\".",
"title": "Aftermath"
}
] | The Battle of Blenheim, fought on 13 August [O.S. 2 August] 1704, was a major battle of the War of the Spanish Succession. The overwhelming Allied victory ensured the safety of Vienna from the Franco-Bavarian army, thus preventing the collapse of the reconstituted Grand Alliance. Louis XIV of France sought to knock the Holy Roman Emperor, Leopold, out of the war by seizing Vienna, the Habsburg capital, and to gain a favourable peace settlement. The dangers to Vienna were considerable: Maximilian II Emanuel, Elector of Bavaria, and Marshal Ferdinand de Marsin's forces in Bavaria threatened from the west, and Marshal Louis Joseph de Bourbon, duc de Vendôme's large army in northern Italy posed a serious danger with a potential offensive through the Brenner Pass. Vienna was also under pressure from Rákóczi's Hungarian revolt from its eastern approaches. Realising the danger, the Duke of Marlborough resolved to alleviate the peril to Vienna by marching his forces south from Bedburg to help maintain Emperor Leopold within the Grand Alliance. A combination of deception and skilled administration – designed to conceal his true destination from friend and foe alike – enabled Marlborough to march 400 km (250 mi) unhindered from the Low Countries to the River Danube in five weeks. After securing Donauwörth on the Danube, Marlborough sought to engage Maximilian's and Marsin's army before Marshal Camille d'Hostun, duc de Tallard, could bring reinforcements through the Black Forest. The Franco-Bavarian commanders proved reluctant to fight until their numbers were deemed sufficient, and Marlborough failed in his attempts to force an engagement. When Tallard arrived to bolster Maximilian's army, and Prince Eugene of Savoy arrived with reinforcements for the Allies, the two armies finally met on the banks of the Danube in and around the small village of Blindheim, from which the English "Blenheim" is derived. Blenheim was one of the battles that altered the course of the war, which until then was favouring the French and Spanish Bourbons. Although the battle did not win the war, it prevented a potentially devastating loss for the Grand Alliance and shifted the war's momentum, ending French plans of knocking Emperor Leopold out of the war. The French suffered catastrophic casualties in the battle, including their commander-in-chief, Tallard, who was taken captive to England. Before the 1704 campaign ended, the Allies had taken Landau, and the towns of Trier and Trarbach on the Moselle in preparation for the following year's campaign into France itself. This offensive never materialised, for the Grand Alliance's army had to depart the Moselle to defend Liège from a French counter-offensive. The war continued for another decade before ending in 1714. | 2001-08-17T01:52:59Z | 2023-12-28T01:45:42Z | [
"Template:Snd",
"Template:Cite book",
"Template:Notelist",
"Template:Authority control",
"Template:Use British English",
"Template:Efn",
"Template:Blockquote",
"Template:Clarify",
"Template:Wikisource-inline",
"Template:Featured article",
"Template:Infobox military conflict",
"Template:Lang-nl",
"Template:Cite encyclopedia",
"Template:Quote",
"Template:Cite journal",
"Template:Sfn",
"Template:OldStyleDate",
"Template:Ill",
"Template:Lang-de",
"Template:Reflist",
"Template:Refbegin",
"Template:Refend",
"Template:Cvt",
"Template:Further",
"Template:Short description",
"Template:Use dmy dates",
"Template:Lang-fr"
] | https://en.wikipedia.org/wiki/Battle_of_Blenheim |
4,050 | Battle of Ramillies | The Battle of Ramillies (/ˈræmɪliːz/), fought on 23 May 1706, was a battle of the War of the Spanish Succession. For the Grand Alliance – Austria, England, and the Dutch Republic – the battle had followed an indecisive campaign against the Bourbon armies of King Louis XIV of France in 1705. Although the Allies had captured Barcelona that year, they had been forced to abandon their campaign on the Moselle, had stalled in the Spanish Netherlands and suffered defeat in northern Italy. Yet despite his opponents' setbacks, Louis XIV wanted peace, but on reasonable terms. Because of this, as well as to maintain their momentum, the French and their allies took the offensive in 1706.
The campaign began well for Louis XIV's generals: in Italy Marshal Vendôme defeated the Austrians at the Battle of Calcinato in April, while in Alsace Marshal Villars forced the Margrave of Baden back across the Rhine. Encouraged by these early gains, Louis XIV urged Marshal Villeroi to go over to the offensive in the Spanish Netherlands and, with victory, gain a 'fair' peace. Accordingly, the French Marshal set off from Leuven (Louvain) at the head of 60,000 men and marched towards Tienen (Tirlemont), as if to threaten Zoutleeuw (Léau). Also determined to fight a major engagement, the Duke of Marlborough, commander-in-chief of Anglo-Dutch forces, assembled his army – some 62,000 men – near Maastricht, and marched past Zoutleeuw. With both sides seeking battle, they soon encountered each other on the dry ground between the rivers Mehaigne and Petite Gette, close to the small village of Ramillies.
In less than four hours Marlborough's Dutch, English, and Danish forces overwhelmed Villeroi's and Max Emanuel's Franco-Spanish-Bavarian army. The Duke's subtle moves and changes in emphasis during the battle – something his opponents failed to realise until it was too late – caught the French in a tactical vice. With their foe broken and routed, the Allies were able to fully exploit their victory. Town after town fell, including Brussels, Bruges and Antwerp; by the end of the campaign Villeroi's army had been driven from most of the Spanish Netherlands. With Prince Eugene's subsequent success at the Battle of Turin in northern Italy, the Allies had imposed the greatest loss of territory and resources that Louis XIV would suffer during the war. Thus, the year 1706 proved, for the Allies, to be an annus mirabilis.
After their disastrous defeat at Blenheim in 1704, the French found some respite the following year. The Duke of Marlborough had intended the 1705 campaign – an invasion of France through the Moselle valley – to complete the work of Blenheim and persuade King Louis XIV to make peace, but the plan had been thwarted by friend and foe alike. The reluctance of his Dutch allies to see their frontiers denuded of troops for another gamble in Germany had denied Marlborough the initiative, but of far greater importance was the Margrave of Baden's pronouncement that he could not join the Duke in strength for the coming offensive. This was in part due to the sudden switching of troops from the Rhine to reinforce Prince Eugene in Italy and in part due to the deterioration of Baden's health brought on by the re-opening of a severe foot wound he had received at the storming of the Schellenberg the previous year. Marlborough had to cope with the death of Emperor Leopold I in May and the accession of Joseph I, which unavoidably complicated matters for the Grand Alliance.
The resilience of the French King and the efforts of his generals also added to Marlborough's problems. Marshal Villeroi, exerting considerable pressure on the Dutch commander, Count Overkirk, along the Meuse, took Huy on 10 June before pressing on towards Liège. With Marshal Villars sitting strong on the Moselle, the Allied commander – whose supplies had by now become very short – was forced to call off his campaign on 16 June. "What a disgrace for Marlborough," exulted Villeroi, "to have made false movements without any result!" With Marlborough's departure north, the French transferred troops from the Moselle valley to reinforce Villeroi in Flanders, while Villars marched off to the Rhine.
The Anglo-Dutch forces gained minor compensation for the failed Moselle campaign with the success at Elixheim and the crossing of the Lines of Brabant in the Spanish Netherlands (Huy was also retaken on 11 July), but a chance to bring the French to a decisive engagement eluded Marlborough. The year 1705 proved almost entirely barren for the Duke, whose military disappointments were only partly compensated by efforts on the diplomatic front where, at the courts of Düsseldorf, Frankfurt, Vienna, Berlin and Hanover, Marlborough sought to bolster support for the Grand Alliance and extract promises of prompt assistance for the following year's campaign.
On 11 January 1706 Marlborough finally reached London at the end of his diplomatic tour, but he had already been planning his strategy for the coming season. The first option (although it is debatable to what extent the Duke was committed to such an enterprise) was a plan to transfer his forces from the Spanish Netherlands to northern Italy; once there, he intended linking up with Prince Eugene in order to defeat the French and safeguard Savoy from being overrun. Savoy would then serve as a gateway into France by way of the mountain passes or an invasion with naval support along the Mediterranean coast via Nice and Toulon, in conjunction with redoubled Allied efforts in Spain. It seems that the Duke's favoured scheme was to return to the Moselle valley (where Marshal Marsin had recently taken command of French forces) and once more attempt an advance into the heart of France. But these decisions soon became academic. Shortly after Marlborough landed in the Dutch Republic on 14 April, news arrived of major Allied setbacks in the wider war.
Determined to show the Grand Alliance that France was still resolute, Louis XIV prepared to launch a double surprise in Alsace and northern Italy. On the latter front Marshal Vendôme defeated the Imperial army at Calcinato on 19 April, pushing the Imperialists back in confusion (French forces were now in a position to prepare for the long-anticipated siege of Turin). In Alsace, Marshal Villars took Baden by surprise and captured Haguenau, driving him back across the Rhine in some disorder, thus creating a threat to Landau. With these reverses, the Dutch refused to contemplate Marlborough's ambitious march to Italy or any plan that denuded their borders of the Duke and their army. In the interest of coalition harmony, Marlborough prepared to campaign in the Low Countries.
The Duke left The Hague on 9 May. "God knows I go with a heavy heart," he wrote six days later to his friend and political ally in England, Lord Godolphin, "for I have no hope of doing anything considerable, unless the French do what I am very confident they will not ..." – in other words, court battle. On 17 May the Duke concentrated his Dutch and English troops at Tongeren, near Maastricht. The Hanoverians, Hessians and Danes, despite earlier undertakings, found, or invented, pressing reasons for withholding their support. Marlborough wrote an appeal to the Duke of Württemberg, the commander of the Danish contingent: "I send you this express to request your Highness to bring forward by a double march your cavalry so as to join us at the earliest moment ..." Additionally, the King in Prussia, Frederick I, had kept his troops in quarters behind the Rhine while his personal disputes with Vienna and the States General at The Hague remained unresolved. Nevertheless, the Duke could think of no circumstances in which the French would leave their strong positions and attack his army, even if Villeroi was first reinforced by substantial transfers from Marsin's command. But in this he had miscalculated. Although Louis XIV wanted peace, he wanted it on reasonable terms; for that, he needed victory in the field and to convince the Allies that his resources were by no means exhausted.
Following the successes in Italy and along the Rhine, Louis XIV was now hopeful of similar results in Flanders. Far from standing on the defensive therefore – and unbeknown to Marlborough – Louis XIV was persistently goading his marshal into action. "[Villeroi] began to imagine," wrote St Simon, "that the King doubted his courage, and resolved to stake all at once in an effort to vindicate himself." Accordingly, on 18 May, Villeroi set off from Leuven at the head of 70 battalions, 132 squadrons and 62 cannon – comprising an overall force of some 60,000 troops – and crossed the river Dyle to seek battle with the enemy. Spurred on by his growing confidence in his ability to out-general his opponent, and by Versailles' determination to avenge Blenheim, Villeroi and his generals anticipated success.
Neither opponent expected the clash at the exact moment or place where it occurred. The French moved first to Tienen (as if to threaten Zoutleeuw, abandoned by the French in October 1705) before turning southwards, heading for Jodoigne – this line of march took Villeroi's army towards the narrow aperture of dry ground between the rivers Mehaigne and Petite Gette close to the small villages of Ramillies and Taviers; but neither commander quite appreciated how far his opponent had travelled. Villeroi still believed (on 22 May) the Allies were a full day's march away when in fact they had camped near Corswaren waiting for the Danish squadrons to catch up; for his part, Marlborough deemed Villeroi still at Jodoigne when in reality he was now approaching the plateau of Mont St. André with the intention of pitching camp near Ramillies. However, the Prussian infantry was not there. Marlborough wrote to Lord Raby, the English resident at Berlin: "If it should please God to give us victory over the enemy, the Allies will be little obliged to the King [Frederick] for the success."
The following day, at 01:00, Marlborough dispatched Cadogan, his Quartermaster-General, with an advanced guard to reconnoitre the same dry ground that Villeroi's army was now heading toward, country that was well known to the Duke from previous campaigns. Two hours later the Duke followed with the main body: 74 battalions, 123 squadrons, 90 pieces of artillery and 20 mortars, totalling 62,000 troops. About 08:00, after Cadogan had just passed Merdorp, his force made brief contact with a party of French hussars gathering forage on the edge of the plateau of Jandrenouille. After a brief exchange of shots the French retired and Cadogan's dragoons pressed forward. When the mist briefly lifted, Cadogan discovered the smartly ordered lines of Villeroi's advance guard some 6 kilometres (4 miles) off; a galloper hastened back to warn Marlborough. Two hours later the Duke, accompanied by the Dutch field commander Field Marshal Overkirk, General Daniël van Dopff, and the Allied staff, rode up to Cadogan, where on the horizon to the westward he could discern the massed ranks of the French army deploying for battle along the 6 km (4 mi) front. Marlborough later told Bishop Burnet: "The French army looked the best of any he had ever seen."
The battlefield of Ramillies is very similar to that of Blenheim, for here too there is an immense area of arable land unimpeded by woods or hedges. Villeroi's right rested on the villages of Franquenée and Taviers, with the river Mehaigne protecting his flank. A large open plain, about 2 km (1 mi) wide, lay between Taviers and Ramillies, but unlike Blenheim, there was no stream to hinder the cavalry. His centre was secured by Ramillies itself, lying on a slight eminence which gave distant views to the north and east. The French left flank was protected by broken country, and by a stream, the Petite Gheete, which runs deep between steep and slippery slopes. On the French side of the stream the ground rises to Offus, the village which, together with Autre-Eglise farther north, anchored Villeroi's left flank. To the west of the Petite Gheete rises the plateau of Mont St. André; a second plain, the plateau of Jandrenouille – upon which the Anglo-Dutch army amassed – rises to the east.
At 11:00 the Duke ordered the army to take standard battle formation. On the far right, towards Foulz, the British battalions and squadrons took up their posts in a double line near the Jeuche stream. The centre was formed by the mass of Dutch, German, Protestant Swiss and Scottish infantry – perhaps 30,000 men – facing Offus and Ramillies. Also facing Ramillies, Marlborough placed a powerful battery of thirty 24-pounders, dragged into position by a team of oxen; further batteries were positioned overlooking the Petite Gheete. On their left, on the broad plain between Taviers and Ramillies – and where Marlborough thought the decisive encounter must take place – Overkirk drew up the 69 squadrons of the Dutch and Danish horse, supported by 19 battalions of Dutch infantry and two artillery pieces.
Meanwhile, Villeroi deployed his forces. In Taviers on his right, he placed two battalions of the Greder Suisse Régiment, with a smaller force forward in Franquenée; the whole position was protected by the boggy ground of the river Mehaigne, thus preventing an Allied flanking movement. In the open country between Taviers and Ramillies, he placed 82 squadrons under General de Guiscard supported by several interleaved brigades of French, Swiss and Bavarian infantry. Along the Ramillies–Offus–Autre-Eglise ridge-line, Villeroi positioned Walloon and Bavarian infantry, supported by the Elector of Bavaria's 50 squadrons of Bavarian and Walloon cavalry placed behind on the plateau of Mont St. André. Ramillies, Offus and Autre-Eglise were all packed with troops and put in a state of defence, with alleys barricaded and walls loop-holed for muskets. Villeroi also positioned powerful batteries near Ramillies. These guns (some of which were of the three-barrelled kind first seen at Elixheim the previous year) enjoyed good arcs of fire, able to fully cover the approaches of the plateau of Jandrenouille over which the Allied infantry would have to pass.
Marlborough, however, noticed several important weaknesses in the French dispositions. Tactically, it was imperative for Villeroi to occupy Taviers on his right and Autre-Eglise on his left, but by adopting this posture he had been forced to over-extend his forces. Moreover, this disposition – concave in relation to the Allied army – gave Marlborough the opportunity to form a more compact line, drawn up in a shorter front between the 'horns' of the French crescent; when the Allied blow came it would be more concentrated and carry more weight. Additionally, the Duke's disposition facilitated the transfer of troops across his front far more easily than his foe, a tactical advantage that would grow in importance as the events of the afternoon unfolded. Although Villeroi had the option of enveloping the flanks of the Allied army as they deployed on the plateau of Jandrenouille – threatening to encircle their army – the Duke correctly gauged that the characteristically cautious French commander was intent on a defensive battle along the ridge-line.
At 13:00 the batteries went into action; a little later two Allied columns set out from the extremities of their line and attacked the flanks of the Franco-Bavarian army. To the south, 4 battalions, under the command of Colonel Wertmüller, came forward with their two field guns to seize the hamlet of Franquenée. The small Swiss garrison in the village, shaken by the sudden onslaught and unsupported by the battalions to their rear, were soon compelled back towards the village of Taviers. Taviers was of particular importance to the Franco-Bavarian position: it protected the otherwise unsupported flank of General de Guiscard's cavalry on the open plain, while at the same time, it allowed the French infantry to pose a threat to the flanks of the Dutch and Danish squadrons as they came forward into position. But hardly had the retreating Swiss rejoined their comrades in that village when the Dutch Guards renewed their attack. The fighting amongst the alleys and cottages soon deteriorated into a fierce bayonet and clubbing mêlée, but the superiority of Dutch firepower soon told. The accomplished French officer, Colonel de la Colonie, standing on the plain nearby remembered: "This village was the opening of the engagement, and the fighting there was almost as murderous as the rest of the battle put together." By about 15:00 the Swiss had been pushed out of the village into the marshes beyond.
Villeroi's right flank fell into chaos and was now open and vulnerable. Alerted to the situation, de Guiscard ordered an immediate attack with 14 squadrons of French dragoons currently stationed in the rear. Two other battalions of the Greder Suisse Régiment were also sent, but the attack was poorly co-ordinated and consequently went in piecemeal. The Anglo-Dutch commanders now sent dismounted Dutch dragoons into Taviers, which, together with the Guards and their field guns, poured concentrated musketry- and canister-fire into the advancing French troops. Colonel d'Aubigni, leading his regiment, fell mortally wounded.
As the French ranks wavered, the leading squadrons of Württemberg's Danish horse – now unhampered by enemy fire from either village – were also sent into the attack and fell upon the exposed flank of the Franco-Swiss infantry and dragoons. De la Colonie, with his Grenadiers Rouge regiment, together with the Cologne Guards who were brigaded with them, was now ordered forward from his post south of Ramillies to support the faltering counter-attack on the village. But on his arrival, all was chaos: "Scarcely had my troops got over when the dragoons and Swiss who had preceded us, came tumbling down upon my battalions in full flight ... My own fellows turned about and fled along with them." De La Colonie managed to rally some of his grenadiers, together with the remnants of the French dragoons and Greder Suisse battalions, but it was an entirely peripheral operation, offering only fragile support for Villeroi's right flank.
While the attack on Taviers went on, the Earl of Orkney launched his first line of English across the Petite Gheete in a determined attack against the barricaded villages of Offus and Autre-Eglise on the Allied right. Villeroi, posting himself near Offus, watched anxiously the redcoats' advance, mindful of the counsel he had received on 6 May from Louis XIV: "Have particular care to that part of the line which will endure the first shock of the English troops." Heeding this advice, the French commander began to transfer battalions from his centre to reinforce the left, drawing more foot from the already weakened right to replace them.
As the English battalions descended the gentle slope of the Petite Gheete valley, struggling through the boggy stream, they were met by Major General de la Guiche's disciplined Walloon infantry sent forward from around Offus. After concentrated volleys, exacting heavy casualties on the redcoats, the Walloons reformed back to the ridgeline in good order. The English took some time to reform their ranks on the dry ground beyond the stream and press on up the slope towards the cottages and barricades on the ridge. The vigour of the English assault, however, was such that they threatened to break through the line of the villages and out onto the open plateau of Mont St André beyond. This was potentially dangerous for the Allied infantry who would then be at the mercy of the Elector's Bavarian and Walloon squadrons patiently waiting on the plateau for the order to move.
Although Henry Lumley's English cavalry had managed to cross the marshy ground around the Petite Gheete, it was soon evident to Marlborough that sufficient cavalry support would not be practicable and that the battle could not be won on the Allied right. The Duke, therefore, called off the attack against Offus and Autre-Eglise. To make sure that Orkney obeyed his order to withdraw, Marlborough sent his Quartermaster-General in person with the command. Despite Orkney's protestations, Cadogan insisted on compliance and, reluctantly, Orkney gave the word for his troops to fall back to their original positions on the edge of the plateau of Jandrenouille. It is still not clear how far Orkney's advance was planned only as a feint; according to historian David Chandler it is probably more accurate to surmise that Marlborough launched Orkney in a serious probe with a view to sounding out the possibilities of the sector. Nevertheless, the attack had served its purpose. Villeroi had given his personal attention to that wing and strengthened it with large bodies of horse and foot that ought to have been taking part in the decisive struggle south of Ramillies.
Meanwhile, the Dutch assault on Ramillies was gaining pace. Marlborough's younger brother, General of Infantry, Charles Churchill, ordered four brigades of foot to attack the village. The assault consisted of 12 battalions of Dutch infantry commanded by Major Generals Schultz and Sparre; two brigades of Saxons under Count Schulenburg; a Scottish brigade in Dutch service led by the 2nd Duke of Argyle; and a small brigade of Protestant Swiss. The 20 French and Bavarian battalions in Ramillies – supported by a small brigade of Cologne and Bavarian Guards under the Marquis de Maffei, and by the Irish exiles of Clare's Dragoons, who had left Ireland in the Flight of the Wild Geese and here fought as infantry, capturing a colour from the British 3rd Regiment of Foot (a feat commemorated in the song 'Clare's Dragoons') – put up a determined defence, initially driving back the attackers with severe losses.
Seeing that Schultz and Sparre were faltering, Marlborough now ordered Orkney's second-line British and Danish battalions (who had not been used in the assault on Offus and Autre-Eglise) to move south towards Ramillies. Shielded as they were from observation by a slight fold in the land, their commander, Brigadier-General Van Pallandt, ordered the regimental colours to be left in place on the edge of the plateau to convince their opponents they were still in their initial position. Therefore, unbeknown to the French who remained oblivious to the Allies' real strength and intentions on the opposite side of the Petite Gheete, Marlborough was throwing his full weight against Ramillies and the open plain to the south. Villeroi meanwhile, was still moving more reserves of infantry in the opposite direction towards his left flank; crucially, it would be some time before the French commander noticed the subtle change in emphasis of the Allied dispositions.
Around 15:30 Overkirk advanced his massed squadrons on the open plain in support of the infantry attack on Ramillies. The 48 Dutch squadrons, supported on their left by 21 Danish squadrons – the whole led by Count Tilly and Lieutenant Generals Hompesch, d'Auvergne, Ostfriesland and Dopff – steadily advanced towards the enemy (taking care not to tire the horses prematurely), before breaking into a trot to gain the impetus for their charge. The Marquis de Feuquières, writing after the battle, described the scene: "They advanced in four lines ... As they approached they advanced their second and fourth lines into the intervals of their first and third lines; so that when they made their advance upon us, they formed only one front, without any intermediate spaces." This made it nearly impossible for the French cavalry to perform flanking manoeuvres.
The initial clash favoured the Dutch and Danish squadrons. The disparity of numbers – exacerbated by Villeroi stripping his ranks of infantry to reinforce his left flank – enabled Overkirk's cavalry to throw the first line of French horse back in some disorder towards their second-line squadrons. This line also came under severe pressure and, in turn, was forced back on the third line of cavalry and the few battalions still remaining on the plain. But these French horsemen were amongst the best in Louis XIV's army – the Maison du Roi, supported by four elite squadrons of Bavarian cuirassiers. Ably led by de Guiscard, the French cavalry rallied, thrusting back the Allied squadrons in successful local counterattacks. On Overkirk's right flank, close to Ramillies, ten of his squadrons suddenly broke ranks and scattered, riding headlong to the rear to recover their order and leaving the left flank of the Allied assault on Ramillies dangerously exposed. Notwithstanding his lack of infantry support, de Guiscard threw his cavalry forward in an attempt to split the Allied army in two.
A crisis threatened the centre, but from his vantage point Marlborough was at once aware of the situation. The Allied commander now summoned the cavalry on the right wing to reinforce his centre, leaving only the English squadrons in support of Orkney. Thanks to a combination of battle-smoke and favourable terrain, his redeployment went unnoticed by Villeroi, who made no attempt to transfer any of his own 50 unused squadrons. While he waited for the fresh reinforcements to arrive, Marlborough flung himself into the mêlée, rallying some of the Dutch cavalry who were in confusion. But his personal involvement nearly led to his undoing. A number of French horsemen, recognising the Duke, came surging towards his party. Marlborough's horse tumbled and the Duke was thrown – "Milord Marlborough was rid over," wrote Orkney some time later. It was a critical moment of the battle. "Major-General Murray," recalled one eyewitness, "... seeing him fall, marched up in all haste with two Swiss battalions to save him and stop the enemy who were hewing all down in their way." Fortunately, Marlborough's newly appointed aide-de-camp, Richard Molesworth, galloped to the rescue and mounted the Duke on his horse, and the pair made good their escape before Murray's disciplined ranks threw back the pursuing French troopers.
After a brief pause, Marlborough's equerry, Colonel Bringfield (or Bingfield), led up another of the Duke's spare horses; but while assisting him onto his mount, the unfortunate Bringfield was hit by an errant cannonball that sheared off his head. One account has it that the cannonball flew between the Captain-General's legs before striking the colonel, whose torso fell at Marlborough's feet – a moment subsequently depicted in a lurid set of contemporary playing cards. Nevertheless, the danger passed, and Overkirk and Tilly restored order among the confused squadrons and ordered them to attack again, enabling the Duke to attend to the positioning of the cavalry reinforcements feeding down from his right flank – a change of which Villeroi remained blissfully unaware.
The time was about 16:30, and the two armies were in close contact across the whole 6 km (4 mi) front: from the skirmishing in the marshes in the south, through the vast cavalry battle on the open plain, to the fierce struggle for Ramillies at the centre, and on to the north, where, around the cottages of Offus and Autre-Eglise, Orkney and de la Guiche faced each other across the Petite Gheete, ready to renew hostilities.
The arrival of the transferring squadrons now began to tip the balance in favour of the Allies. Tired and suffering a growing list of casualties, Guiscard's outnumbered squadrons battling on the plain at last began to give way. After the earlier failure to hold or retake Franquenée and Taviers, Guiscard's right flank had become dangerously exposed, and a fatal gap had opened on the right of the French line. Taking advantage of this breach, Württemberg's Danish cavalry now swept forward, wheeling to penetrate the flank of the Maison du Roi, whose attention was almost entirely fixed on holding back the Dutch. Sweeping forwards virtually without resistance, the 21 Danish squadrons reformed behind the French around the Tomb of Ottomond, facing north across the plateau of Mont St André towards the exposed flank of Villeroi's army.
The final Allied reinforcements for the cavalry contest to the south were at last in position; Marlborough's superiority on the left could no longer be denied, and his fast-moving plan took hold of the battlefield. Now, far too late, Villeroi tried to redeploy his 50 unused squadrons, but the desperate attempt to form a line facing south, stretching from Offus to Mont St André, floundered amongst the baggage and tents of the French camp carelessly left there after the initial deployment. The Allied commander ordered his cavalry forward against the now heavily outnumbered French and Bavarian horsemen. De Guiscard's right flank, without proper infantry support, could no longer resist the onslaught; turning their horses northwards, his squadrons broke and fled in complete disorder. Even the squadrons hastily scrambled together by Villeroi behind Ramillies could not stem the rout. "We had not got forty yards on our retreat," remembered Captain Peter Drake, an Irishman serving with the French, "when the words sauve qui peut went through the great part, if not the whole army, and put all to confusion".
In Ramillies the Allied infantry, now reinforced by the English troops brought down from the north, at last broke through. The Régiment de Picardie stood its ground but was caught between Colonel Borthwick's Scots-Dutch regiment and the English reinforcements. Borthwick was killed, as was Charles O’Brien, the Irish Viscount Clare in French service, fighting at the head of his regiment. The Marquis de Maffei attempted one last stand with his Bavarian and Cologne Guards, but in vain. He later recalled noticing a rush of horsemen fast approaching from the south: "... I went towards the nearest of these squadrons to instruct their officer, but instead of being listened to [I] was immediately surrounded and called upon to ask for quarter."
The roads leading north and west were choked with fugitives. Orkney now sent his English troops back across the Petite Gheete stream to storm Offus once again, where de la Guiche's infantry had begun to drift away in the confusion. To the right of the infantry, Lord John Hay's 'Scots Greys' also picked their way across the stream and charged the Régiment du Roi within Autre-Eglise. "Our dragoons," wrote John Deane, "pushing into the village ... made terrible slaughter of the enemy." The Bavarian Horse Grenadiers and the Electoral Guards withdrew and formed a shield about Villeroi and the Elector, but were scattered by Lumley's cavalry. Stuck in the mass of fugitives fleeing the battlefield, the French and Bavarian commanders narrowly escaped capture by General Cornelius Wood, who, unaware of their identity, had to content himself with the seizure of two Bavarian lieutenant-generals. Far to the south, the remnants of de la Colonie's brigade headed in the opposite direction, towards the French-held fortress of Namur.
The retreat became a rout. Individual Allied commanders drove their troops forward in pursuit, allowing their beaten enemy no chance to recover. Soon the Allied infantry could no longer keep up, but their cavalry were off the leash, heading through the gathering night for the crossings on the river Dyle. At last, however, Marlborough called a halt to the pursuit shortly after midnight near Meldert, 19 km (12 mi) from the field. "It was indeed a truly shocking sight to see the miserable remains of this mighty army," wrote Captain Drake, "... reduced to a handful."
What was left of Villeroi's army was now broken in spirit; the imbalance of the casualty figures amply demonstrates the extent of the disaster for Louis XIV's army (see the casualty estimates below). In addition, hundreds of French soldiers were fugitives, many of whom would never remuster to the colours. Villeroi also lost 52 artillery pieces and his entire engineer pontoon train. In the words of Marshal Villars, the French defeat at Ramillies was "the most shameful, humiliating and disastrous of routs".
Town after town now succumbed to the Allies. Leuven fell on 25 May 1706; three days later, the Allies entered Brussels, the capital of the Spanish Netherlands. Marlborough realised the great opportunity created by the early victory at Ramillies: "We now have the whole summer before us," wrote the Duke from Brussels to Robert Harley, "... and with the blessing of God I shall make the best use of it." Malines, Lierre, Ghent, Alost, Damme, Oudenaarde, Bruges and, on 6 June, Antwerp all subsequently fell to Marlborough's victorious army and, like Brussels, proclaimed the Austrian candidate for the Spanish throne, the Archduke Charles, as their sovereign. Villeroi was helpless to arrest the process of collapse. When Louis XIV learnt of the disaster, he recalled Marshal Vendôme from northern Italy to take command in Flanders; but it would be weeks before the command changed hands.
As news spread of the Allies' triumph, the Prussian, Hessian and Hanoverian contingents, long delayed by their respective rulers, eagerly joined the pursuit of the broken French and Bavarian forces. "This," wrote Marlborough wearily, "I take to be owing to our late success." Meanwhile, Overkirk took the port of Ostend on 4 July, thus opening a direct route to the English Channel for communication and supply; but the Allies made scant progress against Dendermonde, whose governor, the Marquis de Valée, stubbornly resisted. Only later, when Cadogan and Churchill went to take charge, did the town's defences begin to fail.
Vendôme formally took over command in Flanders on 4 August; Villeroi would never again receive a major command: "I cannot foresee a happy day in my life save only that of my death." Louis XIV was more forgiving to his old friend: "At our age, Marshal, we must no longer expect good fortune." In the meantime, Marlborough invested the elaborate fortress of Menin, which, after a costly siege, capitulated on 22 August. Dendermonde finally succumbed on 6 September, followed by Ath – the last conquest of 1706 – on 2 October. By the time Marlborough closed down the Ramillies campaign he had denied the French most of the Spanish Netherlands west of the Meuse and north of the Sambre. It was an unsurpassed operational triumph for the English Duke, but once again it was not decisive: these gains did not defeat France.
The immediate question for the Allies was how to deal with the Spanish Netherlands, a subject on which the Austrians and the Dutch were diametrically opposed. Emperor Joseph I, acting on behalf of his younger brother King Charles III, absent in Spain, claimed that reconquered Brabant and Flanders should be placed immediately under a governor named by himself. The Dutch, however, who had supplied the major share of the troops and money to secure the victory (the Austrians had produced nothing of either), claimed the government of the region until the war was over, and the right thereafter to garrison Barrier Fortresses stronger than those which had fallen so easily to Louis XIV's forces in 1701. Marlborough mediated between the two parties but favoured the Dutch position. To sway the Duke's opinion, the Emperor offered Marlborough the governorship of the Spanish Netherlands. It was a tempting offer but, in the name of Allied unity, one he refused. In the end, England and the Dutch Republic took control of the newly won territory for the duration of the war, after which it was to be handed over to the direct rule of Charles III, subject to the reservation of a Dutch Barrier, the extent and nature of which had yet to be settled.
Meanwhile, on the Upper Rhine, Villars had been forced onto the defensive as battalion after battalion was sent north to bolster the collapsing French forces in Flanders; there was now no possibility of his undertaking the re-capture of Landau. Further good news for the Allies arrived from northern Italy where, on 7 September, Prince Eugene had routed a French army before the Piedmontese capital, Turin, driving the Franco-Spanish forces from northern Italy. Only from Spain did Louis XIV receive any good news: Das Minas and Galway had been forced to retreat from Madrid towards Valencia, allowing Philip V to re-enter his capital on 4 October. All in all, though, the situation had changed considerably, and Louis XIV began to look for ways to end what was fast becoming a ruinous war for France. For Queen Anne also, the Ramillies campaign had one overriding significance: "Now we have God be thanked so hopeful a prospect of peace." Instead of continuing the momentum of victory, however, cracks in Allied unity would enable Louis XIV to reverse some of the major setbacks suffered at Turin and Ramillies.
The total number of French casualties cannot be calculated precisely, so complete was the collapse of the Franco-Bavarian army that day. David G. Chandler's Marlborough as Military Commander and A Guide to the Battlefields of Europe are consistent with regard to French casualty figures: 12,000 dead and wounded plus some 7,000 taken prisoner. James Falkner, in Ramillies 1706: Year of Miracles, also notes 12,000 dead and wounded and "up to 10,000" taken prisoner. In Notes on the history of military medicine, Garrison puts French casualties at 13,000, including 2,000 killed, 3,000 wounded and 6,000 missing. In The Collins Encyclopaedia of Military History, Dupuy puts Villeroi's dead and wounded at 8,000, with a further 7,000 captured. Neil Litten, using French archives, suggests 7,000 killed and wounded and 6,000 captured, with a further 2,000 choosing to desert. John Millner's memoir – Compendious Journal (1733) – is more specific, recording that 12,087 of Villeroi's army were killed or wounded, with another 9,729 taken prisoner. In Marlborough, however, Correlli Barnett puts the total casualty figure as high as 30,000: 15,000 dead and wounded, with an additional 15,000 taken captive. Trevelyan estimates Villeroi's casualties at 13,000, but adds "his losses by desertion may have doubled that number". La Colonie omits a casualty figure in his Chronicles of an Old Campaigner, but Saint-Simon in his Memoirs states 4,000 killed, adding "many others were wounded and many important persons were taken prisoner". Voltaire, however, in Histoire du siècle de Louis XIV, records that "the French lost there twenty thousand men". Gaston Bodart states 2,000 killed or wounded, 6,000 captured and 7,000 scattered, for a total of 13,000 casualties. Périni writes that both sides lost 2,000 to 3,000 killed or wounded (the Dutch losing precisely 716 killed and 1,712 wounded), and that 5,600 French were captured. | [
] | The Battle of Ramillies, fought on 23 May 1706, was a battle of the War of the Spanish Succession. For the Grand Alliance – Austria, England, and the Dutch Republic – the battle had followed an indecisive campaign against the Bourbon armies of King Louis XIV of France in 1705. Although the Allies had captured Barcelona that year, they had been forced to abandon their campaign on the Moselle, had stalled in the Spanish Netherlands and suffered defeat in northern Italy. Yet despite his opponents' setbacks Louis XIV wanted peace, but on reasonable terms. Because of this, as well as to maintain their momentum, the French and their allies took the offensive in 1706. The campaign began well for Louis XIV's generals: in Italy Marshal Vendôme defeated the Austrians at the Battle of Calcinato in April, while in Alsace Marshal Villars forced the Margrave of Baden back across the Rhine. Encouraged by these early gains Louis XIV urged Marshal Villeroi to go over to the offensive in the Spanish Netherlands and, with victory, gain a 'fair' peace. Accordingly, the French Marshal set off from Leuven (Louvain) at the head of 60,000 men and marched towards Tienen (Tirlemont), as if to threaten Zoutleeuw (Léau). Also determined to fight a major engagement, the Duke of Marlborough, commander-in-chief of Anglo-Dutch forces, assembled his army – some 62,000 men – near Maastricht, and marched past Zoutleeuw. With both sides seeking battle, they soon encountered each other on the dry ground between the rivers Mehaigne and Petite Gette, close to the small village of Ramillies. In less than four hours Marlborough's Dutch, English, and Danish forces overwhelmed Villeroi's and Max Emanuel's Franco-Spanish-Bavarian army. The Duke's subtle moves and changes in emphasis during the battle – something his opponents failed to realise until it was too late – caught the French in a tactical vice. With their foe broken and routed, the Allies were able to fully exploit their victory. Town after town fell, including Brussels, Bruges and Antwerp; by the end of the campaign Villeroi's army had been driven from most of the Spanish Netherlands. With Prince Eugene's subsequent success at the Battle of Turin in northern Italy, the Allies had imposed the greatest loss of territory and resources that Louis XIV would suffer during the war. Thus, the year 1706 proved, for the Allies, to be an annus mirabilis. | 2001-09-07T19:09:13Z | 2023-11-19T17:39:38Z | [
"Template:Featured article",
"Template:Infobox military conflict",
"Template:Efn",
"Template:Imagefact",
"Template:Aut",
"Template:ISBN",
"Template:Authority control",
"Template:Short description",
"Template:Sfn",
"Template:Notelist",
"Template:Use dmy dates",
"Template:Convert",
"Template:Reflist",
"Template:Refbegin",
"Template:IPAc-en",
"Template:Snd",
"Template:Nbs",
"Template:Cite book",
"Template:Refend"
] | https://en.wikipedia.org/wiki/Battle_of_Ramillies |
4,051 | Brian Kernighan | Brian Wilson Kernighan (/ˈkɜːrnɪhæn/; born January 30, 1942) is a Canadian computer scientist.
He worked at Bell Labs and contributed to the development of Unix alongside Unix creators Ken Thompson and Dennis Ritchie. Kernighan's name became widely known through co-authorship of the first book on the C programming language (The C Programming Language) with Dennis Ritchie. Kernighan affirmed that he had no part in the design of the C language ("it's entirely Dennis Ritchie's work"). He authored many Unix programs, including ditroff. Kernighan is coauthor of the AWK and AMPL programming languages. The "K" of K&R C and of AWK both stand for "Kernighan".
In collaboration with Shen Lin he devised well-known heuristics for two NP-complete optimization problems: graph partitioning and the travelling salesman problem. In a display of authorial equity, the former is usually called the Kernighan–Lin algorithm, while the latter is known as the Lin–Kernighan heuristic.
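For context, the core of the Kernighan–Lin bisection pass can be stated compactly (a standard textbook formulation, not a quotation from the original paper). For each vertex $v$, define the cost difference

$$D_v = E_v - I_v,$$

where $E_v$ is the total weight of edges from $v$ to the opposite part of the partition and $I_v$ the weight of edges within its own part. Swapping vertices $a$ and $b$ across the cut then reduces the cut weight by the gain

$$g(a,b) = D_a + D_b - 2\,c_{a,b},$$

where $c_{a,b}$ is the weight of the edge between them. Each pass greedily selects and locks the pair with the largest gain, updates the remaining $D$ values, and finally commits the prefix of swaps with the greatest cumulative gain.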
Kernighan has been a professor of computer science at Princeton University since 2000 and is the director of undergraduate studies in the department of computer science. In 2015, he co-authored the book The Go Programming Language.
Kernighan was born in Toronto. He attended the University of Toronto between 1960 and 1964, earning his bachelor's degree in engineering physics. He received his Ph.D. in electrical engineering from Princeton University in 1969, completing a doctoral dissertation titled "Some graph partitioning problems related to program segmentation" under the supervision of Peter G. Weiner.
Kernighan has held a professorship in the department of computer science at Princeton since 2000. Each fall he teaches a course called "Computers in Our World", which introduces the fundamentals of computing to non-majors.
Kernighan was the software editor for Prentice Hall International. His "Software Tools" series spread the essence of "C/Unix thinking" with makeovers for BASIC, FORTRAN, and Pascal; most notably, his "Ratfor" (rational FORTRAN) preprocessor was placed in the public domain.
He has said that if stranded on an island with only one programming language, it would have to be C.
Kernighan coined the term "Unix" and helped popularize Thompson's Unix philosophy. Kernighan is also known as a coiner of the expression "What You See Is All You Get" (WYSIAYG), which is a sarcastic variant of the original "What You See Is What You Get" (WYSIWYG). Kernighan's term is used to indicate that WYSIWYG systems might throw away information in a document that could be useful in other contexts.
In 1972, Kernighan described memory management in strings using "hello" and "world", in the B programming language, which became the iconic example we know today. Kernighan's original 1978 implementation of Hello, World! was sold at The Algorithm Auction, the world's first auction of computer algorithms.
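The B code in question is usually reproduced along the following lines (a reconstruction of the widely circulated snippet from Kernighan's 1972 tutorial; exact spacing and details may vary between copies):

    main( ) {
        extrn a, b, c;
        putchar(a); putchar(b); putchar(c); putchar('!*n');
    }

    a 'hell';
    b 'o, w';
    c 'orld';

Because a B character constant packs at most four ASCII characters into one machine word, the message has to be split across the external variables a, b and c; '*n' is B's escape sequence for a newline.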
In 1996, Kernighan taught CS50, the Harvard University introductory course in computer science. Kernighan influenced David J. Malan, who subsequently taught the course and scaled it up to run at multiple universities and in multiple digital formats.
Kernighan was elected a member of the National Academy of Engineering in 2002 for contributions to software and to programming languages. He was also elected a member of the American Academy of Arts and Sciences in 2019.
In 2022 Kernighan stated that he was actively working on improvements to the AWK programming language, which he took part in creating in 1977.
Kernighan uses a 13-inch MacBook Air as his primary device, along with an iMac in his office from time to time. He mostly uses Sam as his text editor. | [
{
"paragraph_id": 0,
"text": "Brian Wilson Kernighan (/ˈkɜːrnɪhæn/; born January 30, 1942) is a Canadian computer scientist.",
"title": ""
},
{
"paragraph_id": 1,
"text": "He worked at Bell Labs and contributed to the development of Unix alongside Unix creators Ken Thompson and Dennis Ritchie. Kernighan's name became widely known through co-authorship of the first book on the C programming language (The C Programming Language) with Dennis Ritchie. Kernighan affirmed that he had no part in the design of the C language (\"it's entirely Dennis Ritchie's work\"). He authored many Unix programs, including ditroff. Kernighan is coauthor of the AWK and AMPL programming languages. The \"K\" of K&R C and of AWK both stand for \"Kernighan\".",
"title": ""
},
{
"paragraph_id": 2,
"text": "In collaboration with Shen Lin he devised well-known heuristics for two NP-complete optimization problems: graph partitioning and the travelling salesman problem. In a display of authorial equity, the former is usually called the Kernighan–Lin algorithm, while the latter is known as the Lin–Kernighan heuristic.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Kernighan has been a professor of computer science at Princeton University since 2000 and is the director of undergraduate studies in the department of computer science. In 2015, he co-authored the book The Go Programming Language.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Kernighan was born in Toronto. He attended the University of Toronto between 1960 and 1964, earning his bachelor's degree in engineering physics. He received his Ph.D. in electrical engineering from Princeton University in 1969, completing a doctoral dissertation titled \"Some graph partitioning problems related to program segmentation\" under the supervision of Peter G. Weiner.",
"title": "Early life and education"
},
{
"paragraph_id": 5,
"text": "Kernighan has held a professorship in the department of computer science at Princeton since 2000. Each fall he teaches a course called \"Computers in Our World\", which introduces the fundamentals of computing to non-majors.",
"title": "Career and research"
},
{
"paragraph_id": 6,
"text": "Kernighan was the software editor for Prentice Hall International. His \"Software Tools\" series spread the essence of \"C/Unix thinking\" with makeovers for BASIC, FORTRAN, and Pascal, and most notably his \"Ratfor\" (rational FORTRAN) was put in the public domain.",
"title": "Career and research"
},
{
"paragraph_id": 7,
"text": "He has said that if stranded on an island with only one programming language it would have to be C.",
"title": "Career and research"
},
{
"paragraph_id": 8,
"text": "Kernighan coined the term \"Unix\" and helped popularize Thompson's Unix philosophy. Kernighan is also known as a coiner of the expression \"What You See Is All You Get\" (WYSIAYG), which is a sarcastic variant of the original \"What You See Is What You Get\" (WYSIWYG). Kernighan's term is used to indicate that WYSIWYG systems might throw away information in a document that could be useful in other contexts.",
"title": "Career and research"
},
{
"paragraph_id": 9,
"text": "In 1972, Kernighan described memory management in strings using \"hello\" and \"world\", in the B programming language, which became the iconic example we know today. Kernighan's original 1978 implementation of Hello, World! was sold at The Algorithm Auction, the world's first auction of computer algorithms.",
"title": "Career and research"
},
{
"paragraph_id": 10,
"text": "In 1996, Kernighan taught CS50 which is the Harvard University introductory course in computer science. Kernighan was an influence on David J. Malan who subsequently taught the course and scaled it up to run at multiple universities and in multiple digital formats.",
"title": "Career and research"
},
{
"paragraph_id": 11,
"text": "Kernighan was elected a member of the National Academy of Engineering in 2002 for contributions to software and to programming languages. He was also elected a member of the American Academy of Arts and Sciences in 2019.",
"title": "Career and research"
},
{
"paragraph_id": 12,
"text": "In 2022 Kernighan stated that he was actively working on improvements to the AWK programming language, which he took part in creating in 1977.",
"title": "Career and research"
},
{
"paragraph_id": 13,
"text": "Kernighan uses a 13-inch MacBook Air as his primary device. Along with this, from time to time, he uses an iMac in his office. He, most of the time, uses Sam as his text editor.",
"title": "Programming setup"
}
] | Brian Wilson Kernighan is a Canadian computer scientist. He worked at Bell Labs and contributed to the development of Unix alongside Unix creators Ken Thompson and Dennis Ritchie. Kernighan's name became widely known through co-authorship of the first book on the C programming language with Dennis Ritchie. Kernighan affirmed that he had no part in the design of the C language. He authored many Unix programs, including ditroff. Kernighan is coauthor of the AWK and AMPL programming languages. The "K" of K&R C and of AWK both stand for "Kernighan". In collaboration with Shen Lin he devised well-known heuristics for two NP-complete optimization problems: graph partitioning and the travelling salesman problem. In a display of authorial equity, the former is usually called the Kernighan–Lin algorithm, while the latter is known as the Lin–Kernighan heuristic. Kernighan has been a professor of computer science at Princeton University since 2000 and is the director of undergraduate studies in the department of computer science. In 2015, he co-authored the book The Go Programming Language. | 2001-09-30T03:33:51Z | 2023-12-20T11:07:58Z | [
"Template:Cite tech report",
"Template:Authority control",
"Template:Cite web",
"Template:Cbignore",
"Template:Citation",
"Template:ACMPortal",
"Template:Cite journal",
"Template:Wikiquote",
"Template:Reflist",
"Template:Infobox scientist",
"Template:Div col",
"Template:Div col end",
"Template:Cite magazine",
"Template:OL author",
"Template:Short description",
"Template:Cite book",
"Template:ISBN",
"Template:Commons category",
"Template:IPAc-en"
] | https://en.wikipedia.org/wiki/Brian_Kernighan |
4,052 | BCPL | BCPL ("Basic Combined Programming Language") is a procedural, imperative, and structured programming language. Originally intended for writing compilers for other languages, BCPL is no longer in common use. However, its influence is still felt because a stripped down and syntactically changed version of BCPL, called B, was the language on which the C programming language was based. BCPL introduced several features of many modern programming languages, including using curly braces to delimit code blocks. BCPL was first implemented by Martin Richards of the University of Cambridge in 1967.
BCPL was designed so that small and simple compilers could be written for it; reputedly some compilers could be run in 16 kilobytes. Furthermore, the original compiler, itself written in BCPL, was easily portable. BCPL was thus a popular choice for bootstrapping a system. A major reason for the compiler's portability lay in its structure. It was split into two parts: the front end parsed the source and generated O-code, an intermediate language. The back end took the O-code and translated it into the machine code for the target machine. Only 1⁄5 of the compiler's code needed to be rewritten to support a new machine, a task that usually took between 2 and 5 person-months. This approach became common practice later (e.g. Pascal, Java).
The language is unusual in having only one data type: a word, a fixed number of bits, usually chosen to align with the architecture's machine word and of adequate capacity to represent any valid storage address. For many machines of the time, this data type was a 16-bit word. This choice later proved to be a significant problem when BCPL was used on machines in which the smallest addressable item was not a word but a byte or on machines with larger word sizes such as 32-bit or 64-bit.
The interpretation of any value was determined by the operators used to process the values. (For example, + added two values together, treating them as integers; ! indirected through a value, effectively treating it as a pointer.) In order for this to work, the implementation provided no type checking.
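A minimal sketch of this typelessness, written against Martin Richards' modern BCPL distribution (hypothetical code, not from the original manuals), shows the same kind of word being treated as a pointer or as an integer purely by the choice of operator:

    GET "libhdr"

    LET start() = VALOF
    $( LET v = VEC 2              // a small vector of words
       LET w = v                  // w holds the same word value (an address)
       v!0 := 41                  // ! treats the word in v as a pointer
       w!0 := w!0 + 1             // + treats the fetched word as an integer
       writef("v!0 = %n*n", v!0)  // prints: v!0 = 42
       RESULTIS 0
    $)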
The mismatch between BCPL's word orientation and byte-oriented hardware was addressed in several ways. One was by providing standard library routines for packing and unpacking words into byte strings. Later, two language features were added: the bit-field selection operator and the infix byte indirection operator (denoted by %).
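As an illustration of the byte operator (again a hedged sketch for the modern distribution): by convention s%0 of a packed BCPL string holds its length, and s%i selects its i-th byte:

    GET "libhdr"

    LET start() = VALOF
    $( LET s = "abc"           // a string packed into words
       FOR i = 1 TO s%0 DO     // s%0 is the length byte
          writef("%c ", s%i)   // s%i extracts one byte from the packed words
       newline()
       RESULTIS 0
    $)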
BCPL handles bindings spanning separate compilation units in a unique way. There are no user-declarable global variables; instead, there is a global vector, similar to "blank common" in Fortran. All data shared between different compilation units comprises scalars and pointers to vectors stored in a pre-arranged place in the global vector. Thus, the header files (files included during compilation using the "GET" directive) become the primary means of synchronizing global data between compilation units, containing "GLOBAL" directives that present lists of symbolic names, each paired with a number that associates the name with the corresponding numerically addressed word in the global vector. As well as variables, the global vector contains bindings for external procedures. This makes dynamic loading of compilation units very simple to achieve. Instead of relying on the link loader of the underlying implementation, effectively, BCPL gives the programmer control of the linking process.
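A sketch of such a header (the names and slot numbers here are purely illustrative): a file pulled in with GET might contain

    GLOBAL $(
       count: 200    // word 200 of the global vector: a shared scalar
       setup: 201    // word 201: the entry point of an external procedure
    $)

so that every compilation unit including this header reads and writes the same two global-vector words under the names count and setup.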
The global vector also made it very simple to replace or augment standard library routines. A program could save the pointer from the global vector to the original routine and replace it with a pointer to an alternative version. The alternative might call the original as part of its processing. This could be used as a quick ad hoc debugging aid.
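A hedged sketch of this interception trick, assuming the standard low-level output routine wrch and the first-free-global constant ug from the modern distribution's libhdr (the name countedwrch and its logging comment are invented for illustration):

    GET "libhdr"

    GLOBAL $( oldwrch: ug $)    // a spare global slot to save the original

    LET countedwrch(ch) BE
    $( // ...inspect, count or log ch here before passing it on...
       oldwrch(ch)              // call the saved original routine
    $)

    LET start() = VALOF
    $( oldwrch := wrch          // save the library routine's entry
       wrch := countedwrch      // splice the replacement into the global vector
       writes("hello*n")        // character output now flows through countedwrch
       wrch := oldwrch          // restore the original
       RESULTIS 0
    $)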
BCPL was the first curly-bracket programming language, and its braces survived B's syntactical changes to become a common means of denoting program source code statements. In practice, on limited keyboards of the day, source programs often used the sequences $( and $) in place of the symbols { and }. The single-line // comments of BCPL, which were not adopted by C, reappeared in C++ and later in C99.
The book BCPL: The language and its compiler describes the philosophy of BCPL as follows:
The philosophy of BCPL is not one of the tyrant who thinks he knows best and lays down the law on what is and what is not allowed; rather, BCPL acts more as a servant offering his services to the best of his ability without complaint, even when confronted with apparent nonsense. The programmer is always assumed to know what he is doing and is not hemmed in by petty restrictions.
BCPL was first implemented by Martin Richards of the University of Cambridge in 1967. BCPL was a response to difficulties with its predecessor, Cambridge Programming Language, later renamed Combined Programming Language (CPL), which was designed during the early 1960s. Richards created BCPL by "removing those features of the full language which make compilation difficult". The first compiler implementation, for the IBM 7094 under Compatible Time-Sharing System, was written while Richards was visiting Project MAC at the Massachusetts Institute of Technology in the spring of 1967. The language was first described in a paper presented to the 1969 Spring Joint Computer Conference.
BCPL has been rumored to have originally stood for "Bootstrap Cambridge Programming Language", but CPL was never created since development stopped at BCPL, and the acronym was later reinterpreted for the BCPL book.
BCPL is the language in which the original "Hello, World!" program was written. The first MUD was also written in BCPL (MUD1).
Several operating systems were written partially or wholly in BCPL (for example, TRIPOS and the earliest versions of AmigaDOS). BCPL was also the initial language used in the Xerox PARC Alto project, the first modern personal computer; among other projects, the Bravo document preparation system was written in BCPL.
An early compiler, bootstrapped in 1969 by starting with a paper tape of the O-code of Richards's Atlas 2 compiler, targeted the ICT 1900 series. The two machines had different word-lengths (48 vs 24 bits), different character encodings, and different packed string representations—and the successful bootstrapping increased confidence in the practicality of the method.
By late 1970, implementations existed for the Honeywell 635 and Honeywell 645, IBM 360, PDP-10, TX-2, CDC 6400, UNIVAC 1108, PDP-9, KDF 9 and Atlas 2. In 1974 a dialect of BCPL was implemented at BBN without using the intermediate O-code. The initial implementation was a cross-compiler hosted on BBN's TENEX PDP-10s, and directly targeted the PDP-11s used in BBN's implementation of the second generation IMPs used in the ARPANET.
There was also a version produced for the BBC Micro in the mid-1980s, by Richards Computer Products, a company started by John Richards, the brother of Martin Richards. The BBC Domesday Project made use of the language. Versions of BCPL for the Amstrad CPC and Amstrad PCW computers were also released in 1986 by UK software house Arnor Ltd. MacBCPL was released for the Apple Macintosh in 1985 by Topexpress Ltd, of Kensington, England.
Both the design and philosophy of BCPL strongly influenced B, which in turn influenced C. Programmers at the time debated whether an eventual successor to C would be called "D", the next letter in the alphabet, or "P", the next letter in the parent language name. The language most accepted as being C's successor is C++ (with ++ being C's increment operator), although meanwhile, a D programming language also exists.
In 1979, implementations of BCPL existed for at least 25 architectures; the language gradually fell out of favour as C became popular on non-Unix systems.
Martin Richards maintains a modern version of BCPL on his website, last updated in 2018. This can be set up to run on various systems including Linux, FreeBSD, and Mac OS X. The latest distribution includes graphics and sound libraries, and there is a comprehensive manual. He continues to program in it, including for his research on musical automated score following.
A common informal MIME type for BCPL is text/x-bcpl.
Richards and Whitby-Strevens provide an example of the "Hello, World!" program for BCPL using a standard system header, 'LIBHDR':
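A version consistent with the header and routine names discussed in the following note (upper-case, as in the book) is:

    GET "LIBHDR"

    LET START() = VALOF
    $( WRITEF("Hello, World!*N")
       RESULTIS 0
    $)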
If these programs are run using Richards' current version of Cintsys (December 2018), LIBHDR, START and WRITEF must be changed to lower case to avoid errors.
Print factorials:
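A reconstruction of the factorial example in the same upper-case style (formatting details are not guaranteed to match the original):

    GET "LIBHDR"

    LET START() = VALOF
    $( FOR I = 1 TO 5 DO
          WRITEF("%N! = %I4*N", I, FACT(I))
       RESULTIS 0
    $)

    AND FACT(N) = N = 0 -> 1, N * FACT(N - 1)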
Count solutions to the N queens problem: | [
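A bit-pattern version in the style Martin Richards distributes with modern BCPL (reconstructed from memory; details such as the loop bound and message wording may differ):

    GET "libhdr"

    GLOBAL $( count: ug; all $)

    LET try(ld, row, rd) BE TEST row = all
    THEN count := count + 1                  // a completed board: one more solution
    ELSE
    $( LET poss = all & ~(ld | row | rd)     // bit set of free squares on this row
       UNTIL poss = 0 DO
       $( LET p = poss & -poss               // least significant free square
          poss := poss - p
          try((ld + p) << 1, row + p, (rd + p) >> 1)
       $)
    $)

    LET start() = VALOF
    $( all := 1                              // a board of width one
       FOR i = 1 TO 12 DO
       $( count := 0
          try(0, 0, 0)
          writef("Number of solutions to %i2-queens is %i7*n", i, count)
          all := 2*all + 1                   // widen the board by one column
       $)
       RESULTIS 0
    $)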
{
"paragraph_id": 0,
"text": "BCPL (\"Basic Combined Programming Language\") is a procedural, imperative, and structured programming language. Originally intended for writing compilers for other languages, BCPL is no longer in common use. However, its influence is still felt because a stripped down and syntactically changed version of BCPL, called B, was the language on which the C programming language was based. BCPL introduced several features of many modern programming languages, including using curly braces to delimit code blocks. BCPL was first implemented by Martin Richards of the University of Cambridge in 1967.",
"title": ""
},
{
"paragraph_id": 1,
"text": "BCPL was designed so that small and simple compilers could be written for it; reputedly some compilers could be run in 16 kilobytes. Furthermore, the original compiler, itself written in BCPL, was easily portable. BCPL was thus a popular choice for bootstrapping a system. A major reason for the compiler's portability lay in its structure. It was split into two parts: the front end parsed the source and generated O-code, an intermediate language. The back end took the O-code and translated it into the machine code for the target machine. Only 1⁄5 of the compiler's code needed to be rewritten to support a new machine, a task that usually took between 2 and 5 person-months. This approach became common practice later (e.g. Pascal, Java).",
"title": "Design"
},
{
"paragraph_id": 2,
"text": "The language is unusual in having only one data type: a word, a fixed number of bits, usually chosen to align with the architecture's machine word and of adequate capacity to represent any valid storage address. For many machines of the time, this data type was a 16-bit word. This choice later proved to be a significant problem when BCPL was used on machines in which the smallest addressable item was not a word but a byte or on machines with larger word sizes such as 32-bit or 64-bit.",
"title": "Design"
},
{
"paragraph_id": 3,
"text": "The interpretation of any value was determined by the operators used to process the values. (For example, + added two values together, treating them as integers; ! indirected through a value, effectively treating it as a pointer.) In order for this to work, the implementation provided no type checking.",
"title": "Design"
},
{
"paragraph_id": 4,
"text": "The mismatch between BCPL's word orientation and byte-oriented hardware was addressed in several ways. One was by providing standard library routines for packing and unpacking words into byte strings. Later, two language features were added: the bit-field selection operator and the infix byte indirection operator (denoted by %).",
"title": "Design"
},
{
"paragraph_id": 5,
"text": "BCPL handles bindings spanning separate compilation units in a unique way. There are no user-declarable global variables; instead, there is a global vector, similar to \"blank common\" in Fortran. All data shared between different compilation units comprises scalars and pointers to vectors stored in a pre-arranged place in the global vector. Thus, the header files (files included during compilation using the \"GET\" directive) become the primary means of synchronizing global data between compilation units, containing \"GLOBAL\" directives that present lists of symbolic names, each paired with a number that associates the name with the corresponding numerically addressed word in the global vector. As well as variables, the global vector contains bindings for external procedures. This makes dynamic loading of compilation units very simple to achieve. Instead of relying on the link loader of the underlying implementation, effectively, BCPL gives the programmer control of the linking process.",
"title": "Design"
},
{
"paragraph_id": 6,
"text": "The global vector also made it very simple to replace or augment standard library routines. A program could save the pointer from the global vector to the original routine and replace it with a pointer to an alternative version. The alternative might call the original as part of its processing. This could be used as a quick ad hoc debugging aid.",
"title": "Design"
},
{
"paragraph_id": 7,
"text": "BCPL was the first brace programming language and the braces survived the syntactical changes and have become a common means of denoting program source code statements. In practice, on limited keyboards of the day, source programs often used the sequences $( and $) in place of the symbols { and }. The single-line // comments of BCPL, which were not adopted by C, reappeared in C++ and later in C99.",
"title": "Design"
},
{
"paragraph_id": 8,
"text": "The book BCPL: The language and its compiler describes the philosophy of BCPL as follows:",
"title": "Design"
},
{
"paragraph_id": 9,
"text": "The philosophy of BCPL is not one of the tyrant who thinks he knows best and lays down the law on what is and what is not allowed; rather, BCPL acts more as a servant offering his services to the best of his ability without complaint, even when confronted with apparent nonsense. The programmer is always assumed to know what he is doing and is not hemmed in by petty restrictions.",
"title": "Design"
},
{
"paragraph_id": 10,
"text": "BCPL was first implemented by Martin Richards of the University of Cambridge in 1967. BCPL was a response to difficulties with its predecessor, Cambridge Programming Language, later renamed Combined Programming Language (CPL), which was designed during the early 1960s. Richards created BCPL by \"removing those features of the full language which make compilation difficult\". The first compiler implementation, for the IBM 7094 under Compatible Time-Sharing System, was written while Richards was visiting Project MAC at the Massachusetts Institute of Technology in the spring of 1967. The language was first described in a paper presented to the 1969 Spring Joint Computer Conference.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "BCPL has been rumored to have originally stood for \"Bootstrap Cambridge Programming Language\", but CPL was never created since development stopped at BCPL, and the acronym was later reinterpreted for the BCPL book.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "BCPL is the language in which the original \"Hello, World!\" program was written. The first MUD was also written in BCPL (MUD1).",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Several operating systems were written partially or wholly in BCPL (for example, TRIPOS and the earliest versions of AmigaDOS). BCPL was also the initial language used in the Xerox PARC Alto project, the first modern personal computer; among other projects, the Bravo document preparation system was written in BCPL.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "An early compiler, bootstrapped in 1969, by starting with a paper tape of the O-code of Richards's Atlas 2 compiler, targeted the ICT 1900 series. The two machines had different word-lengths (48 vs 24 bits), different character encodings, and different packed string representations—and the successful bootstrapping increased confidence in the practicality of the method.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "By late 1970, implementations existed for the Honeywell 635 and Honeywell 645, IBM 360, PDP-10, TX-2, CDC 6400, UNIVAC 1108, PDP-9, KDF 9 and Atlas 2. In 1974 a dialect of BCPL was implemented at BBN without using the intermediate O-code. The initial implementation was a cross-compiler hosted on BBN's TENEX PDP-10s, and directly targeted the PDP-11s used in BBN's implementation of the second generation IMPs used in the ARPANET.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "There was also a version produced for the BBC Micro in the mid-1980s, by Richards Computer Products, a company started by John Richards, the brother of Martin Richards. The BBC Domesday Project made use of the language. Versions of BCPL for the Amstrad CPC and Amstrad PCW computers were also released in 1986 by UK software house Arnor Ltd. MacBCPL was released for the Apple Macintosh in 1985 by Topexpress Ltd, of Kensington, England.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Both the design and philosophy of BCPL strongly influenced B, which in turn influenced C. Programmers at the time debated whether an eventual successor to C would be called \"D\", the next letter in the alphabet, or \"P\", the next letter in the parent language name. The language most accepted as being C's successor is C++ (with ++ being C's increment operator), although meanwhile, a D programming language also exists.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 1979, implementations of BCPL existed for at least 25 architectures; the language gradually fell out of favour as C became popular on non-Unix systems.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Martin Richards maintains a modern version of BCPL on his website, last updated in 2018. This can be set up to run on various systems including Linux, FreeBSD, and Mac OS X. The latest distribution includes graphics and sound libraries, and there is a comprehensive manual. He continues to program in it, including for his research on musical automated score following.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "A common informal MIME type for BCPL is text/x-bcpl.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Richards and Whitby-Strevens provide an example of the \"Hello, World!\" program for BCPL using a standard system header, 'LIBHDR':",
"title": "Examples"
},
{
"paragraph_id": 22,
"text": "If these programs are run using Richards' current version of Cintsys (December 2018), LIBHDR, START and WRITEF must be changed to lower case to avoid errors.",
"title": "Examples"
},
{
"paragraph_id": 23,
"text": "Print factorials:",
"title": "Examples"
},
{
"paragraph_id": 24,
"text": "Count solutions to the N queens problem:",
"title": "Examples"
}
] | BCPL is a procedural, imperative, and structured programming language. Originally intended for writing compilers for other languages, BCPL is no longer in common use. However, its influence is still felt because a stripped down and syntactically changed version of BCPL, called B, was the language on which the C programming language was based. BCPL introduced several features of many modern programming languages, including using curly braces to delimit code blocks. BCPL was first implemented by Martin Richards of the University of Cambridge in 1967. | 2001-08-18T02:27:31Z | 2023-09-06T13:18:35Z | [
"Template:More citations needed section",
"Template:Original research",
"Template:Authority control",
"Template:More citations needed",
"Template:About",
"Template:Use dmy dates",
"Template:Infobox programming language",
"Template:Citation needed",
"Template:Cite book",
"Template:Short description",
"Template:Clarify",
"Template:Mono",
"Template:Reflist",
"Template:Blockquote",
"Template:Cite web",
"Template:ISBN",
"Template:Frac"
] | https://en.wikipedia.org/wiki/BCPL |
4,054 | Battleship | A battleship is a large armored warship with a main battery consisting of large caliber guns. It dominated naval warfare in the late 19th and early 20th centuries.
The term battleship came into use in the late 1880s to describe a type of ironclad warship, now referred to by historians as pre-dreadnought battleships. In 1906, the commissioning of HMS Dreadnought into the United Kingdom's Royal Navy heralded a revolution in the field of battleship design. Subsequent battleship designs, influenced by HMS Dreadnought, were referred to as "dreadnoughts", though the term eventually became obsolete as dreadnoughts became the only type of battleship in common use.
Battleships were a symbol of naval dominance and national might, and for decades the battleship was a major factor in both diplomacy and military strategy. A global arms race in battleship construction began in Europe in the 1890s and culminated at the decisive Battle of Tsushima in 1905, the outcome of which significantly influenced the design of HMS Dreadnought. The launch of Dreadnought in 1906 commenced a new naval arms race. Three major fleet actions between steel battleships took place: the long-range gunnery duel at the Battle of the Yellow Sea in 1904, the decisive Battle of Tsushima in 1905 (both during the Russo-Japanese War) and the inconclusive Battle of Jutland in 1916, during the First World War. Jutland was the largest naval battle and the only full-scale clash of dreadnoughts of the war, and it was the last major battle in naval history fought primarily by battleships.
The Naval Treaties of the 1920s and 1930s limited the number of battleships, though technical innovation in battleship design continued. Both the Allied and Axis powers built battleships during World War II, though the increasing importance of the aircraft carrier meant that the battleship played a less important role than had been expected in that conflict.
The value of battleships has been questioned, even during their heyday. There were few of the decisive fleet battles that battleship proponents expected and used to justify the vast resources spent on building battlefleets. Despite their huge firepower and protection, battleships were increasingly vulnerable to much smaller and relatively inexpensive weapons: initially the torpedo and the naval mine, and later aircraft and the guided missile. The growing range of naval engagements led to the aircraft carrier replacing the battleship as the leading capital ship during World War II, with the last battleship to be launched being HMS Vanguard in 1944. Four battleships were retained by the United States Navy until the end of the Cold War for fire support purposes and were last used in combat during the Gulf War in 1991, before being struck from the U.S. Naval Vessel Register in the 2000s. Many World War II-era battleships remain today as museum ships.
A ship of the line was a large, unarmored wooden sailing ship which mounted a battery of up to 120 smoothbore guns and carronades. It came to prominence with the adoption of line-of-battle tactics in the early 17th century and remained the dominant warship type until the end of the sailing battleship's heyday in the 1830s. From 1794, the alternative term 'line of battle ship' was contracted (informally at first) to 'battle ship' or 'battleship'.
The sheer number of guns fired broadside meant a ship of the line could wreck any wooden enemy, holing her hull, knocking down masts, wrecking her rigging, and killing her crew. However, the effective range of the guns was as little as a few hundred yards, so the battle tactics of sailing ships depended in part on the wind.
Over time, ships of the line gradually became larger and carried more guns, but otherwise remained quite similar. The first major change to the ship of the line concept was the introduction of steam power as an auxiliary propulsion system. Steam power was gradually introduced to the navy in the first half of the 19th century, initially for small craft and later for frigates. The French Navy introduced steam to the line of battle with the 90-gun Napoléon in 1850—the first true steam battleship. Napoléon was armed as a conventional ship-of-the-line, but her steam engines could give her a speed of 12 knots (22 km/h), regardless of the wind. This was a potentially decisive advantage in a naval engagement. The introduction of steam accelerated the growth in size of battleships. France and the United Kingdom were the only countries to develop fleets of wooden steam screw battleships although several other navies operated small numbers of screw battleships, including Russia (9), the Ottoman Empire (3), Sweden (2), Naples (1), Denmark (1) and Austria (1).
The adoption of steam power was only one of a number of technological advances which revolutionized warship design in the 19th century. The ship of the line was overtaken by the ironclad: powered by steam, protected by metal armor, and armed with guns firing high-explosive shells.
Guns that fired explosive or incendiary shells were a major threat to wooden ships, and these weapons quickly became widespread after the introduction of 8-inch shell guns as part of the standard armament of French and American line-of-battle ships in 1841. In the Crimean War, six line-of-battle ships and two frigates of the Russian Black Sea Fleet destroyed seven Turkish frigates and three corvettes with explosive shells at the Battle of Sinop in 1853. Later in the war, French ironclad floating batteries used similar weapons against the defenses at the Battle of Kinburn.
Nevertheless, wooden-hulled ships stood up comparatively well to shells, as shown in the 1866 Battle of Lissa, where the modern Austrian steam two-decker SMS Kaiser ranged across a confused battlefield, rammed an Italian ironclad and took 80 hits from Italian ironclads, many of which were shells, but including at least one 300-pound shot at point-blank range. Despite losing her bowsprit and her foremast, and being set on fire, she was ready for action again the very next day.
The development of high-explosive shells made the use of iron armor plate on warships necessary. In 1859 France launched Gloire, the first ocean-going ironclad warship. She had the profile of a ship of the line, cut to one deck due to weight considerations. Although made of wood and reliant on sail for most journeys, Gloire was fitted with a propeller, and her wooden hull was protected by a layer of thick iron armor. Gloire prompted further innovation from the Royal Navy, anxious to prevent France from gaining a technological lead.
The superior armored frigate Warrior followed Gloire by only 14 months, and both nations embarked on a program of building new ironclads and converting existing screw ships of the line to armored frigates. Within two years, Italy, Austria, Spain and Russia had all ordered ironclad warships, and by the time of the famous clash of the USS Monitor and the CSS Virginia at the Battle of Hampton Roads at least eight navies possessed ironclad ships.
Navies experimented with the positioning of guns, in turrets (like the USS Monitor), central-batteries or barbettes, or with the ram as the principal weapon. As steam technology developed, masts were gradually removed from battleship designs. By the mid-1870s steel was used as a construction material alongside iron and wood. The French Navy's Redoutable, laid down in 1873 and launched in 1876, was a central battery and barbette warship which became the first battleship in the world to use steel as the principal building material.
The term "battleship" was officially adopted by the Royal Navy in the re-classification of 1892. By the 1890s, there was an increasing similarity between battleship designs, and the type that later became known as the 'pre-dreadnought battleship' emerged. These were heavily armored ships, mounting a mixed battery of guns in turrets, and without sails. The typical first-class battleship of the pre-dreadnought era displaced 15,000 to 17,000 tons, had a speed of 16 knots (30 km/h), and an armament of four 12-inch (305 mm) guns in two turrets fore and aft with a mixed-caliber secondary battery amidships around the superstructure. An early design with superficial similarity to the pre-dreadnought is the British Devastation class of 1871.
The slow-firing 12-inch (305 mm) main guns were the principal weapons for battleship-to-battleship combat. The intermediate and secondary batteries had two roles. Against major ships, it was thought a 'hail of fire' from quick-firing secondary weapons could distract enemy gun crews by inflicting damage to the superstructure, and they would be more effective against smaller ships such as cruisers. Smaller guns (12-pounders and smaller) were reserved for protecting the battleship against the threat of torpedo attack from destroyers and torpedo boats.
The beginning of the pre-dreadnought era coincided with Britain reasserting her naval dominance. For many years previously, Britain had taken naval supremacy for granted. Expensive naval projects were criticized by political leaders of all inclinations. However, in 1888 a war scare with France and the build-up of the Russian navy gave added impetus to naval construction, and the British Naval Defence Act of 1889 laid down a new fleet including eight new battleships. The principle that Britain's navy should be more powerful than the two next most powerful fleets combined was established. This policy was designed to deter France and Russia from building more battleships, but both nations nevertheless expanded their fleets with more and better pre-dreadnoughts in the 1890s.
In the last years of the 19th century and the first years of the 20th, the escalation in the building of battleships became an arms race between Britain and Germany. The German naval laws of 1890 and 1898 authorized a fleet of 38 battleships, a vital threat to the balance of naval power. Britain answered with further shipbuilding, but by the end of the pre-dreadnought era, British supremacy at sea had markedly weakened. In 1883, the United Kingdom had 38 battleships, twice as many as France and almost as many as the rest of the world put together. In 1897, Britain's lead was far smaller due to competition from France, Germany, and Russia, as well as the development of pre-dreadnought fleets in Italy, the United States and Japan. The Ottoman Empire, Spain, Sweden, Denmark, Norway, the Netherlands, Chile and Brazil all had second-rate fleets led by armored cruisers, coastal defence ships or monitors.
Pre-dreadnoughts continued the technical innovations of the ironclad. Turrets, armor plate, and steam engines were all improved over the years, and torpedo tubes were also introduced. A small number of designs, including the American Kearsarge and Virginia classes, experimented with all or part of the 8-inch intermediate battery superimposed over the 12-inch primary. Results were poor: recoil factors and blast effects resulted in the 8-inch battery being completely unusable, and the inability to train the primary and intermediate armaments on different targets led to significant tactical limitations. Even though such innovative designs saved weight (a key reason for their inception), they proved too cumbersome in practice.
In 1906, the British Royal Navy launched the revolutionary HMS Dreadnought. Created as a result of pressure from Admiral Sir John ("Jackie") Fisher, HMS Dreadnought rendered existing battleships obsolete. Combining an "all-big-gun" armament of ten 12-inch (305 mm) guns with unprecedented speed (from steam turbine engines) and protection, she prompted navies worldwide to re-evaluate their battleship building programs. While the Japanese had laid down an all-big-gun battleship, Satsuma, in 1904 and the concept of an all-big-gun ship had been in circulation for several years, it had yet to be validated in combat. Dreadnought sparked a new arms race, principally between Britain and Germany but reflected worldwide, as the new class of warships became a crucial element of national power.
Technical development continued rapidly through the dreadnought era, with steep changes in armament, armor and propulsion. Ten years after Dreadnought's commissioning, much more powerful ships, the super-dreadnoughts, were being built.
In the first years of the 20th century, several navies worldwide experimented with the idea of a new type of battleship with a uniform armament of very heavy guns.
Admiral Vittorio Cuniberti, the Italian Navy's chief naval architect, articulated the concept of an all-big-gun battleship in 1903. When the Regia Marina did not pursue his ideas, Cuniberti wrote an article in Jane's proposing an "ideal" future British battleship, a large armored warship of 17,000 tons, armed solely with a single calibre main battery (twelve 12-inch [305 mm] guns), carrying 300-millimetre (12 in) belt armor, and capable of 24 knots (44 km/h).
The Russo-Japanese War provided operational experience to validate the "all-big-gun" concept. During the Battle of the Yellow Sea on August 10, 1904, Admiral Togo of the Imperial Japanese Navy commenced deliberate 12-inch gun fire at the Russian flagship Tzesarevich at 14,200 yards (13,000 meters). At the Battle of Tsushima on May 27, 1905, Russian Admiral Rozhestvensky's flagship fired the first 12-inch guns at the Japanese flagship Mikasa at 7,000 meters. It is often held that these engagements demonstrated the importance of the 12-inch (305 mm) gun over its smaller counterparts, though some historians take the view that secondary batteries were just as important as the larger weapons when dealing with smaller, fast-moving torpedo craft. Such was the case, albeit unsuccessfully for the defenders, when the Russian battleship Knyaz Suvorov at Tsushima was sent to the bottom by destroyer-launched torpedoes. The Japanese 1903–04 design also retained traditional triple-expansion steam engines.
As early as 1904, Jackie Fisher had been convinced of the need for fast, powerful ships with an all-big-gun armament. If Tsushima influenced his thinking, it was to persuade him of the need to standardise on 12-inch (305 mm) guns. Fisher's concerns were submarines and destroyers equipped with torpedoes, then threatening to outrange battleship guns, making speed imperative for capital ships. Fisher's preferred option was his brainchild, the battlecruiser: lightly armored but heavily armed with eight 12-inch guns and propelled to 25 knots (46 km/h) by steam turbines.
It was to prove this revolutionary technology that Dreadnought was designed in January 1905, laid down in October 1905 and sped to completion by 1906. She carried ten 12-inch guns, had an 11-inch armor belt, and was the first large ship powered by turbines. She mounted her guns in five turrets; three on the centerline (one forward, two aft) and two on the wings, giving her at her launch twice the broadside of any other warship. She retained a number of 12-pound (3-inch, 76 mm) quick-firing guns for use against destroyers and torpedo-boats. Her armor was heavy enough for her to go head-to-head with any other ship in a gun battle, and conceivably win.
Dreadnought was to have been followed by three Invincible-class battlecruisers, their construction delayed to allow lessons from Dreadnought to be used in their design. While Fisher may have intended Dreadnought to be the last Royal Navy battleship, the design was so successful he found little support for his plan to switch to a battlecruiser navy. Although there were some problems with the ship (the wing turrets had limited arcs of fire and strained the hull when firing a full broadside, and the top of the thickest armor belt lay below the waterline at full load), the Royal Navy promptly commissioned another six ships to a similar design in the Bellerophon and St. Vincent classes.
An American design, South Carolina, authorized in 1905 and laid down in December 1906, was another of the first dreadnoughts, but she and her sister, Michigan, were not launched until 1908. Both used triple-expansion engines and had a superior layout of the main battery, dispensing with Dreadnought's wing turrets. They thus retained the same broadside, despite having two fewer guns.
In 1897, before the revolution in design brought about by HMS Dreadnought, the Royal Navy had 62 battleships in commission or building, a lead of 26 over France and 50 over Germany. From the 1906 launching of Dreadnought, an arms race with major strategic consequences was prompted. Major naval powers raced to build their own dreadnoughts. Possession of modern battleships was not only seen as vital to naval power, but also, as with nuclear weapons after World War II, represented a nation's standing in the world. Germany, France, Japan, Italy, Austria, and the United States all began dreadnought programmes; while the Ottoman Empire, Argentina, Russia, Brazil, and Chile commissioned dreadnoughts to be built in British and American yards.
By virtue of geography, the Royal Navy was able to use her imposing battleship and battlecruiser fleet to impose a strict and successful naval blockade of Germany and kept Germany's smaller battleship fleet bottled up in the North Sea: only narrow channels led to the Atlantic Ocean and these were guarded by British forces. Both sides were aware that, because of the greater number of British dreadnoughts, a full fleet engagement would be likely to result in a British victory. The German strategy was therefore to try to provoke an engagement on their terms: either to induce a part of the Grand Fleet to enter battle alone, or to fight a pitched battle near the German coastline, where friendly minefields, torpedo-boats and submarines could be used to even the odds. This did not happen however, due in large part to the necessity to keep submarines for the Atlantic campaign. Submarines were the only vessels in the Imperial German Navy able to break out and raid British commerce in force, but even though they sank many merchant ships, they could not successfully counter-blockade the United Kingdom; the Royal Navy successfully adopted convoy tactics to combat Germany's submarine counter-blockade and eventually defeated it. This was in stark contrast to Britain's successful blockade of Germany.
The first two years of war saw the Royal Navy's battleships and battlecruisers regularly "sweep" the North Sea, making sure that no German ships could get in or out. Only a few German surface ships that were already at sea, such as the famous light cruiser SMS Emden, were able to raid commerce. Even some of those that did manage to get out were hunted down by battlecruisers, as in the Battle of the Falkland Islands, December 8, 1914. The results of sweeping actions in the North Sea were battles including those of the Heligoland Bight and Dogger Bank, and German raids on the English coast, all of which were attempts by the Germans to lure out portions of the Grand Fleet and defeat the Royal Navy in detail. On May 31, 1916, a further attempt to draw British ships into battle on German terms resulted in a clash of the battlefleets in the Battle of Jutland. The German fleet withdrew to port after two short encounters with the British fleet. Less than two months later, the Germans once again attempted to draw portions of the Grand Fleet into battle. The resulting Action of 19 August 1916 proved inconclusive. This reinforced German determination not to engage in a fleet-to-fleet battle.
In the other naval theatres there were no decisive pitched battles. In the Black Sea, engagement between Russian and Ottoman battleships was restricted to skirmishes. In the Baltic Sea, action was largely limited to the raiding of convoys, and the laying of defensive minefields; the only significant clash of battleship squadrons there was the Battle of Moon Sound at which one Russian pre-dreadnought was lost. The Adriatic was in a sense the mirror of the North Sea: the Austro-Hungarian dreadnought fleet remained bottled up by the British and French blockade. And in the Mediterranean, the most important use of battleships was in support of the amphibious assault on Gallipoli.
In September 1914, the threat posed to surface ships by German U-boats was confirmed by successful attacks on British cruisers, including the sinking of three British armored cruisers by the German submarine SM U-9 in less than an hour. The British Super-dreadnought HMS Audacious soon followed suit as she struck a mine laid by a German U-boat in October 1914 and sank. The threat that German U-boats posed to British dreadnoughts was enough to cause the Royal Navy to change their strategy and tactics in the North Sea to reduce the risk of U-boat attack. Further near-misses from submarine attacks on battleships and casualties amongst cruisers led to growing concern in the Royal Navy about the vulnerability of battleships.
As the war wore on, however, it turned out that while submarines did prove to be a very dangerous threat to older pre-dreadnought battleships – as shown by the sinking of Mesûdiye, caught in the Dardanelles by a British submarine, the torpedoing of HMS Majestic and HMS Triumph by U-21, and the losses of HMS Formidable, HMS Cornwallis and HMS Britannia – the threat they posed to dreadnought battleships proved to have been largely a false alarm. HMS Audacious turned out to be the only dreadnought sunk by a submarine in World War I. While battleships were never intended for anti-submarine warfare, there was one instance of a submarine being sunk by a dreadnought battleship: HMS Dreadnought rammed and sank the German submarine U-29 on March 18, 1915, off the Moray Firth.
Whilst the escape of the German fleet from the superior British firepower at Jutland was effected by the German cruisers and destroyers successfully turning away the British battleships, the German attempt to rely on U-boat attacks on the British fleet failed.
Torpedo boats did have some successes against battleships in World War I, as demonstrated by the sinking of the British pre-dreadnought HMS Goliath by Muâvenet-i Millîye during the Dardanelles Campaign and the destruction of the Austro-Hungarian dreadnought SMS Szent István by Italian motor torpedo boats in June 1918. In large fleet actions, however, destroyers and torpedo boats were usually unable to get close enough to the battleships to damage them. The only battleship sunk in a fleet action by either torpedo boats or destroyers was the obsolescent German pre-dreadnought SMS Pommern. She was sunk by destroyers during the night phase of the Battle of Jutland.
The German High Seas Fleet, for their part, were determined not to engage the British without the assistance of submarines; and since the submarines were needed more for raiding commercial traffic, the fleet stayed in port for much of the war.
For many years, Germany simply had no battleships. The Armistice with Germany required that most of the High Seas Fleet be disarmed and interned in a neutral port; largely because no neutral port could be found, the ships remained in British custody in Scapa Flow, Scotland. The Treaty of Versailles specified that the ships should be handed over to the British. Instead, most of them were scuttled by their German crews on June 21, 1919, just before the signature of the peace treaty. The treaty also limited the German Navy, and prevented Germany from building or possessing any capital ships.
The inter-war period saw the battleship subjected to strict international limitations to prevent a costly arms race breaking out.
While the victors were not limited by the Treaty of Versailles, many of the major naval powers were crippled after the war. Faced with the prospect of a naval arms race against the United Kingdom and Japan, which would in turn have led to a possible Pacific war, the United States was keen to conclude the Washington Naval Treaty of 1922. This treaty limited the number and size of battleships that each major nation could possess, and required Britain to accept parity with the U.S. and to abandon the British alliance with Japan. The Washington treaty was followed by a series of other naval treaties, including the First Geneva Naval Conference (1927), the First London Naval Treaty (1930), the Second Geneva Naval Conference (1932), and finally the Second London Naval Treaty (1936), which all set limits on major warships. These treaties became effectively obsolete on September 1, 1939, at the beginning of World War II, but the ship classifications that had been agreed upon still apply. The treaty limitations meant that fewer new battleships were launched in 1919–1939 than in 1905–1914. The treaties also inhibited development by imposing upper limits on the weights of ships. Designs like the projected British N3-class battleship, the first American South Dakota class, and the Japanese Kii class—all of which continued the trend to larger ships with bigger guns and thicker armor—never got off the drawing board. Those designs which were commissioned during this period were referred to as treaty battleships.
As early as 1914, the British Admiral Percy Scott predicted that battleships would soon be made irrelevant by aircraft. By the end of World War I, aircraft had successfully adopted the torpedo as a weapon. In 1921 the Italian general and air theorist Giulio Douhet completed a hugely influential treatise on strategic bombing titled The Command of the Air, which foresaw the dominance of air power over naval units.
In the 1920s, General Billy Mitchell of the United States Army Air Service, believing that air power had rendered navies around the world obsolete, testified before Congress that "1,000 bombardment airplanes can be built and operated for about the price of one battleship" and that a squadron of these bombers could sink a battleship, making for more efficient use of government funds. This infuriated the U.S. Navy, but Mitchell was nevertheless allowed to conduct a careful series of bombing tests alongside Navy and Marine bombers. In 1921, he bombed and sank numerous ships, including the "unsinkable" German World War I battleship SMS Ostfriesland and the American pre-dreadnought Alabama.
Although Mitchell had required "war-time conditions", the ships sunk were obsolete, stationary, defenseless, and had no damage control. The sinking of Ostfriesland was accomplished by violating an agreement that would have allowed Navy engineers to examine the effects of various munitions: Mitchell's airmen disregarded the rules and sank the ship within minutes in a coordinated attack. The stunt made headlines, and Mitchell declared, "No surface vessels can exist wherever air forces acting from land bases are able to attack them." While far from conclusive, Mitchell's test was significant because it put the proponents of the battleship on the defensive against naval aviation. Rear Admiral William A. Moffett used public relations against Mitchell to make headway toward the expansion of the U.S. Navy's nascent aircraft carrier program.
The Royal Navy, United States Navy, and Imperial Japanese Navy extensively upgraded and modernized their World War I–era battleships during the 1930s. Among the new features were increased tower height and stability for the optical rangefinder equipment (for gunnery control), more armor (especially around turrets) to protect against plunging fire and aerial bombing, and additional anti-aircraft weapons. Some British ships received a large block superstructure nicknamed the "Queen Anne's castle", as in Queen Elizabeth and Warspite, a design later carried over to the conning towers of the King George V-class fast battleships. External bulges were added both to improve buoyancy, counteracting the weight increases, and to provide underwater protection against mines and torpedoes. The Japanese rebuilt all of their battleships, plus their battlecruisers, with distinctive "pagoda" structures, though Hiei received a more modern bridge tower that would influence the new Yamato class. Bulges were fitted, including steel tube arrays, to improve both underwater and vertical protection along the waterline. The U.S. experimented with cage masts and later tripod masts, though after the Japanese attack on Pearl Harbor some of the most severely damaged ships (such as West Virginia and California) were rebuilt with tower masts, giving them an appearance similar to their Iowa-class contemporaries. Radar, which was effective beyond visual range and in complete darkness or adverse weather, was introduced to supplement optical fire control.
Even when war threatened again in the late 1930s, battleship construction did not regain the level of importance it had held in the years before World War I. The "building holiday" imposed by the naval treaties meant the capacity of dockyards worldwide had shrunk, and the strategic position had changed.
In Germany, the ambitious Plan Z for naval rearmament was abandoned in favor of a strategy of submarine warfare, supplemented by the use of battlecruisers and battleships (in particular the Bismarck class) for commerce raiding. In Britain, the most pressing need was for air defenses and convoy escorts to safeguard the civilian population from bombing or starvation, and rearmament construction plans consisted of five ships of the King George V class. It was in the Mediterranean that navies remained most committed to battleship warfare. France intended to build six battleships of the Dunkerque and Richelieu classes, and Italy four Littorio-class ships. Neither navy built significant aircraft carriers. The U.S. preferred to spend its limited funds on aircraft carriers until the South Dakota class. Japan, also prioritizing aircraft carriers, nevertheless began work on three mammoth Yamato-class ships (although the third, Shinano, was later completed as a carrier), and a planned fourth was cancelled.
At the outbreak of the Spanish Civil War, the Spanish navy included only two small dreadnought battleships, España and Jaime I. España (originally named Alfonso XIII), by then in reserve at the northwestern naval base of El Ferrol, fell into Nationalist hands in July 1936. The crew of Jaime I remained loyal to the Republic; they killed their officers, who had apparently supported Franco's attempted coup, and the ship joined the Republican Navy. Each side thus had one battleship, although the Republican Navy generally lacked experienced officers. The Spanish battleships mainly restricted themselves to mutual blockades, convoy escort duties, and shore bombardment, rarely engaging other surface units directly. In April 1937, España struck a mine laid by friendly forces and sank with little loss of life. In May 1937, Jaime I was damaged by Nationalist air attacks and a grounding incident, and was forced back to port for repairs, where she was again hit by several aerial bombs. It was then decided to tow the battleship to a more secure port, but during the tow she suffered an internal explosion that caused 300 deaths and her total loss. Several Italian and German capital ships participated in the non-intervention blockade. On May 29, 1937, two Republican aircraft managed to bomb the German pocket battleship Deutschland outside Ibiza, causing severe damage and loss of life. Admiral Scheer retaliated two days later by bombarding Almería, causing much destruction, and the resulting Deutschland incident meant the end of German and Italian participation in non-intervention.
The German battleship Schleswig-Holstein—an obsolete pre-dreadnought—fired the first shots of World War II with the bombardment of the Polish garrison at Westerplatte; and the final surrender of the Japanese Empire took place aboard a United States Navy battleship, USS Missouri. Between those two events, it had become clear that aircraft carriers were the new principal ships of the fleet and that battleships now performed a secondary role.
Battleships played a part in major engagements in the Atlantic, Pacific, and Mediterranean theaters; in the Atlantic, the Germans used their battleships as independent commerce raiders. Clashes between battleships, however, were of little strategic importance. The Battle of the Atlantic was fought between destroyers and submarines, and most of the decisive fleet clashes of the Pacific war were determined by aircraft carriers.
In the first year of the war, armored warships defied predictions that aircraft would dominate naval warfare. Scharnhorst and Gneisenau surprised and sank the aircraft carrier Glorious off western Norway in June 1940, the only time a fleet carrier was sunk by surface gunnery. In the attack on Mers-el-Kébir, British battleships opened fire with their heavy guns on the French battleships in the harbor near Oran, Algeria; the fleeing French ships were then pursued by aircraft from carriers.
The subsequent years of the war saw many demonstrations of the maturity of the aircraft carrier as a strategic naval weapon and its effectiveness against battleships. The British air attack on the Italian naval base at Taranto sank one Italian battleship and damaged two more. The same Swordfish torpedo bombers played a crucial role in sinking the German battleship Bismarck.
On December 7, 1941, the Japanese launched a surprise attack on Pearl Harbor. Within a short time, five of eight U.S. battleships were sunk or sinking, with the rest damaged. All three American aircraft carriers were at sea, however, and evaded destruction. The sinking of the British battleship Prince of Wales and the battlecruiser Repulse demonstrated the vulnerability of a battleship to air attack while at sea without sufficient air cover, settling the argument begun by Mitchell in 1921. Both warships were under way and en route to attack the Japanese amphibious force that had invaded Malaya when they were caught by Japanese land-based bombers and torpedo bombers on December 10, 1941.
At many of the early crucial battles of the Pacific, for instance Coral Sea and Midway, battleships were either absent or overshadowed as carriers launched wave after wave of planes into the attack at a range of hundreds of miles. In later battles in the Pacific, battleships primarily performed shore bombardment in support of amphibious landings and provided anti-aircraft defense as escort for the carriers. Even the largest battleships ever constructed, Japan's Yamato class, which carried a main battery of nine 18-inch (46 cm) guns and were designed as a principal strategic weapon, were never given a chance to show their potential in the decisive battleship action that figured in Japanese pre-war planning.
The last battleship confrontation in history was the Battle of Surigao Strait, on October 25, 1944, in which a numerically and technically superior American battleship group destroyed a lesser Japanese battleship group by gunfire after it had already been devastated by destroyer torpedo attacks. All but one of the American battleships in this confrontation had been damaged or sunk during the attack on Pearl Harbor and subsequently repaired and returned to service. Mississippi fired the last major-caliber salvo of the battle. In April 1945, during the battle for Okinawa, the world's most powerful battleship, Yamato, was sent out on a suicide mission against a massive U.S. force and sunk by overwhelming carrier air attack, with nearly all hands lost. The remainder of the Japanese fleet in home waters was subsequently destroyed by U.S. naval aircraft.
After World War II, several navies retained their existing battleships, but they were no longer strategically dominant military assets. It soon became apparent that they were not worth the considerable cost of construction and maintenance, and only one new battleship, HMS Vanguard, was commissioned after the war. The war had demonstrated that battleship-on-battleship engagements like Leyte Gulf or the sinking of HMS Hood were the exception rather than the rule, and with the growing role of aircraft, engagement ranges became longer and longer, making heavy gun armament irrelevant. The armor of a battleship was equally irrelevant in the face of a nuclear attack, as tactical missiles with a range of 100 kilometres (60 mi) or more could be mounted on Soviet Kildin-class destroyers and Whiskey-class submarines. By the end of the 1950s, smaller vessel classes such as destroyers, which had formerly offered no noteworthy opposition to battleships, were capable of destroying battleships from beyond the range of their heavy guns.
The remaining battleships met a variety of ends. USS Arkansas and Nagato were sunk during the testing of nuclear weapons in Operation Crossroads in 1946. Both battleships proved resistant to nuclear air burst but vulnerable to underwater nuclear explosions. The Italian battleship Giulio Cesare was taken by the Soviets as reparations and renamed Novorossiysk; she was sunk by a leftover German mine in the Black Sea on October 29, 1955. The two Andrea Doria-class ships were scrapped in 1956. The French Lorraine was scrapped in 1954, Richelieu in 1968, and Jean Bart in 1970.
The United Kingdom's four surviving King George V-class ships were scrapped in 1957, and Vanguard followed in 1960. All other surviving British battleships had been sold or broken up by 1949. The Soviet Union's Marat was scrapped in 1953, Parizhskaya Kommuna in 1957 and Oktyabrskaya Revolutsiya (back under her original name, Gangut, since 1942) in 1956–57. Brazil's Minas Geraes was scrapped in Genoa in 1953, and her sister ship São Paulo sank during a storm in the Atlantic en route to the breakers in Italy in 1951.
Argentina kept its two Rivadavia-class ships until 1956 and Chile kept Almirante Latorre (formerly HMS Canada) until 1959. The Turkish battlecruiser Yavûz (formerly SMS Goeben, launched in 1911) was scrapped in 1976 after an offer to sell her back to Germany was refused. Sweden had several small coastal-defense battleships, one of which, HSwMS Gustav V, survived until 1970. The Soviets scrapped four large incomplete cruisers in the late 1950s, while plans to build a number of new Stalingrad-class battlecruisers were abandoned following the death of Joseph Stalin in 1953. The three old German battleships Schleswig-Holstein, Schlesien, and Hessen all met similar ends. Hessen was taken over by the Soviet Union, renamed Tsel, and scrapped in 1960. Schleswig-Holstein was renamed Borodino and used as a target ship until 1960. Schlesien, too, was used as a target ship before being broken up between 1952 and 1957.
The Iowa-class battleships gained a new lease on life in the U.S. Navy as fire support ships: radar-directed, computer-controlled gunfire could be aimed at targets with pinpoint accuracy. The U.S. recommissioned all four Iowa-class battleships for the Korean War and New Jersey for the Vietnam War. They were used primarily for shore bombardment, with New Jersey firing nearly 6,000 16-inch shells and over 14,000 5-inch projectiles during her tour on the gunline, seven times more rounds against shore targets in Vietnam than she had fired in the Second World War.
As part of Navy Secretary John F. Lehman's effort to build a 600-ship Navy in the 1980s, and in response to the commissioning of Kirov by the Soviet Union, the United States again recommissioned all four Iowa-class battleships. On several occasions, battleships served as support ships in carrier battle groups or led their own battleship battle group. They were modernized to carry Tomahawk land-attack missiles (TLAMs), with New Jersey seeing action bombarding Lebanon in 1983 and 1984, while Missouri and Wisconsin fired their 16-inch (406 mm) guns at land targets and launched missiles during Operation Desert Storm in 1991. Wisconsin served as the TLAM strike commander for the Persian Gulf, directing the sequence of launches that marked the opening of Desert Storm and firing a total of 24 TLAMs during the first two days of the campaign. The primary threat to the battleships was Iraqi shore-based surface-to-surface missiles; Missouri was targeted by two Iraqi Silkworm missiles, one of which missed while the other was intercepted by the British destroyer HMS Gloucester.
After Indiana was stricken in 1962, the four Iowa-class ships were the only battleships in commission or reserve anywhere in the world. There was an extended debate when the four Iowa-class ships were finally decommissioned in the early 1990s. USS Iowa and USS Wisconsin were maintained to a standard whereby they could be rapidly returned to service as fire support vessels, pending the development of a superior fire support vessel. These last two battleships were finally stricken from the U.S. Naval Vessel Register in 2006. The Military Balance and Russian Foreign Military Review state that the U.S. Navy listed one battleship in the reserve (Naval Inactive Fleet/Reserve 2nd Turn) in 2010; The Military Balance states that the U.S. Navy listed no battleships in the reserve in 2014.
When the last Iowa-class ship was finally stricken from the Naval Vessel Register, no battleships remained in service or in reserve with any navy worldwide. A number are preserved as museum ships, either afloat or in drydock. The U.S. has eight battleships on display: Massachusetts, North Carolina, Alabama, Iowa, New Jersey, Missouri, Wisconsin, and Texas. Missouri and New Jersey are museums at Pearl Harbor and Camden, New Jersey, respectively. Iowa is on display as an educational attraction at the Los Angeles Waterfront in San Pedro, California. Wisconsin serves as a museum ship in Norfolk, Virginia. Massachusetts, which has the distinction of never having lost a man during service, is on display at the Battleship Cove naval museum in Fall River, Massachusetts. Texas, the first battleship turned into a museum, is normally on display at the San Jacinto Battleground State Historic Site near Houston, but as of 2021 is closed for repairs. North Carolina is on display in Wilmington, North Carolina. Alabama is on display in Mobile, Alabama. The wreck of Arizona, sunk during the Pearl Harbor attack in 1941, is designated a historical landmark and national gravesite. The wreck of Utah, also sunk during the attack, is likewise a historic landmark.
The only other 20th-century battleship on display is the Japanese pre-dreadnought Mikasa. A replica of the ironclad battleship Dingyuan was built by the Weihai Port Bureau in 2003 and is on display in Weihai, China.
Battleships previously used as museum ships included USS Oregon (BB-3), SMS Tegetthoff, and SMS Erzherzog Franz Ferdinand.
Battleships were the embodiment of sea power. For the American naval officer Alfred Thayer Mahan and his followers, a strong navy was vital to the success of a nation, and control of the seas was vital for the projection of force on land and overseas. Mahan's theory, proposed in his 1890 work The Influence of Sea Power Upon History, 1660–1783, dictated that the role of the battleship was to sweep the enemy from the seas. While the work of escorting, blockading, and raiding might be done by cruisers or smaller vessels, the presence of the battleship was a potential threat to any convoy escorted by vessels other than capital ships. This concept of "potential threat" can be generalized further: the mere existence (as opposed to presence) of a powerful fleet could tie the opposing fleet down. This concept came to be known as the "fleet in being", an idle yet mighty fleet forcing others to spend time, resources, and effort to actively guard against it.
Mahan went on to argue that victory could only be achieved by engagements between battleships, a position that came to be known in some navies as the decisive battle doctrine, and that targeting merchant ships (commerce raiding or guerre de course, as posited by the Jeune École) could never succeed.
Mahan was highly influential in naval and political circles throughout the age of the battleship, calling for a large fleet of the most powerful battleships possible. His work, developed in the late 1880s, had acquired great international influence on naval strategy by the end of the 1890s and was ultimately adopted by many major navies (notably the British, American, German, and Japanese). The strength of Mahanian opinion was important in the development of the battleship arms races, and equally important in the agreement of the powers to limit battleship numbers in the interwar era.
The "fleet in being" suggested battleships could simply by their existence tie down superior enemy resources. This in turn was believed to be able to tip the balance of a conflict even without a battle. This suggested even for inferior naval powers a battleship fleet could have important strategic effect.
While the role of battleships in both World Wars reflected Mahanian doctrine, the details of battleship deployment were more complex. Unlike ships of the line, which predated efficient mines and torpedoes, the battleships of the late 19th and early 20th centuries were significantly vulnerable to these weapons, which could be wielded by relatively small and inexpensive craft. The Jeune École doctrine of the 1870s and 1880s recommended placing torpedo boats alongside battleships; these would hide behind the larger ships until gun-smoke obscured visibility enough for them to dart out and fire their torpedoes. While this tactic was made less effective by the development of smokeless propellant, the threat from more capable torpedo craft (later including submarines) remained. By the 1890s, the Royal Navy had developed the first destroyers, initially designed to intercept and drive off attacking torpedo boats. During the First World War and afterwards, battleships were rarely deployed without a protective screen of destroyers.
Battleship doctrine emphasized the concentration of the battlegroup. For this concentrated force to bring its power to bear on a reluctant opponent (or to avoid an encounter with a stronger enemy fleet), battlefleets needed some means of locating enemy ships beyond horizon range. This was provided by scouting forces; at various stages battlecruisers, cruisers, destroyers, airships, submarines, and aircraft were all used. (With the development of radio, direction finding and traffic analysis also came into play, so that even shore stations, broadly speaking, joined the battlegroup.) Thus, for most of their history, battleships operated surrounded by squadrons of destroyers and cruisers. The North Sea campaign of the First World War illustrates how, despite this support, the threat of mine and torpedo attack, together with a failure to integrate or appreciate the capabilities of new techniques, seriously inhibited the operations of the Royal Navy Grand Fleet, the greatest battleship fleet of its time.
The presence of battleships had a great psychological and diplomatic impact. Much as the possession of nuclear weapons does today, the ownership of battleships served to enhance a nation's capacity for force projection.
Even during the Cold War, the psychological impact of a battleship was significant. In 1946, USS Missouri was dispatched to deliver the remains of the Turkish ambassador, and her presence in Turkish and Greek waters staved off a possible Soviet thrust into the Balkan region. In September 1983, when Druze militia in Lebanon's Shouf Mountains fired upon U.S. Marine peacekeepers, the arrival of USS New Jersey stopped the firing. Gunfire from New Jersey later killed militia leaders.
Battleships were the largest and most complex, and hence the most expensive, warships of their time; as a result, the value of investment in battleships has always been contested. As the French politician Étienne Lamy wrote in 1879, "The construction of battleships is so costly, their effectiveness so uncertain and of such short duration, that the enterprise of creating an armored fleet seems to leave fruitless the perseverance of a people". The Jeune École school of thought of the 1870s and 1880s sought alternatives to the crippling expense and debatable utility of a conventional battlefleet. It proposed what would nowadays be termed a sea denial strategy, based on fast, long-ranged cruisers for commerce raiding and torpedo boat flotillas to attack enemy ships attempting to blockade French ports. The ideas of the Jeune École were ahead of their time; not until the 20th century were efficient mines, torpedoes, submarines, and aircraft available to allow similar ideas to be implemented effectively. The determination of powers such as Germany to build battlefleets with which to confront much stronger rivals has been criticized by historians, who emphasize the futility of investment in a battlefleet that has no chance of matching its opponent in an actual battle.
"text": "The last battleship confrontation in history was the Battle of Surigao Strait, on October 25, 1944, in which a numerically and technically superior American battleship group destroyed a lesser Japanese battleship group by gunfire after it had already been devastated by destroyer torpedo attacks. All but one of the American battleships in this confrontation had previously been sunk during the attack on Pearl Harbor and subsequently raised and repaired. Mississippi fired the last major-caliber salvo of this battle. In April 1945, during the battle for Okinawa, the world's most powerful battleship, the Yamato, was sent out on a suicide mission against a massive U.S. force and sunk by overwhelming pressure from carrier aircraft with nearly all hands lost. After that, Japanese fleet remaining in the mainland was also destroyed by the US naval air force.",
"title": "History"
},
{
"paragraph_id": 54,
"text": "After World War II, several navies retained their existing battleships, but they were no longer strategically dominant military assets. It soon became apparent that they were no longer worth the considerable cost of construction and maintenance and only one new battleship was commissioned after the war, HMS Vanguard. During the war it had been demonstrated that battleship-on-battleship engagements like Leyte Gulf or the sinking of HMS Hood were the exception and not the rule, and with the growing role of aircraft engagement ranges were becoming longer and longer, making heavy gun armament irrelevant. The armor of a battleship was equally irrelevant in the face of a nuclear attack as tactical missiles with a range of 100 kilometres (60 mi) or more could be mounted on the Soviet Kildin-class destroyer and Whiskey-class submarines. By the end of the 1950s, smaller vessel classes such as destroyers, which formerly offered no noteworthy opposition to battleships, now were capable of eliminating battleships from outside the range of the ship's heavy guns.",
"title": "History"
},
{
"paragraph_id": 55,
"text": "The remaining battleships met a variety of ends. USS Arkansas and Nagato were sunk during the testing of nuclear weapons in Operation Crossroads in 1946. Both battleships proved resistant to nuclear air burst but vulnerable to underwater nuclear explosions. The Italian battleship Giulio Cesare was taken by the Soviets as reparations and renamed Novorossiysk; she was sunk by a leftover German mine in the Black Sea on October 29, 1955. The two Andrea Doria-class ships were scrapped in 1956. The French Lorraine was scrapped in 1954, Richelieu in 1968, and Jean Bart in 1970.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "The United Kingdom's four surviving King George V-class ships were scrapped in 1957, and Vanguard followed in 1960. All other surviving British battleships had been sold or broken up by 1949. The Soviet Union's Marat was scrapped in 1953, Parizhskaya Kommuna in 1957 and Oktyabrskaya Revolutsiya (back under her original name, Gangut, since 1942) in 1956–57. Brazil's Minas Geraes was scrapped in Genoa in 1953, and her sister ship São Paulo sank during a storm in the Atlantic en route to the breakers in Italy in 1951.",
"title": "History"
},
{
"paragraph_id": 57,
"text": "Argentina kept its two Rivadavia-class ships until 1956 and Chile kept Almirante Latorre (formerly HMS Canada) until 1959. The Turkish battlecruiser Yavûz (formerly SMS Goeben, launched in 1911) was scrapped in 1976 after an offer to sell her back to Germany was refused. Sweden had several small coastal-defense battleships, one of which, HSwMS Gustav V, survived until 1970. The Soviets scrapped four large incomplete cruisers in the late 1950s, whilst plans to build a number of new Stalingrad-class battlecruisers were abandoned following the death of Joseph Stalin in 1953. The three old German battleships Schleswig-Holstein, Schlesien, and Hessen all met similar ends. Hessen was taken over by the Soviet Union and renamed Tsel. She was scrapped in 1960. Schleswig-Holstein was renamed Borodino, and was used as a target ship until 1960. Schlesien, too, was used as a target ship. She was broken up between 1952 and 1957.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "The Iowa-class battleships gained a new lease of life in the U.S. Navy as fire support ships. Radar and computer-controlled gunfire could be aimed with pinpoint accuracy to target. The U.S. recommissioned all four Iowa-class battleships for the Korean War and the New Jersey for the Vietnam War. These were primarily used for shore bombardment, New Jersey firing nearly 6,000 rounds of 16 inch shells and over 14,000 rounds of 5 inch projectiles during her tour on the gunline, seven times more rounds against shore targets in Vietnam than she had fired in the Second World War.",
"title": "History"
},
{
"paragraph_id": 59,
"text": "As part of Navy Secretary John F. Lehman's effort to build a 600-ship Navy in the 1980s, and in response to the commissioning of Kirov by the Soviet Union, the United States recommissioned all four Iowa-class battleships. On several occasions, battleships were support ships in carrier battle groups, or led their own battleship battle group. These were modernized to carry Tomahawk (TLAM) missiles, with New Jersey seeing action bombarding Lebanon in 1983 and 1984, while Missouri and Wisconsin fired their 16-inch (406 mm) guns at land targets and launched missiles during Operation Desert Storm in 1991. Wisconsin served as the TLAM strike commander for the Persian Gulf, directing the sequence of launches that marked the opening of Desert Storm, firing a total of 24 TLAMs during the first two days of the campaign. The primary threat to the battleships were Iraqi shore-based surface-to-surface missiles; Missouri was targeted by two Iraqi Silkworm missiles, with one missing and another being intercepted by the British destroyer HMS Gloucester.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "After Indiana was stricken in 1962, the four Iowa-class ships were the only battleships in commission or reserve anywhere in the world. There was an extended debate when the four Iowa ships were finally decommissioned in the early 1990s. USS Iowa and USS Wisconsin were maintained to a standard whereby they could be rapidly returned to service as fire support vessels, pending the development of a superior fire support vessel. These last two battleships were finally stricken from the U.S. Naval Vessel Register in 2006. The Military Balance and Russian Foreign Military Review states the U.S. Navy listed one battleship in the reserve (Naval Inactive Fleet/Reserve 2nd Turn) in 2010. The Military Balance states the U.S. Navy listed no battleships in the reserve in 2014.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "When the last Iowa-class ship was finally stricken from the Naval Vessel Registry, no battleships remained in service or in reserve with any navy worldwide. A number are preserved as museum ships, either afloat or in drydock. The U.S. has eight battleships on display: Massachusetts, North Carolina, Alabama, Iowa, New Jersey, Missouri, Wisconsin, and Texas. Missouri and New Jersey are museums at Pearl Harbor and Camden, New Jersey, respectively. Iowa is on display as an educational attraction at the Los Angeles Waterfront in San Pedro, California. Wisconsin now serves as a museum ship in Norfolk, Virginia. Massachusetts, which has the distinction of never having lost a man during service, is on display at the Battleship Cove naval museum in Fall River, Massachusetts. Texas, the first battleship turned into a museum, is normally on display at the San Jacinto Battleground State Historic Site, near Houston, but as of 2021 is closed for repairs. North Carolina is on display in Wilmington, North Carolina. Alabama is on display in Mobile, Alabama. The wreck of Arizona, sunk during the Pearl Harbor attack in 1941, is designated a historical landmark and national gravesite. The wreck of Utah, also sunk during the attack, is a historic landmark.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "The only other 20th-century battleship on display is the Japanese pre-dreadnought Mikasa. A replica of the ironclad battleship Dingyuan was built by the Weihai Port Bureau in 2003 and is on display in Weihai, China.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "Former battleships that were previously used as museum ships included USS Oregon (BB-3), SMS Tegetthoff, and SMS Erzherzog Franz Ferdinand.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "Battleships were the embodiment of sea power. For American naval officer Alfred Thayer Mahan and his followers, a strong navy was vital to the success of a nation, and control of the seas was vital for the projection of force on land and overseas. Mahan's theory, proposed in The Influence of Sea Power Upon History, 1660–1783 of 1890, dictated the role of the battleship was to sweep the enemy from the seas. While the work of escorting, blockading, and raiding might be done by cruisers or smaller vessels, the presence of the battleship was a potential threat to any convoy escorted by any vessels other than capital ships. This concept of \"potential threat\" can be further generalized to the mere existence (as opposed to presence) of a powerful fleet tying the opposing fleet down. This concept came to be known as a \"fleet in being\"—an idle yet mighty fleet forcing others to spend time, resource and effort to actively guard against it.",
"title": "Strategy and doctrine"
},
{
"paragraph_id": 65,
"text": "Mahan went on to say victory could only be achieved by engagements between battleships, which came to be known as the decisive battle doctrine in some navies, while targeting merchant ships (commerce raiding or guerre de course, as posited by the Jeune École) could never succeed.",
"title": "Strategy and doctrine"
},
{
"paragraph_id": 66,
"text": "Mahan was highly influential in naval and political circles throughout the age of the battleship, calling for a large fleet of the most powerful battleships possible. Mahan's work developed in the late 1880s, and by the end of the 1890s it had acquired much international influence on naval strategy; in the end, it was adopted by many major navies (notably the British, American, German, and Japanese). The strength of Mahanian opinion was important in the development of the battleships arms races, and equally important in the agreement of the Powers to limit battleship numbers in the interwar era.",
"title": "Strategy and doctrine"
},
{
"paragraph_id": 67,
"text": "The \"fleet in being\" suggested battleships could simply by their existence tie down superior enemy resources. This in turn was believed to be able to tip the balance of a conflict even without a battle. This suggested even for inferior naval powers a battleship fleet could have important strategic effect.",
"title": "Strategy and doctrine"
},
{
"paragraph_id": 68,
"text": "While the role of battleships in both World Wars reflected Mahanian doctrine, the details of battleship deployment were more complex. Unlike ships of the line, the battleships of the late 19th and early 20th centuries had significant vulnerability to torpedoes and mines—because efficient mines and torpedoes did not exist before that—which could be used by relatively small and inexpensive craft. The Jeune École doctrine of the 1870s and 1880s recommended placing torpedo boats alongside battleships; these would hide behind the larger ships until gun-smoke obscured visibility enough for them to dart out and fire their torpedoes. While this tactic was made less effective by the development of smokeless propellant, the threat from more capable torpedo craft (later including submarines) remained. By the 1890s, the Royal Navy had developed the first destroyers, which were initially designed to intercept and drive off any attacking torpedo boats. During the First World War and subsequently, battleships were rarely deployed without a protective screen of destroyers.",
"title": "Strategy and doctrine"
},
{
"paragraph_id": 69,
"text": "Battleship doctrine emphasized the concentration of the battlegroup. In order for this concentrated force to be able to bring its power to bear on a reluctant opponent (or to avoid an encounter with a stronger enemy fleet), battlefleets needed some means of locating enemy ships beyond horizon range. This was provided by scouting forces; at various stages battlecruisers, cruisers, destroyers, airships, submarines and aircraft were all used. (With the development of radio, direction finding and traffic analysis would come into play, as well, so even shore stations, broadly speaking, joined the battlegroup.) So for most of their history, battleships operated surrounded by squadrons of destroyers and cruisers. The North Sea campaign of the First World War illustrates how, despite this support, the threat of mine and torpedo attack, and the failure to integrate or appreciate the capabilities of new techniques, seriously inhibited the operations of the Royal Navy Grand Fleet, the greatest battleship fleet of its time.",
"title": "Strategy and doctrine"
},
{
"paragraph_id": 70,
"text": "The presence of battleships had a great psychological and diplomatic impact. Similar to possessing nuclear weapons today, the ownership of battleships served to enhance a nation's force projection.",
"title": "Strategy and doctrine"
},
{
"paragraph_id": 71,
"text": "Even during the Cold War, the psychological impact of a battleship was significant. In 1946, USS Missouri was dispatched to deliver the remains of the ambassador from Turkey, and her presence in Turkish and Greek waters staved off a possible Soviet thrust into the Balkan region. In September 1983, when Druze militia in Lebanon's Shouf Mountains fired upon U.S. Marine peacekeepers, the arrival of USS New Jersey stopped the firing. Gunfire from New Jersey later killed militia leaders.",
"title": "Strategy and doctrine"
},
{
"paragraph_id": 72,
"text": "Battleships were the largest and most complex, and hence the most expensive warships of their time; as a result, the value of investment in battleships has always been contested. As the French politician Etienne Lamy wrote in 1879, \"The construction of battleships is so costly, their effectiveness so uncertain and of such short duration, that the enterprise of creating an armored fleet seems to leave fruitless the perseverance of a people\". The Jeune École school of thought of the 1870s and 1880s sought alternatives to the crippling expense and debatable utility of a conventional battlefleet. It proposed what would nowadays be termed a sea denial strategy, based on fast, long-ranged cruisers for commerce raiding and torpedo boat flotillas to attack enemy ships attempting to blockade French ports. The ideas of the Jeune École were ahead of their time; it was not until the 20th century that efficient mines, torpedoes, submarines, and aircraft were available that allowed similar ideas to be effectively implemented. The determination of powers such as Germany to build battlefleets with which to confront much stronger rivals has been criticized by historians, who emphasise the futility of investment in a battlefleet that has no chance of matching its opponent in an actual battle.",
"title": "Strategy and doctrine"
}
] | A battleship is a large armored warship with a main battery consisting of large-caliber guns. It dominated naval warfare in the late 19th and early 20th centuries. The term battleship came into use in the late 1880s to describe a type of ironclad warship, now referred to by historians as pre-dreadnought battleships. In 1906, the commissioning of HMS Dreadnought into the United Kingdom's Royal Navy heralded a revolution in the field of battleship design. Subsequent battleship designs, influenced by HMS Dreadnought, were referred to as "dreadnoughts", though the term eventually became obsolete as dreadnoughts became the only type of battleship in common use. Battleships were a symbol of naval dominance and national might, and for decades the battleship was a major factor in both diplomacy and military strategy. A global arms race in battleship construction began in Europe in the 1890s and culminated at the decisive Battle of Tsushima in 1905, the outcome of which significantly influenced the design of HMS Dreadnought. The launch of Dreadnought in 1906 commenced a new naval arms race. Three major fleet actions between steel battleships took place: the long-range gunnery duel at the Battle of the Yellow Sea in 1904, the decisive Battle of Tsushima in 1905 and the inconclusive Battle of Jutland in 1916, during the First World War. Jutland was the largest naval battle and the only full-scale clash of dreadnoughts of the war, and it was the last major battle in naval history fought primarily by battleships. The Naval Treaties of the 1920s and 1930s limited the number of battleships, though technical innovation in battleship design continued. Both the Allied and Axis powers built battleships during World War II, though the increasing importance of the aircraft carrier meant that the battleship played a less important role than had been expected in that conflict. The value of the battleship has been questioned, even during its heyday. There were few of the decisive fleet battles that battleship proponents expected and used to justify the vast resources spent on building battlefleets. Despite their huge firepower and protection, battleships were increasingly vulnerable to much smaller and relatively inexpensive weapons: initially the torpedo and the naval mine, and later aircraft and the guided missile. The growing range of naval engagements led to the aircraft carrier replacing the battleship as the leading capital ship during World War II, with the last battleship to be launched being HMS Vanguard in 1944. Four battleships were retained by the United States Navy until the end of the Cold War for fire support purposes and were last used in combat during the Gulf War in 1991, and then struck from the U.S. Naval Vessel Register in the 2000s. Many World War II-era battleships remain today as museum ships. | 2001-11-08T14:44:44Z | 2023-12-13T14:31:02Z | [
"Template:Sclass",
"Template:Page needed",
"Template:Cite web",
"Template:Warship types of the 19th & 20th centuries",
"Template:USS",
"Template:SMS",
"Template:Cn",
"Template:Navy",
"Template:ISBN",
"Template:Cite book",
"Template:Naval Vessel Register URL",
"Template:Short description",
"Template:Circa",
"Template:Isbn",
"Template:Authority control",
"Template:HMS",
"Template:Cite journal",
"Template:Unreferenced section",
"Template:Naval",
"Template:Featured article",
"Template:Ship",
"Template:Reflist",
"Template:Refend",
"Template:'",
"Template:Portal",
"Template:TCG",
"Template:Refbegin",
"Template:BBhistory",
"Template:Main",
"Template:Sclass2",
"Template:Convert",
"Template:SMU",
"Template:Citation",
"Template:Use American English",
"Template:Use mdy dates",
"Template:See also",
"Template:HSwMS",
"Template:Webarchive",
"Template:Sister project links",
"Template:About",
"Template:Citation needed"
] | https://en.wikipedia.org/wiki/Battleship |
4,055 | Bifröst | In Norse mythology, Bifröst (/ˈbɪvrɒst/), also called Bilröst, is a burning rainbow bridge that reaches between Midgard (Earth) and Asgard, the realm of the gods. The bridge is attested as Bilröst in the Poetic Edda, compiled in the 13th century from earlier traditional sources, and as Bifröst in the Prose Edda, written in the 13th century by Snorri Sturluson, and in the poetry of skalds. Both the Poetic Edda and the Prose Edda alternately refer to the bridge as Ásbrú (Old Norse "Æsir's bridge").
According to the Prose Edda, the bridge ends in heaven at Himinbjörg, the residence of the god Heimdall, who guards it from the jötnar. The bridge's destruction during Ragnarök by the forces of Muspell is foretold. Scholars have proposed that the bridge may have originally represented the Milky Way and have noted parallels between the bridge and another bridge in Norse mythology, Gjallarbrú.
Scholar Andy Orchard suggests that Bifröst may mean "shimmering path." He notes that the first element of Bilröst—bil (meaning "a moment")—"suggests the fleeting nature of the rainbow," which he connects to the first element of Bifröst—the Old Norse verb bifa (meaning "to shimmer" or "to shake")—noting that the element evokes notions of the "lustrous sheen" of the bridge. Austrian Germanist Rudolf Simek says that Bifröst either means "the swaying road to heaven" (also citing bifa) or, if Bilröst is the original form of the two (which Simek says is likely), "the fleetingly glimpsed rainbow" (possibly connected to bil, perhaps meaning "moment, weak point").
Two poems in the Poetic Edda and two books in the Prose Edda provide information about the bridge:
In the Poetic Edda, the bridge is mentioned in the poems Grímnismál and Fáfnismál, where it is referred to as Bilröst. In one of two stanzas in the poem Grímnismál that mentions the bridge, Grímnir (the god Odin in disguise) provides the young Agnarr with cosmological knowledge, including that Bilröst is the best of bridges. Later in Grímnismál, Grímnir notes that Asbrú "burns all with flames" and that, every day, the god Thor wades through the waters of Körmt and Örmt and the two Kerlaugar:
In Fáfnismál, the dying wyrm Fafnir tells the hero Sigurd that, during the events of Ragnarök, bearing spears, gods will meet at Óskópnir. From there, the gods will cross Bilröst, which will break apart as they cross over it, causing their horses to dredge through an immense river.
The bridge is mentioned in the Prose Edda books Gylfaginning and Skáldskaparmál, where it is referred to as Bifröst. In chapter 13 of Gylfaginning, Gangleri (King Gylfi in disguise) asks the enthroned figure of High what way exists between heaven and earth. Laughing, High replies that the question isn't an intelligent one, and goes on to explain that the gods built a bridge from heaven to earth. He incredulously asks Gangleri if he has not heard the story before. High says that Gangleri must have seen it, and notes that Gangleri may call it a rainbow. High says that the bridge consists of three colors, has great strength, "and is built with art and skill to a greater extent than other constructions."
High notes that, although the bridge is strong, it will break when "Muspell's lads" attempt to cross it, and their horses will have to make do with swimming over "great rivers." Gangleri says that it doesn't seem that the gods "built the bridge in good faith if it is liable to break, considering that they can do as they please." High responds that the gods do not deserve blame for the breaking of the bridge, for "there is nothing in this world that will be secure when Muspell's sons attack."
In chapter 15 of Gylfaginning, Just-As-High says that Bifröst is also called Asbrú, and that every day the gods ride their horses across it (with the exception of Thor, who instead wades through the boiling waters of the rivers Körmt and Örmt) to reach Urðarbrunnr, a holy well where the gods have their court. As a reference, Just-As-High quotes the second of the two stanzas in Grímnismál that mention the bridge (see above). Gangleri asks if fire burns over Bifröst. High says that the red in the bridge is burning fire, and, without it, the frost jotnar and mountain jotnar would "go up into heaven" if anyone who wanted could cross Bifröst. High adds that, in heaven, "there are many beautiful places" and that "everywhere there has divine protection around it."
In chapter 17, High tells Gangleri that the location of Himinbjörg "stands at the edge of heaven where Bifrost reaches heaven." While describing the god Heimdallr in chapter 27, High says that Heimdallr lives in Himinbjörg by Bifröst, and guards the bridge from mountain jotnar while sitting at the edge of heaven. In chapter 34, High quotes the first of the two Grímnismál stanzas that mention the bridge. In chapter 51, High foretells the events of Ragnarök. High says that, during Ragnarök, the sky will split open, and from the split will ride forth the "sons of Muspell". When the "sons of Muspell" ride over Bifröst it will break, "as was said above."
In the Prose Edda book Skáldskaparmál, the bridge receives a single mention. In chapter 16, a work by the 10th-century skald Úlfr Uggason is quoted, in which Bifröst is referred to as "the powers' way."
In his translation of the Prose Edda, Henry Adams Bellows comments that the Grímnismál stanza mentioning Thor and the bridge stanza may mean that "Thor has to go on foot in the last days of the destruction, when the bridge is burning. Another interpretation, however, is that when Thor leaves the heavens (i.e., when a thunder-storm is over) the rainbow-bridge becomes hot in the sun."
John Lindow points to a parallel between Bifröst, which he notes is "a bridge between earth and heaven, or earth and the world of the gods", and the bridge Gjallarbrú, "a bridge between earth and the underworld, or earth and the world of the dead." Several scholars have proposed that Bifröst may represent the Milky Way.
In the final scene of Richard Wagner's 1869 opera Das Rheingold, the god Froh summons a rainbow bridge, over which the gods cross to enter Valhalla.
The Bifröst appears in comic books associated with the Marvel Comics character Thor and in subsequent adaptations of those comic books. In the Marvel Cinematic Universe film Thor, Jane Foster describes the Bifröst as an Einstein–Rosen bridge, which functions as a means of transportation across space in a short period of time. | [
{
"paragraph_id": 0,
"text": "In Norse mythology, Bifröst (/ˈbɪvrɒst/ ), also called Bilröst, is a burning rainbow bridge that reaches between Midgard (Earth) and Asgard, the realm of the gods. The bridge is attested as Bilröst in the Poetic Edda; compiled in the 13th century from earlier traditional sources, and as Bifröst in the Prose Edda; written in the 13th century by Snorri Sturluson, and in the poetry of skalds. Both the Poetic Edda and the Prose Edda alternately refer to the bridge as Ásbrú (Old Norse \"Æsir's bridge\").",
"title": ""
},
{
"paragraph_id": 1,
"text": "According to the Prose Edda, the bridge ends in heaven at Himinbjörg, the residence of the god Heimdall, who guards it from the jötnar. The bridge's destruction during Ragnarök by the forces of Muspell is foretold. Scholars have proposed that the bridge may have originally represented the Milky Way and have noted parallels between the bridge and another bridge in Norse mythology, Gjallarbrú.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Scholar Andy Orchard suggests that Bifröst may mean \"shimmering path.\" He notes that the first element of Bilröst—bil (meaning \"a moment\")—\"suggests the fleeting nature of the rainbow,\" which he connects to the first element of Bifröst—the Old Norse verb bifa (meaning \"to shimmer\" or \"to shake\")—noting that the element evokes notions of the \"lustrous sheen\" of the bridge. Austrian Germanist Rudolf Simek says that Bifröst either means \"the swaying road to heaven\" (also citing bifa) or, if Bilröst is the original form of the two (which Simek says is likely), \"the fleetingly glimpsed rainbow\" (possibly connected to bil, perhaps meaning \"moment, weak point\").",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "Two poems in the Poetic Edda and two books in the Prose Edda provide information about the bridge:",
"title": "Attestations"
},
{
"paragraph_id": 4,
"text": "In the Poetic Edda, the bridge is mentioned in the poems Grímnismál and Fáfnismál, where it is referred to as Bilröst. In one of two stanzas in the poem Grímnismál that mentions the bridge, Grímnir (the god Odin in disguise) provides the young Agnarr with cosmological knowledge, including that Bilröst is the best of bridges. Later in Grímnismál, Grímnir notes that Asbrú \"burns all with flames\" and that, every day, the god Thor wades through the waters of Körmt and Örmt and the two Kerlaugar:",
"title": "Attestations"
},
{
"paragraph_id": 5,
"text": "In Fáfnismál, the dying wyrm Fafnir tells the hero Sigurd that, during the events of Ragnarök, bearing spears, gods will meet at Óskópnir. From there, the gods will cross Bilröst, which will break apart as they cross over it, causing their horses to dredge through an immense river.",
"title": "Attestations"
},
{
"paragraph_id": 6,
"text": "The bridge is mentioned in the Prose Edda books Gylfaginning and Skáldskaparmál, where it is referred to as Bifröst. In chapter 13 of Gylfaginning, Gangleri (King Gylfi in disguise) asks the enthroned figure of High what way exists between heaven and earth. Laughing, High replies that the question isn't an intelligent one, and goes on to explain that the gods built a bridge from heaven and earth. He incredulously asks Gangleri if he has not heard the story before. High says that Gangleri must have seen it, and notes that Gangleri may call it a rainbow. High says that the bridge consists of three colors, has great strength, \"and is built with art and skill to a greater extent than other constructions.\"",
"title": "Attestations"
},
{
"paragraph_id": 7,
"text": "High notes that, although the bridge is strong, it will break when \"Muspell's lads\" attempt to cross it, and their horses will have to make do with swimming over \"great rivers.\" Gangleri says that it doesn't seem that the gods \"built the bridge in good faith if it is liable to break, considering that they can do as they please.\" High responds that the gods do not deserve blame for the breaking of the bridge, for \"there is nothing in this world that will be secure when Muspell's sons attack.\"",
"title": "Attestations"
},
{
"paragraph_id": 8,
"text": "In chapter 15 of Gylfaginning, Just-As-High says that Bifröst is also called Asbrú, and that every day the gods ride their horses across it (with the exception of Thor, who instead wades through the boiling waters of the rivers Körmt and Örmt) to reach Urðarbrunnr, a holy well where the gods have their court. As a reference, Just-As-High quotes the second of the two stanzas in Grímnismál that mention the bridge (see above). Gangleri asks if fire burns over Bifröst. High says that the red in the bridge is burning fire, and, without it, the frost jotnar and mountain jotnar would \"go up into heaven\" if anyone who wanted could cross Bifröst. High adds that, in heaven, \"there are many beautiful places\" and that \"everywhere there has divine protection around it.\"",
"title": "Attestations"
},
{
"paragraph_id": 9,
"text": "In chapter 17, High tells Gangleri that the location of Himinbjörg \"stands at the edge of heaven where Bifrost reaches heaven.\" While describing the god Heimdallr in chapter 27, High says that Heimdallr lives in Himinbjörg by Bifröst, and guards the bridge from mountain jotnar while sitting at the edge of heaven. In chapter 34, High quotes the first of the two Grímnismál stanzas that mention the bridge. In chapter 51, High foretells the events of Ragnarök. High says that, during Ragnarök, the sky will split open, and from the split will ride forth the \"sons of Muspell\". When the \"sons of Muspell\" ride over Bifröst it will break, \"as was said above.\"",
"title": "Attestations"
},
{
"paragraph_id": 10,
"text": "In the Prose Edda book Skáldskaparmál, the bridge receives a single mention. In chapter 16, a work by the 10th century skald Úlfr Uggason is provided, where Bifröst is referred to as \"the powers' way.\"",
"title": "Attestations"
},
{
"paragraph_id": 11,
"text": "In his translation of the Prose Edda, Henry Adams Bellows comments that the Grímnismál stanza mentioning Thor and the bridge stanza may mean that \"Thor has to go on foot in the last days of the destruction, when the bridge is burning. Another interpretation, however, is that when Thor leaves the heavens (i.e., when a thunder-storm is over) the rainbow-bridge becomes hot in the sun.\"",
"title": "Theories"
},
{
"paragraph_id": 12,
"text": "John Lindow points to a parallel between Bifröst, which he notes is \"a bridge between earth and heaven, or earth and the world of the gods\", and the bridge Gjallarbrú, \"a bridge between earth and the underworld, or earth and the world of the dead.\" Several scholars have proposed that Bifröst may represent the Milky Way.",
"title": "Theories"
},
{
"paragraph_id": 13,
"text": "In the final scene of Richard Wagner's 1869 opera Das Rheingold, the god Froh summons a rainbow bridge, over which the gods cross to enter Valhalla.",
"title": "Adaptations"
},
{
"paragraph_id": 14,
"text": "The Bifröst appears in comic books associated with the Marvel Comics character Thor and in subsequent adaptations of those comic books. In the Marvel Cinematic Universe film Thor, Jane Foster describes the Bifröst as an Einstein–Rosen bridge, which functions as a means of transportation across space in a short period of time.",
"title": "Adaptations"
}
] | In Norse mythology, Bifröst, also called Bilröst, is a burning rainbow bridge that reaches between Midgard (Earth) and Asgard, the realm of the gods. The bridge is attested as Bilröst in the Poetic Edda, compiled in the 13th century from earlier traditional sources, and as Bifröst in the Prose Edda, written in the 13th century by Snorri Sturluson, and in the poetry of skalds. Both the Poetic Edda and the Prose Edda alternately refer to the bridge as Ásbrú. According to the Prose Edda, the bridge ends in heaven at Himinbjörg, the residence of the god Heimdall, who guards it from the jötnar. The bridge's destruction during Ragnarök by the forces of Muspell is foretold. Scholars have proposed that the bridge may have originally represented the Milky Way and have noted parallels between the bridge and another bridge in Norse mythology, Gjallarbrú. | 2001-08-17T07:54:16Z | 2023-11-13T21:28:13Z | [
"Template:Authority control",
"Template:Short description",
"Template:Redirect",
"Template:IPAc-en",
"Template:Reflist",
"Template:Cite book",
"Template:Refend",
"Template:Good article",
"Template:Cite web",
"Template:Refbegin",
"Template:Commons category-inline",
"Template:Norse mythology"
] | https://en.wikipedia.org/wiki/Bifr%C3%B6st |
4,057 | Battlecruiser | The battlecruiser (also written as battle cruiser or battle-cruiser) was a type of capital ship of the first half of the 20th century. These were similar in displacement, armament and cost to battleships, but differed in form and balance of attributes. Battlecruisers typically had thinner armour (to a varying degree) and a somewhat lighter main gun battery than contemporary battleships, installed on a longer hull with much higher engine power in order to attain greater speeds. The first battlecruisers were designed in the United Kingdom, as a development of the armoured cruiser, at the same time as the dreadnought succeeded the pre-dreadnought battleship. The goal of the design was to outrun any ship with similar armament, and chase down any ship with lesser armament; they were intended to hunt down slower, older armoured cruisers and destroy them with heavy gunfire while avoiding combat with the more powerful but slower battleships. However, as more and more battlecruisers were built, they were increasingly used alongside the better-protected battleships.
Battlecruisers served in the navies of the United Kingdom, Germany, the Ottoman Empire, Australia and Japan during World War I, most notably at the Battle of the Falkland Islands and in the several raids and skirmishes in the North Sea which culminated in a pitched fleet battle, the Battle of Jutland. British battlecruisers in particular suffered heavy losses at Jutland, where poor fire safety and ammunition handling practices left them vulnerable to catastrophic magazine explosions following hits to their main turrets from large-calibre shells. This dismal showing led to a persistent general belief that battlecruisers were too thinly armoured to function successfully. By the end of the war, capital ship design had developed, with battleships becoming faster and battlecruisers becoming more heavily armoured, blurring the distinction between a battlecruiser and a fast battleship. The Washington Naval Treaty, which limited capital ship construction from 1922 onwards, treated battleships and battlecruisers identically, and the new generation of battlecruisers planned by the United States, Great Britain and Japan were scrapped or converted into aircraft carriers under the terms of the treaty.
Improvements in armour design and propulsion created the 1930s "fast battleship" with the speed of a battlecruiser and armour of a battleship, making the battlecruiser in the traditional sense effectively an obsolete concept. Thus from the 1930s on, only the Royal Navy continued to use "battlecruiser" as a classification for the World War I–era capital ships that remained in the fleet; while Japan's battlecruisers remained in service, they had been significantly reconstructed and were re-rated as full-fledged fast battleships.
Battlecruisers were put into action again during World War II, and only one survived to the end. There was also renewed interest in large "cruiser-killer" type warships, but few were ever begun, as construction of battleships and battlecruisers was curtailed in favour of more-needed convoy escorts, aircraft carriers, and cargo ships. From the late Cold War era onward, the Soviet Kirov class of large guided-missile cruisers has been the only class of active ships termed "battlecruisers".
The battlecruiser was developed by the Royal Navy in the first years of the 20th century as an evolution of the armoured cruiser. The first armoured cruisers had been built in the 1870s, as an attempt to give armour protection to ships fulfilling the typical cruiser roles of patrol, trade protection and power projection. However, the results were rarely satisfactory, as the weight of armour required for any meaningful protection usually meant that the ship became almost as slow as a battleship. As a result, navies preferred to build protected cruisers with an armoured deck protecting their engines, or simply no armour at all.
In the 1890s, technology began to change this balance. New Krupp steel armour meant that it was now possible to give a cruiser side armour which would protect it against the quick-firing guns of enemy battleships and cruisers alike. In 1896–97 France and Russia, who were regarded as likely allies in the event of war, started to build large, fast armoured cruisers taking advantage of this. In the event of a war between Britain and France or Russia, or both, these cruisers threatened to cause serious difficulties for the British Empire's worldwide trade.
Britain, which had concluded in 1892 that it needed twice as many cruisers as any potential enemy to adequately protect its empire's sea lanes, responded to the perceived threat by laying down its own large armoured cruisers. Between 1899 and 1905, it completed or laid down seven classes of this type, a total of 35 ships. This building program, in turn, prompted the French and Russians to increase their own construction. The Imperial German Navy began to build large armoured cruisers for use on their overseas stations, laying down eight between 1897 and 1906.
The cost of this cruiser arms race was significant. In the period 1889–1896, the Royal Navy spent £7.3 million on new large cruisers. From 1897 to 1904, it spent £26.9 million. Many armoured cruisers of the new kind were just as large and expensive as the equivalent battleship.
The increasing size and power of the armoured cruiser led to suggestions in British naval circles that cruisers should displace battleships entirely. The battleship's main advantages were its 12-inch heavy guns and heavier armour designed to protect against shells of similar size. However, for a few years after 1900 it seemed that those advantages were of little practical value. The torpedo now had a range of 2,000 yards, and it seemed unlikely that a battleship would engage within torpedo range. However, at ranges of more than 2,000 yards it became increasingly unlikely that the heavy guns of a battleship would score any hits, as they relied on primitive aiming techniques. The secondary batteries of 6-inch quick-firing guns, firing more plentiful shells, were more likely to hit the enemy. As naval expert Fred T. Jane wrote in June 1902,
Is there anything outside of 2,000 yards that the big gun in its hundreds of tons of medieval castle can affect, that its weight in 6-inch guns without the castle could not affect equally well? And inside 2,000, what, in these days of gyros, is there that the torpedo cannot effect with far more certainty?
In 1904, Admiral John "Jacky" Fisher became First Sea Lord, the senior officer of the Royal Navy. He had for some time thought about the development of a new fast armoured ship. He was very fond of the "second-class battleship" Renown, a faster, more lightly armoured battleship. As early as 1901, there is confusion in Fisher's writing about whether he saw the battleship or the cruiser as the model for future developments. This did not stop him from commissioning designs from naval architect W. H. Gard for an armoured cruiser with the heaviest possible armament for use with the fleet. The design Gard submitted was for a ship of 14,000–15,000 long tons (14,000–15,000 t), capable of 25 knots (46 km/h; 29 mph), armed with four 9.2-inch and twelve 7.5-inch (190 mm) guns in twin gun turrets and protected with six inches of armour along her belt and on her 9.2-inch turrets, 4 inches (102 mm) on her 7.5-inch turrets, 10 inches on her conning tower and up to 2.5 inches (64 mm) on her decks. However, mainstream British naval thinking between 1902 and 1904 was clearly in favour of heavily armoured battleships, rather than the fast ships that Fisher favoured.
The Battle of Tsushima proved conclusively the effectiveness of heavy guns over intermediate ones and the need for a uniform main calibre on a ship for fire control. Even before this, the Royal Navy had begun to consider a shift away from the mixed-calibre armament of the 1890s pre-dreadnought to an "all-big-gun" design, and preliminary designs circulated for battleships with all 12-inch or all 10-inch guns and armoured cruisers with all 9.2-inch guns. In late 1904, not long after the Royal Navy had decided to use 12-inch guns for its next generation of battleships because of their superior performance at long range, Fisher began to argue that big-gun cruisers could replace battleships altogether. The continuing improvement of the torpedo meant that submarines and destroyers would be able to destroy battleships; this in Fisher's view heralded the end of the battleship or at least compromised the validity of heavy armour protection. Nevertheless, armoured cruisers would remain vital for commerce protection.
Of what use is a battle fleet to a country called (A) at war with a country called (B) possessing no battleships, but having fast armoured cruisers and clouds of fast torpedo craft? What damage would (A's) battleships do to (B)? Would (B) wish for a few battleships or for more armoured cruisers? Would not (A) willingly exchange a few battleships for more fast armoured cruisers? In such a case, neither side wanting battleships is presumptive evidence that they are not of much value.
Fisher's views were very controversial within the Royal Navy, and even given his position as First Sea Lord, he was not in a position to insist on his own approach. Thus he assembled a "Committee on Designs", consisting of a mixture of civilian and naval experts, to determine the approach to both battleship and armoured cruiser construction in the future. While the stated purpose of the committee was to investigate and report on future requirements of ships, Fisher and his associates had already made key decisions. The terms of reference for the committee were for a battleship capable of 21 knots (39 km/h; 24 mph) with 12-inch guns and no intermediate calibres, capable of docking in existing drydocks; and a cruiser capable of 25.5 knots (47.2 km/h; 29.3 mph), also with 12-inch guns and no intermediate armament, armoured like Minotaur, the most recent armoured cruiser, and also capable of using existing docks.
Under the Selborne plan of 1902, the Royal Navy intended to start three new battleships and four armoured cruisers each year. However, in late 1904 it became clear that the 1905–1906 programme would have to be considerably smaller, because of lower than expected tax revenue and the need to buy out two Chilean battleships under construction in British yards, lest they be purchased by the Russians for use against the Japanese, Britain's ally. These economies meant that the 1905–1906 programme consisted only of one battleship, but three armoured cruisers. The battleship became the revolutionary battleship Dreadnought, and the cruisers became the three ships of the Invincible class. Fisher later claimed, however, that he had argued during the committee for the cancellation of the remaining battleship.
The construction of the new class was begun in 1906 and completed in 1908, delayed perhaps to allow their designers to learn from any problems with Dreadnought. The ships fulfilled the design requirement quite closely. On a displacement similar to Dreadnought, the Invincibles were 40 feet (12.2 m) longer to accommodate additional boilers and more powerful turbines to propel them at 25 knots (46 km/h; 29 mph). Moreover, the new ships could maintain this speed for days, whereas pre-dreadnought battleships could not generally do so for more than an hour. Armed with eight 12-inch Mk X guns, compared to ten on Dreadnought, they had 6–7 inches (152–178 mm) of armour protecting the hull and the gun turrets. (Dreadnought's armour, by comparison, was 11–12 inches (279–305 mm) at its thickest.) The class had a very marked increase in speed, displacement and firepower compared to the most recent armoured cruisers but no more armour.
While the Invincibles were to fill the same role as the armoured cruisers they succeeded, they were expected to do so more effectively. Specifically, their roles were heavy reconnaissance, close support for the battle fleet, the pursuit of a fleeing enemy fleet, and commerce protection.
Confusion about how to refer to these new battleship-size armoured cruisers set in almost immediately. Even in late 1905, before work was begun on the Invincibles, a Royal Navy memorandum refers to "large armoured ships" meaning both battleships and large cruisers. In October 1906, the Admiralty began to classify all post-Dreadnought battleships and armoured cruisers as "capital ships", while Fisher used the term "dreadnought" to refer either to his new battleships or the battleships and armoured cruisers together. At the same time, the Invincible class themselves were referred to as "cruiser-battleships" and "dreadnought cruisers"; the term "battlecruiser" was first used by Fisher in 1908. Finally, on 24 November 1911, Admiralty Weekly Order No. 351 laid down that "All cruisers of the 'Invincible' and later types are for the future to be described and classified as 'battle cruisers' to distinguish them from the armoured cruisers of earlier date."
Along with questions over the new ships' nomenclature came uncertainty about their actual role due to their lack of protection. If they were primarily to act as scouts for the battle fleet and hunter-killers of enemy cruisers and commerce raiders, then the seven inches of belt armour with which they had been equipped would be adequate. If, on the other hand, they were expected to reinforce a battle line of dreadnoughts with their own heavy guns, they were too thin-skinned to be safe from an enemy's heavy guns. The Invincibles were essentially extremely large, heavily armed, fast armoured cruisers. However, the viability of the armoured cruiser was already in doubt. A cruiser that could have worked with the Fleet might have been a more viable option for taking over that role.
Because of the Invincibles' size and armament, naval authorities considered them capital ships almost from their inception—an assumption that might have been inevitable. Complicating matters further was that many naval authorities, including Lord Fisher, had made overoptimistic assessments from the Battle of Tsushima in 1905 about the armoured cruiser's ability to survive in a battle line against enemy capital ships due to their superior speed. These assumptions had been made without taking into account the Russian Baltic Fleet's inefficiency and tactical ineptitude. By the time the term "battlecruiser" had been given to the Invincibles, the idea of their parity with battleships had been fixed in many people's minds.
Not everyone was so convinced. Brassey's Naval Annual, for instance, stated that with vessels as large and expensive as the Invincibles, an admiral "will be certain to put them in the line of battle where their comparatively light protection will be a disadvantage and their high speed of no value." Those in favour of the battlecruiser countered with two points—first, since all capital ships were vulnerable to new weapons such as the torpedo, armour had lost some of its validity; and second, because of its greater speed, the battlecruiser could control the range at which it engaged an enemy.
From the launching of the Invincibles until just after the outbreak of the First World War, the battlecruiser played a junior role in the developing dreadnought arms race, as it was never wholeheartedly adopted as the key weapon in British imperial defence, as Fisher had presumably desired. The biggest factor for this lack of acceptance was the marked change in Britain's strategic circumstances between their conception and the commissioning of the first ships. The prospective enemy for Britain had shifted from a Franco-Russian alliance with many armoured cruisers to a resurgent and increasingly belligerent Germany. Diplomatically, Britain had entered the Entente cordiale in 1904 and the Anglo-Russian Entente in 1907. Neither France nor Russia posed a particular naval threat; the Russian navy had largely been sunk or captured in the Russo-Japanese War of 1904–1905, while the French were in no hurry to adopt the new dreadnought-type design. Britain also boasted very cordial relations with two of the significant new naval powers: Japan (bolstered by the Anglo-Japanese Alliance, signed in 1902 and renewed in 1905), and the US. These changed strategic circumstances, and the great success of the Dreadnought ensured that she rather than the Invincible became the new model capital ship. Nevertheless, battlecruiser construction played a part in the renewed naval arms race sparked by the Dreadnought.
For their first few years of service, the Invincibles entirely fulfilled Fisher's vision of being able to sink any ship fast enough to catch them, and run from any ship capable of sinking them. An Invincible would also, in many circumstances, be able to take on an enemy pre-dreadnought battleship. Naval circles concurred that the armoured cruiser in its current form had come to the logical end of its development and the Invincibles were so far ahead of any enemy armoured cruiser in firepower and speed that it proved difficult to justify building more or bigger cruisers. This lead was extended by the surprise that Dreadnought and Invincible, both built in secret, produced; this prompted most other navies to delay their building programmes and radically revise their designs. This was particularly true for cruisers, because the details of the Invincible class were kept secret for longer; this meant that the last German armoured cruiser, Blücher, was armed with only 21-centimetre (8.3 in) guns, and was no match for the new battlecruisers.
The Royal Navy's early superiority in capital ships led to the rejection of a 1905–1906 design that would, essentially, have fused the battlecruiser and battleship concepts into what would eventually become the fast battleship. The 'X4' design combined the full armour and armament of Dreadnought with the 25-knot speed of Invincible. The additional cost could not be justified given the existing British lead and the new Liberal government's need for economy; the slower and cheaper Bellerophon, a relatively close copy of Dreadnought, was adopted instead. The X4 concept would eventually be fulfilled in the Queen Elizabeth class and later by other navies.
The next British battlecruisers were the three ships of the Indefatigable class, slightly improved Invincibles built to fundamentally the same specification, partly due to political pressure to limit costs and partly due to the secrecy surrounding German battlecruiser construction, particularly about the heavy armour of SMS Von der Tann. This class came to be widely seen as a mistake, and the next generation of British battlecruisers were markedly more powerful. By 1909–1910 a sense of national crisis about rivalry with Germany outweighed cost-cutting, and a naval panic resulted in the approval of a total of eight capital ships in 1909–1910. Fisher pressed for all eight to be battlecruisers, but was unable to have his way; he had to settle for six battleships and two battlecruisers of the Lion class. The Lions carried eight 13.5-inch guns, the now-standard calibre of the British "super-dreadnought" battleships. Speed increased to 27 knots (50 km/h; 31 mph) and armour protection, while not as good as in German designs, was better than in previous British battlecruisers, with a nine-inch (230 mm) armour belt and barbettes. The two Lions were followed by the very similar Queen Mary.
By 1911 Germany had built battlecruisers of her own, and the superiority of the British ships could no longer be assured. Moreover, the German Navy did not share Fisher's view of the battlecruiser. In contrast to the British focus on speed and firepower, Germany progressively improved the armour and staying power of its ships in order to outmatch the British battlecruisers. Von der Tann, begun in 1908 and completed in 1910, carried eight 11.1-inch guns, but with 10-inch (250 mm) armour she was far better protected than the Invincibles. The two Moltkes were quite similar but carried ten 11.1-inch guns of an improved design. Seydlitz, designed in 1909 and finished in 1913, was a modified Moltke; speed increased by one knot to 26.5 knots (49.1 km/h; 30.5 mph), while her armour had a maximum thickness of 12 inches, equivalent to the Helgoland-class battleships of a few years earlier. Seydlitz was the last German battlecruiser completed before World War I.
The next step in battlecruiser design came from Japan. The Imperial Japanese Navy had been planning the Kongō-class ships from 1909 and was determined that, since the Japanese economy could support relatively few ships, each would be more powerful than its likely competitors. Initially the class was planned with the Invincibles as the benchmark. On learning of the British plans for Lion, and of the likelihood that new U.S. Navy battleships would be armed with 14-inch (360 mm) guns, the Japanese decided to radically revise their plans and go one better. A new design was drawn up, carrying eight 14-inch guns and capable of 27.5 knots (50.9 km/h; 31.6 mph), giving the ships a marginal edge over the Lions in both speed and firepower. The heavy guns were also better positioned, being superfiring both fore and aft with no turret amidships. The armour scheme was marginally improved over the Lions as well, with nine inches of armour on the turrets and 8 inches (203 mm) on the barbettes. The first ship of the class was built in Britain, and a further three were constructed in Japan. The Japanese also re-classified their powerful armoured cruisers of the Tsukuba and Ibuki classes, carrying four 12-inch guns, as battlecruisers; nonetheless, these ships were more weakly armed and slower than any true battlecruiser.
The next British battlecruiser, Tiger, was intended initially as the fourth ship in the Lion class, but was substantially redesigned. She retained the eight 13.5-inch guns of her predecessors, but they were positioned like those of Kongō for better fields of fire. She was faster (making 29 knots (54 km/h; 33 mph) on sea trials), and carried a heavier secondary armament. Tiger was also more heavily armoured on the whole; while the maximum thickness of armour was the same at nine inches, the height of the main armour belt was increased. Not all the desired improvements for this ship were approved, however. Her designer, Sir Eustace Tennyson d'Eyncourt, had wanted small-bore water-tube boilers and geared turbines to give her a speed of 32 knots (59 km/h; 37 mph), but he received no support from the authorities and the engine makers refused his request.
In 1912, work began on three more German battlecruisers of the Derfflinger class, the first German battlecruisers to mount 12-inch guns. These ships, like Tiger and the Kongōs, had their guns arranged in superfiring turrets for greater efficiency. Their armour and speed were similar to those of the preceding Seydlitz. In 1913, the Russian Empire also began the construction of the four-ship Borodino class, designed for service in the Baltic Sea. These ships were designed to carry twelve 14-inch guns, with armour up to 12 inches thick and a speed of 26.6 knots (49.3 km/h; 30.6 mph). The heavy armour and relatively slow speed of these ships made them more similar to German designs than to British ones; construction of the Borodinos was halted by the First World War, and all were scrapped after the end of the Russian Civil War.
For most of the combatants, capital ship construction was very limited during the war. Germany finished the Derfflinger class and began work on the Mackensen class. The Mackensens were a development of the Derfflinger class, with 13.8-inch guns and a broadly similar armour scheme, designed for 28 knots (52 km/h; 32 mph).
In Britain, Jackie Fisher returned to the office of First Sea Lord in October 1914. His enthusiasm for big, fast ships was unabated, and he set designers to producing a design for a battlecruiser with 15-inch guns. Because Fisher expected the next German battlecruiser to steam at 28 knots, he required the new British design to be capable of 32 knots. He planned to reorder two Revenge-class battleships, which had been approved but not yet laid down, to a new design. Fisher finally received approval for this project on 28 December 1914 and they became the Renown class. With six 15-inch guns but only 6-inch armour they were a further step forward from Tiger in firepower and speed, but returned to the level of protection of the first British battlecruisers.
At the same time, Fisher resorted to subterfuge to obtain another three fast, lightly armoured ships that could use several spare 15-inch (381 mm) gun turrets left over from battleship construction. These ships were essentially light battlecruisers, and Fisher occasionally referred to them as such, but officially they were classified as large light cruisers. This unusual designation was required because construction of new capital ships had been placed on hold, while there were no limits on light cruiser construction. They became Courageous and her sisters Glorious and Furious, and there was a bizarre imbalance between their main guns of 15 inches (18 inches (457 mm) in Furious) and their armour, which at three inches (76 mm) thick was on the scale of a light cruiser. The design was generally regarded as a failure (the ships were nicknamed in the Fleet Outrageous, Uproarious and Spurious), though their later conversion to aircraft carriers was very successful. Fisher also speculated about a new mammoth but lightly built battlecruiser that would carry 20-inch (508 mm) guns, which he termed HMS Incomparable; this never got beyond the concept stage.
It is often held that the Renown and Courageous classes were designed for Fisher's plan to land troops (possibly Russian) on the German Baltic coast; specifically, they were designed with a reduced draught, which might be important in the shallow Baltic. The reduced draught is not clear-cut evidence that the ships were designed for the Baltic, however: it was considered that earlier ships had too much draught and not enough freeboard under operational conditions. Roberts argues that the focus on the Baltic was probably unimportant at the time the ships were designed, but was inflated later, after the disastrous Dardanelles Campaign.
The final British battlecruiser design of the war was the Admiral class, which grew out of a requirement for an improved version of the Queen Elizabeth-class battleship. The project began at the end of 1915, after Fisher's final departure from the Admiralty. While initially envisaged as a battleship, senior sea officers felt that Britain had enough battleships but that new battlecruisers might be required to combat the German ships being built (the British overestimated German progress on the Mackensen class as well as its likely capabilities). A battlecruiser design with eight 15-inch guns, 8 inches of armour and a speed of 32 knots was decided on. The experience of battlecruisers at the Battle of Jutland meant that the design was radically revised and transformed once more into a fast battleship with armour up to 12 inches thick, but still capable of 31.5 knots (58.3 km/h; 36.2 mph). The first ship of the class, Hood, was built to this design to counter the possible completion of any of the Mackensen-class ships. The plans for her three sisters, on which little work had been done, were revised again later in 1916 and in 1917 to improve protection.
The Admirals would have been the only British ships capable of taking on the German Mackensen class; in the event, however, German shipbuilding was drastically slowed by the war, and while two Mackensens were launched, none was ever completed. The Germans also worked briefly on a further three ships, of the Ersatz Yorck class, which were modified versions of the Mackensens with 15-inch guns. Work on the three additional Admirals was suspended in March 1917 to enable more escorts and merchant ships to be built to deal with the new threat to trade from U-boats. They were finally cancelled in February 1919.
The first combat involving battlecruisers during World War I was the Battle of Heligoland Bight in August 1914. A force of British light cruisers and destroyers entered the Heligoland Bight (the part of the North Sea closest to Hamburg) to attack German destroyer patrols. When they met opposition from light cruisers, Vice Admiral David Beatty took his squadron of five battlecruisers into the Bight and turned the tide of the battle, ultimately sinking three German light cruisers and killing their commander, Rear Admiral Leberecht Maass.
The German battlecruiser Goeben perhaps made the most impact early in the war. Stationed in the Mediterranean, she and the escorting light cruiser SMS Breslau evaded British and French ships on the outbreak of war, and steamed to Constantinople (Istanbul) with two British battlecruisers in hot pursuit. The two German ships were handed over to the Ottoman Navy, and this was instrumental in bringing the Ottoman Empire into the war as one of the Central Powers. Goeben herself, renamed Yavuz Sultan Selim, fought engagements against the Imperial Russian Navy in the Black Sea before being knocked out of the action for the remainder of the war after the Battle of Imbros against British forces in the Aegean Sea in January 1918.
The original battlecruiser concept proved successful in December 1914 at the Battle of the Falkland Islands. The British battlecruisers Inflexible and Invincible did precisely the job for which they were intended when they chased down and annihilated the German East Asia Squadron, centred on the armoured cruisers Scharnhorst and Gneisenau along with three light cruisers, commanded by Admiral Maximilian Graf von Spee, in the South Atlantic Ocean. Before the battle, the Australian battlecruiser Australia had unsuccessfully searched for the German ships in the Pacific.
During the Battle of Dogger Bank in 1915, the aftermost barbette of the German flagship Seydlitz was struck by a British 13.5-inch shell from HMS Lion. The shell did not penetrate the barbette, but it dislodged a piece of barbette armour that allowed the flame from the shell's detonation to enter the barbette. The propellant charges being hoisted upwards were ignited, and the fireball flashed up into the turret and down into the magazine, setting fire to charges removed from their brass cartridge cases. The gun crew tried to escape into the next turret, which allowed the flash to spread into that turret as well, killing the crews of both turrets. Seydlitz was saved from near-certain destruction only by the emergency flooding of her after magazines, effected by Wilhelm Heidkamp. This near-disaster was due to the way ammunition handling was arranged, a practice common to both German and British battleships and battlecruisers; the lighter protection of the battlecruisers, however, made them more vulnerable to having a turret or barbette penetrated. The Germans learned from investigating the damaged Seydlitz and instituted measures to ensure that ammunition handling minimised any possible exposure to flash.
Apart from the cordite handling, the battle was mostly inconclusive, though both the British flagship Lion and Seydlitz were severely damaged. Lion lost speed, causing her to fall behind the rest of the battle line, and Beatty was unable to effectively command his ships for the remainder of the engagement. A British signalling error allowed the German battlecruisers to withdraw, as most of Beatty's squadron mistakenly concentrated on the crippled armoured cruiser Blücher, sinking her with great loss of life. The British blamed their failure to win a decisive victory on their poor gunnery and attempted to increase their rate of fire by stockpiling unprotected cordite charges in their ammunition hoists and barbettes.
At the Battle of Jutland on 31 May 1916, both British and German battlecruisers were employed as fleet units. The British battlecruisers became engaged first with their German counterparts and then with German battleships before the arrival of the battleships of the British Grand Fleet. The result was a disaster for the Royal Navy's battlecruiser squadrons: Invincible, Queen Mary, and Indefatigable exploded with the loss of all but a handful of their crews. The exact reason why the ships' magazines detonated is not known, but the plethora of exposed cordite charges stored in their turrets, ammunition hoists and working chambers in the quest to increase the rate of fire undoubtedly contributed to their loss. Beatty's flagship Lion was almost lost in a similar manner, saved only by the heroic actions of Major Francis Harvey.
The better-armoured German battlecruisers fared better, in part due to the poor performance of British fuzes (the British shells tended to explode or break up on impact with the German armour). Lützow—the only German battlecruiser lost at Jutland—had only 128 killed, for instance, despite receiving more than thirty hits. The other German battlecruisers, Moltke, Von der Tann, Seydlitz, and Derfflinger, were all heavily damaged and required extensive repairs after the battle, Seydlitz barely making it home, for they had been the focus of British fire for much of the battle.
In the years immediately after World War I, Britain, Japan and the US all began design work on a new generation of ever more powerful battleships and battlecruisers. The new burst of shipbuilding that each nation's navy desired was politically controversial and potentially economically crippling. This nascent arms race was prevented by the Washington Naval Treaty of 1922, where the major naval powers agreed to limits on capital ship numbers. The German navy was not represented at the talks; under the terms of the Treaty of Versailles, Germany was not allowed any modern capital ships at all.
Through the 1920s and 1930s only Britain and Japan retained battlecruisers, often modified and rebuilt from their original designs. The line between the battlecruiser and the modern fast battleship became blurred; indeed, the Japanese Kongōs were formally redesignated as battleships after their very comprehensive reconstruction in the 1930s.
Hood, launched in 1918, was the last World War I battlecruiser to be completed. Owing to lessons from Jutland, the ship was modified during construction: the thickness of her belt armour was increased by an average of 50 percent and extended substantially, she was given heavier deck armour, and the protection of her magazines was improved to guard against the ignition of ammunition. It was hoped that this would enable her to resist shells from guns equivalent to her own, the classic measure of a "balanced" battleship. Hood was the largest ship in the Royal Navy when completed; thanks to her great displacement, in theory she combined the firepower and armour of a battleship with the speed of a battlecruiser, causing some to refer to her as a fast battleship. However, her protection was markedly less than that of the British battleships built immediately after World War I, the Nelson class.
The navies of Japan and the United States, not being affected as immediately by the war, had time to develop new heavy 16-inch (410 mm) guns for their latest designs and to refine their battlecruiser designs in the light of combat experience in Europe. The Imperial Japanese Navy began work on four Amagi-class battlecruisers. These vessels would have been of unprecedented size and power, as fast and well armoured as Hood while carrying a main battery of ten 16-inch guns, the most powerful armament ever proposed for a battlecruiser. They were, for all intents and purposes, fast battleships; the only differences between them and the Tosa-class battleships which were to precede them were 1 inch (25 mm) less side armour and a 0.25-knot (0.46 km/h; 0.29 mph) increase in speed. The United States Navy, which had worked on its battlecruiser designs since 1913 and watched the latest developments in this class with great care, responded with the Lexington class. If completed as planned, they would have been exceptionally fast and well armed, with eight 16-inch guns, but would have carried armour little better than the Invincibles', and this after an 8,000-long-ton (8,100 t) increase in protection following Jutland. The final stage in the post-war battlecruiser race came with the British response to the Amagi and Lexington types: four 48,000-long-ton (49,000 t) G3 battlecruisers. Royal Navy documents of the period often described any battleship with a speed over about 24 knots (44 km/h; 28 mph) as a battlecruiser, regardless of the amount of protective armour, although the G3 was considered by most to be a well-balanced fast battleship.
The Washington Naval Treaty meant that none of these designs came to fruition. Ships that had been started were either broken up on the slipway or converted to aircraft carriers. In Japan, Amagi and Akagi were selected for conversion. Amagi was damaged beyond repair by the 1923 Great Kantō earthquake and was broken up for scrap; the hull of one of the proposed Tosa-class battleships, Kaga, was converted in her stead. The United States Navy also converted two battlecruiser hulls into aircraft carriers in the wake of the Washington Treaty, USS Lexington and USS Saratoga, although this was considered only marginally preferable to scrapping the hulls outright (the remaining four, Constellation, Ranger, Constitution and United States, were scrapped). In Britain, Fisher's "large light cruisers" were converted to carriers: Furious had already been partially converted during the war, and Glorious and Courageous were similarly converted.
In total, nine battlecruisers survived the Washington Naval Treaty, although HMS Tiger later fell victim to the London Naval Treaty of 1930 and was scrapped. Because their high speed made them valuable surface units in spite of their weaknesses, most of these ships were significantly updated before World War II. Renown and Repulse were both modernized in the 1920s and 1930s. Between 1934 and 1936, Repulse was partially modernized: her bridge was modified, an aircraft hangar, catapult and new gunnery equipment were added, and her anti-aircraft armament was increased. Renown underwent a more thorough reconstruction between 1937 and 1939. Her deck armour was increased, new turbines and boilers were fitted, an aircraft hangar and catapult were added, and she was completely rearmed aside from the main guns, which had their elevation increased to +30 degrees. The bridge structure was also removed, and a large bridge similar to that used in the King George V-class battleships was installed in its place. While conversions of this kind generally added weight to the vessel, Renown's tonnage actually decreased because of a substantially lighter power plant. Similarly thorough rebuildings planned for Repulse and Hood were cancelled by the advent of World War II.
Unable to build new ships, the Imperial Japanese Navy also chose to improve its existing battlecruisers of the Kongō class (initially Haruna, Kirishima, and Kongō; Hiei only later, as she had been disarmed under the terms of the Washington treaty) in two substantial reconstructions (one in Hiei's case). During the first of these, the elevation of their main guns was increased to +40 degrees, anti-torpedo bulges and 3,800 long tons (3,900 t) of horizontal armour were added, and a "pagoda" mast with additional command positions was built up. This reduced the ships' speed to 25.9 knots (48.0 km/h; 29.8 mph). The second reconstruction focused on speed, as the ships had been selected as fast escorts for aircraft carrier task forces. Completely new main engines, a reduced number of boilers and an increase in hull length of 26 feet (7.9 m) allowed them to reach 30 knots once again. They were reclassified as "fast battleships", although their armour and guns still fell short of those of the surviving World War I-era battleships in the American and British navies, with dire consequences during the Pacific War: Hiei and Kirishima were both crippled by US gunfire during actions off Guadalcanal and had to be scuttled shortly afterwards. Perhaps most tellingly, Hiei was disabled by medium-calibre gunfire from heavy and light cruisers in a close-range night engagement.
Two battlecruisers received no such reconstruction: Turkey's Yavuz Sultan Selim and the Royal Navy's Hood. The Turkish Navy made only minor improvements to its ship in the interwar period, primarily focused on repairing wartime damage and installing new fire control systems and anti-aircraft batteries. Hood was in constant service with the fleet and could not be withdrawn for an extended reconstruction. She received minor improvements over the course of the 1930s, including modern fire control systems, an increased number of anti-aircraft guns and, in March 1941, radar.
In the late 1930s navies began to build capital ships again, and during this period a number of large commerce raiders and small, fast battleships were built that are sometimes referred to as battlecruisers. Germany and the Soviet Union designed new battlecruisers during this period, though only the latter laid any down: two ships of the 35,000-ton Kronshtadt class. They were still on the slipways when the Germans invaded in 1941, and construction was suspended. Both hulls were scrapped after the war.
The Germans planned three battlecruisers of the O class as part of the expansion of the Kriegsmarine (Plan Z). With six 15-inch guns, high speed and excellent range but very thin armour, they were intended as commerce raiders. Only one was ordered, shortly before World War II, and no work was ever done on it. No names were assigned, and the ships were known by their contract letters 'O', 'P' and 'Q'. The new class was not universally welcomed in the Kriegsmarine: its abnormally light protection earned it the derogatory nickname Ohne Panzer Quatsch (without armour nonsense) within certain circles of the Navy.
The Royal Navy deployed some of its battlecruisers during the Norwegian Campaign in April 1940. Renown engaged Gneisenau and Scharnhorst in very bad weather during the action off Lofoten, and the German ships disengaged after Gneisenau was damaged. One of Renown's 15-inch shells passed through Gneisenau's director-control tower without exploding, severing electrical and communication cables as it went and destroying the rangefinders for the forward 150 mm (5.9 in) turrets. Main-battery fire control had to be shifted aft owing to the loss of electrical power. Another shell from Renown knocked out Gneisenau's aft turret. The British ship was struck twice by German shells that failed to inflict any significant damage; she went on to be the only pre-war battlecruiser to survive the war.
In the early years of the war various German ships had a measure of success hunting merchant ships in the Atlantic. Allied capital ships such as the battlecruisers Renown and Repulse and the fast battleships Dunkerque and Strasbourg were employed on operations to hunt down the commerce-raiding German ships. The one stand-up fight occurred in May 1941, when the battleship Bismarck and the heavy cruiser Prinz Eugen sortied into the North Atlantic to attack British shipping and were intercepted by Hood and the battleship Prince of Wales in the Battle of the Denmark Strait. The elderly British battlecruiser was no match for the modern German battleship: within minutes, Bismarck's 15-inch shells caused a magazine explosion in Hood reminiscent of the Battle of Jutland. Only three men survived.
The first battlecruiser to see action in the Pacific War was Repulse, sunk by Japanese torpedo bombers north of Singapore on 10 December 1941 while in company with Prince of Wales. In the first Japanese attack, Repulse was lightly damaged by a single 250-kilogram (550 lb) bomb and near-missed by two others. Her speed and agility enabled her to avoid subsequent attacks by level bombers and to dodge 33 torpedoes. The last group of torpedo bombers attacked from multiple directions, and Repulse was struck by five torpedoes. She quickly capsized with the loss of 27 officers and 486 crewmen; 42 officers and 754 enlisted men were rescued by the escorting destroyers. The loss of Repulse and Prince of Wales conclusively proved the vulnerability of capital ships to aircraft when operating without air cover of their own.
The Japanese Kongō-class battlecruisers were extensively used as carrier escorts for most of their wartime career due to their high speed. Their World War I–era armament was weaker and their upgraded armour was still thin compared to contemporary battleships. On 13 November 1942, during the First Naval Battle of Guadalcanal, Hiei stumbled across American cruisers and destroyers at point-blank range. The ship was badly damaged in the encounter and had to be towed by her sister ship Kirishima. Both were spotted by American aircraft the following morning and Kirishima was forced to cast off her tow because of repeated aerial attacks. Hiei's captain ordered her crew to abandon ship after further damage and scuttled Hiei in the early evening of 14 November. On the night of 14/15 November during the Second Naval Battle of Guadalcanal, Kirishima returned to Ironbottom Sound, but encountered the American battleships South Dakota and Washington. While failing to detect Washington, Kirishima engaged South Dakota with some effect. Washington opened fire a few minutes later at short range and badly damaged Kirishima, knocking out her aft turrets, jamming her rudder, and hitting the ship below the waterline. The flooding proved to be uncontrollable and Kirishima capsized three and a half hours later.
Returning to Japan after the Battle of Leyte Gulf, Kongō was torpedoed and sunk by the American submarine Sealion II on 21 November 1944. Haruna was moored at Kure, Japan when the naval base was attacked by American carrier aircraft on 24 and 28 July 1945. The ship was only lightly damaged by a single bomb hit on 24 July, but was hit a dozen more times on 28 July and sank at her pier. She was refloated after the war and scrapped in early 1946.
On the eve of World War II there was a late renaissance in the popularity of ships intermediate in size between battleships and cruisers. Described by some as battlecruisers but never classified as capital ships, they were variously termed "super cruisers", "large cruisers" or even "unrestricted cruisers". The Dutch, American, and Japanese navies all planned these new classes specifically to counter the heavy cruisers, or their counterparts, being built by their naval rivals.
The first such ships were of the Dutch Design 1047, intended to protect the Netherlands' colonies in the East Indies in the face of Japanese aggression. Never officially assigned names, these ships were designed with German and Italian assistance. While they broadly resembled the German Scharnhorst class and carried the same main battery, they would have been more lightly armoured, protected only against eight-inch gunfire. Although the design was mostly completed, work on the vessels never commenced because the Germans overran the Netherlands in May 1940. The first ship would have been laid down in June of that year.
The only ships of this late battlecruiser type actually built were the United States Navy's Alaska-class "large cruisers". Two of them were completed, Alaska and Guam; a third, Hawaii, was cancelled while under construction, and three others, to be named Philippines, Puerto Rico and Samoa, were cancelled before they were laid down. Classified as "large cruisers" rather than battlecruisers, they were named after territories or protectorates (battleships were named after states, and cruisers after cities). With a main armament of nine 12-inch guns in three triple turrets and a displacement of 27,000 long tons (27,000 t), the Alaskas were twice the size of Baltimore-class cruisers and carried guns some 50% larger in diameter. They lacked the thick armoured belt and intricate torpedo defence system of true capital ships, but unlike most battlecruisers they were considered a balanced design by cruiser standards, as their protection could withstand fire from their own calibre of gun, albeit only within a very narrow range band. They were designed to hunt down Japanese heavy cruisers, though by the time they entered service most Japanese cruisers had been sunk by American aircraft or submarines. Like the contemporary Iowa-class fast battleships, their speed ultimately made them more useful as carrier escorts and bombardment ships than as the surface combatants they were developed to be.
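The size comparison above is simple arithmetic to verify. As a rough check, taking the Baltimores at about 13,600 long tons standard displacement with 8-inch (203 mm) main guns (Baltimore figures assumed from that class's published specifications, not given in this article):

$$
\frac{12\ \mathrm{in}}{8\ \mathrm{in}} = 1.5 \quad\text{(guns about 50\% larger in diameter)},
\qquad
\frac{27{,}000\ \text{long tons}}{13{,}600\ \text{long tons}} \approx 2.0 .
$$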
The Japanese started designing the B-64 class, which was similar to the Alaska but with 310-millimetre (12.2 in) guns. News of the Alaskas led them to upgrade the design, creating Design B-65. Armed with 356 mm guns, the B-65s would have been the best armed of the new breed of battlecruisers, but they still would have had only sufficient protection to keep out eight-inch shells. Much like the Dutch, the Japanese got as far as completing the design for the B-65s but never laid them down. By the time the design was ready, the Japanese Navy recognized that it had little use for the vessels and that construction priority should go to aircraft carriers. Like the Americans with the Alaskas, the Japanese did not call these ships battlecruisers, referring to them instead as super-heavy cruisers.
Although most navies abandoned the battleship and battlecruiser concepts after World War II, Joseph Stalin's fondness for big-gun warships led the Soviet Union to plan a large cruiser class in the late 1940s. In the Soviet Navy these ships were termed "heavy cruisers" (tyazhyoly kreyser). The fruits of this programme were the Project 82 (Stalingrad) cruisers, of 36,500 tonnes (35,900 long tons) standard load, with nine 305 mm (12 in) guns and a speed of 35 knots (65 km/h; 40 mph). Three ships were laid down in 1951–1952, but they were cancelled in April 1953 after Stalin's death. Only the central armoured hull section of the first ship, Stalingrad, was launched, in 1954, and was then used as a target.
The Soviet Kirov class is sometimes referred to as a battlecruiser. This description arises from the ships' displacement of over 24,000 tonnes (24,000 long tons), roughly equal to that of a First World War battleship and more than twice the displacement of contemporary cruisers; upon entry into service, Kirov was the largest surface combatant built since World War II. The Kirov class lacks the armour that distinguishes battlecruisers from ordinary cruisers, and the ships are classified by Russia as heavy nuclear-powered missile cruisers (tyazhyoly atomny raketny kreyser), with their primary surface armament consisting of twenty P-700 Granit surface-to-surface missiles. Four members of the class were completed during the 1980s and 1990s, but owing to budget constraints only Pyotr Velikiy is operational with the Russian Navy, though plans were announced in 2010 to return the other three ships to service. As of 2021, Admiral Nakhimov was being refitted, but the other two ships are reportedly beyond economical repair.
{
"paragraph_id": 0,
"text": "The battlecruiser (also written as battle cruiser or battle-cruiser) was a type of capital ship of the first half of the 20th century. These were similar in displacement, armament and cost to battleships, but differed in form and balance of attributes. Battlecruisers typically had thinner armour (to a varying degree) and a somewhat lighter main gun battery than contemporary battleships, installed on a longer hull with much higher engine power in order to attain greater speeds. The first battlecruisers were designed in the United Kingdom, as a development of the armoured cruiser, at the same time as the dreadnought succeeded the pre-dreadnought battleship. The goal of the design was to outrun any ship with similar armament, and chase down any ship with lesser armament; they were intended to hunt down slower, older armoured cruisers and destroy them with heavy gunfire while avoiding combat with the more powerful but slower battleships. However, as more and more battlecruisers were built, they were increasingly used alongside the better-protected battleships.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Battlecruisers served in the navies of the United Kingdom, Germany, the Ottoman Empire, Australia and Japan during World War I, most notably at the Battle of the Falkland Islands and in the several raids and skirmishes in the North Sea which culminated in a pitched fleet battle, the Battle of Jutland. British battlecruisers in particular suffered heavy losses at Jutland, where poor fire safety and ammunition handling practices left them vulnerable to catastrophic magazine explosions following hits to their main turrets from large-calibre shells. This dismal showing led to a persistent general belief that battlecruisers were too thinly armoured to function successfully. By the end of the war, capital ship design had developed, with battleships becoming faster and battlecruisers becoming more heavily armoured, blurring the distinction between a battlecruiser and a fast battleship. The Washington Naval Treaty, which limited capital ship construction from 1922 onwards, treated battleships and battlecruisers identically, and the new generation of battlecruisers planned by the United States, Great Britain and Japan were scrapped or converted into aircraft carriers under the terms of the treaty.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Improvements in armour design and propulsion created the 1930s \"fast battleship\" with the speed of a battlecruiser and armour of a battleship, making the battlecruiser in the traditional sense effectively an obsolete concept. Thus from the 1930s on, only the Royal Navy continued to use \"battlecruiser\" as a classification for the World War I–era capital ships that remained in the fleet; while Japan's battlecruisers remained in service, they had been significantly reconstructed and were re-rated as full-fledged fast battleships.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Battlecruisers were put into action again during World War II, and only one survived to the end. There was also renewed interest in large \"cruiser-killer\" type warships, but few were ever begun, as construction of battleships and battlecruisers was curtailed in favor of more-needed convoy escorts, aircraft carriers, and cargo ships. Near the end, and after the Cold War era, the Soviet Kirov class of large guided missile cruisers have been the only active ships termed \"battlecruisers\".",
"title": ""
},
{
"paragraph_id": 4,
"text": "The battlecruiser was developed by the Royal Navy in the first years of the 20th century as an evolution of the armoured cruiser. The first armoured cruisers had been built in the 1870s, as an attempt to give armour protection to ships fulfilling the typical cruiser roles of patrol, trade protection and power projection. However, the results were rarely satisfactory, as the weight of armour required for any meaningful protection usually meant that the ship became almost as slow as a battleship. As a result, navies preferred to build protected cruisers with an armoured deck protecting their engines, or simply no armour at all.",
"title": "Background"
},
{
"paragraph_id": 5,
"text": "In the 1890s, technology began to change this balance. New Krupp steel armour meant that it was now possible to give a cruiser side armour which would protect it against the quick-firing guns of enemy battleships and cruisers alike. In 1896–97 France and Russia, who were regarded as likely allies in the event of war, started to build large, fast armoured cruisers taking advantage of this. In the event of a war between Britain and France or Russia, or both, these cruisers threatened to cause serious difficulties for the British Empire's worldwide trade.",
"title": "Background"
},
{
"paragraph_id": 6,
"text": "Britain, which had concluded in 1892 that it needed twice as many cruisers as any potential enemy to adequately protect its empire's sea lanes, responded to the perceived threat by laying down its own large armoured cruisers. Between 1899 and 1905, it completed or laid down seven classes of this type, a total of 35 ships. This building program, in turn, prompted the French and Russians to increase their own construction. The Imperial German Navy began to build large armoured cruisers for use on their overseas stations, laying down eight between 1897 and 1906.",
"title": "Background"
},
{
"paragraph_id": 7,
"text": "The cost of this cruiser arms race was significant. In the period 1889–1896, the Royal Navy spent £7.3 million on new large cruisers. From 1897 to 1904, it spent £26.9 million. Many armoured cruisers of the new kind were just as large and expensive as the equivalent battleship.",
"title": "Background"
},
{
"paragraph_id": 8,
"text": "The increasing size and power of the armoured cruiser led to suggestions in British naval circles that cruisers should displace battleships entirely. The battleship's main advantage was its 12-inch heavy guns, and heavier armour designed to protect from shells of similar size. However, for a few years after 1900 it seemed that those advantages were of little practical value. The torpedo now had a range of 2,000 yards, and it seemed unlikely that a battleship would engage within torpedo range. However, at ranges of more than 2,000 yards it became increasingly unlikely that the heavy guns of a battleship would score any hits, as the heavy guns relied on primitive aiming techniques. The secondary batteries of 6-inch quick-firing guns, firing more plentiful shells, were more likely to hit the enemy. As naval expert Fred T. Jane wrote in June 1902,",
"title": "Background"
},
{
"paragraph_id": 9,
"text": "Is there anything outside of 2,000 yards that the big gun in its hundreds of tons of medieval castle can affect, that its weight in 6-inch guns without the castle could not affect equally well? And inside 2,000, what, in these days of gyros, is there that the torpedo cannot effect with far more certainty?",
"title": "Background"
},
{
"paragraph_id": 10,
"text": "In 1904, Admiral John \"Jacky\" Fisher became First Sea Lord, the senior officer of the Royal Navy. He had for some time thought about the development of a new fast armoured ship. He was very fond of the \"second-class battleship\" Renown, a faster, more lightly armoured battleship. As early as 1901, there is confusion in Fisher's writing about whether he saw the battleship or the cruiser as the model for future developments. This did not stop him from commissioning designs from naval architect W. H. Gard for an armoured cruiser with the heaviest possible armament for use with the fleet. The design Gard submitted was for a ship between 14,000–15,000 long tons (14,000–15,000 t), capable of 25 knots (46 km/h; 29 mph), armed with four 9.2-inch and twelve 7.5-inch (190 mm) guns in twin gun turrets and protected with six inches of armour along her belt and 9.2-inch turrets, 4 inches (102 mm) on her 7.5-inch turrets, 10 inches on her conning tower and up to 2.5 inches (64 mm) on her decks. However, mainstream British naval thinking between 1902 and 1904 was clearly in favour of heavily armoured battleships, rather than the fast ships that Fisher favoured.",
"title": "Background"
},
{
"paragraph_id": 11,
"text": "The Battle of Tsushima proved conclusively the effectiveness of heavy guns over intermediate ones and the need for a uniform main caliber on a ship for fire control. Even before this, the Royal Navy had begun to consider a shift away from the mixed-calibre armament of the 1890s pre-dreadnought to an \"all-big-gun\" design, and preliminary designs circulated for battleships with all 12-inch or all 10-inch guns and armoured cruisers with all 9.2-inch guns. In late 1904, not long after the Royal Navy had decided to use 12-inch guns for its next generation of battleships because of their superior performance at long range, Fisher began to argue that big-gun cruisers could replace battleships altogether. The continuing improvement of the torpedo meant that submarines and destroyers would be able to destroy battleships; this in Fisher's view heralded the end of the battleship or at least compromised the validity of heavy armour protection. Nevertheless, armoured cruisers would remain vital for commerce protection.",
"title": "Background"
},
{
"paragraph_id": 12,
"text": "Of what use is a battle fleet to a country called (A) at war with a country called (B) possessing no battleships, but having fast armoured cruisers and clouds of fast torpedo craft? What damage would (A's) battleships do to (B)? Would (B) wish for a few battleships or for more armoured cruisers? Would not (A) willingly exchange a few battleships for more fast armoured cruisers? In such a case, neither side wanting battleships is presumptive evidence that they are not of much value.",
"title": "Background"
},
{
"paragraph_id": 13,
"text": "Fisher's views were very controversial within the Royal Navy, and even given his position as First Sea Lord, he was not in a position to insist on his own approach. Thus he assembled a \"Committee on Designs\", consisting of a mixture of civilian and naval experts, to determine the approach to both battleship and armoured cruiser construction in the future. While the stated purpose of the committee was to investigate and report on future requirements of ships, Fisher and his associates had already made key decisions. The terms of reference for the committee were for a battleship capable of 21 knots (39 km/h; 24 mph) with 12-inch guns and no intermediate calibres, capable of docking in existing drydocks; and a cruiser capable of 25.5 knots (47.2 km/h; 29.3 mph), also with 12-inch guns and no intermediate armament, armoured like Minotaur, the most recent armoured cruiser, and also capable of using existing docks.",
"title": "Background"
},
{
"paragraph_id": 14,
"text": "Under the Selborne plan of 1902, the Royal Navy intended to start three new battleships and four armoured cruisers each year. However, in late 1904 it became clear that the 1905–1906 programme would have to be considerably smaller, because of lower than expected tax revenue and the need to buy out two Chilean battleships under construction in British yards, lest they be purchased by the Russians for use against the Japanese, Britain's ally. These economies meant that the 1905–1906 programme consisted only of one battleship, but three armoured cruisers. The battleship became the revolutionary battleship Dreadnought, and the cruisers became the three ships of the Invincible class. Fisher later claimed, however, that he had argued during the committee for the cancellation of the remaining battleship.",
"title": "First battlecruisers"
},
{
"paragraph_id": 15,
"text": "The construction of the new class was begun in 1906 and completed in 1908, delayed perhaps to allow their designers to learn from any problems with Dreadnought. The ships fulfilled the design requirement quite closely. On a displacement similar to Dreadnought, the Invincibles were 40 feet (12.2 m) longer to accommodate additional boilers and more powerful turbines to propel them at 25 knots (46 km/h; 29 mph). Moreover, the new ships could maintain this speed for days, whereas pre-dreadnought battleships could not generally do so for more than an hour. Armed with eight 12-inch Mk X guns, compared to ten on Dreadnought, they had 6–7 inches (152–178 mm) of armour protecting the hull and the gun turrets. (Dreadnought's armour, by comparison, was 11–12 inches (279–305 mm) at its thickest.) The class had a very marked increase in speed, displacement and firepower compared to the most recent armoured cruisers but no more armour.",
"title": "First battlecruisers"
},
{
"paragraph_id": 16,
"text": "While the Invincibles were to fill the same role as the armoured cruisers they succeeded, they were expected to do so more effectively. Specifically their roles were:",
"title": "First battlecruisers"
},
{
"paragraph_id": 17,
"text": "Confusion about how to refer to these new battleship-size armoured cruisers set in almost immediately. Even in late 1905, before work was begun on the Invincibles, a Royal Navy memorandum refers to \"large armoured ships\" meaning both battleships and large cruisers. In October 1906, the Admiralty began to classify all post-Dreadnought battleships and armoured cruisers as \"capital ships\", while Fisher used the term \"dreadnought\" to refer either to his new battleships or the battleships and armoured cruisers together. At the same time, the Invincible class themselves were referred to as \"cruiser-battleships\", \"dreadnought cruisers\"; the term \"battlecruiser\" was first used by Fisher in 1908. Finally, on 24 November 1911, Admiralty Weekly Order No. 351 laid down that \"All cruisers of the \"Invincible\" and later types are for the future to be described and classified as \"battle cruisers\" to distinguish them from the armoured cruisers of earlier date.\"",
"title": "First battlecruisers"
},
{
"paragraph_id": 18,
"text": "Along with questions over the new ships' nomenclature came uncertainty about their actual role due to their lack of protection. If they were primarily to act as scouts for the battle fleet and hunter-killers of enemy cruisers and commerce raiders, then the seven inches of belt armour with which they had been equipped would be adequate. If, on the other hand, they were expected to reinforce a battle line of dreadnoughts with their own heavy guns, they were too thin-skinned to be safe from an enemy's heavy guns. The Invincibles were essentially extremely large, heavily armed, fast armoured cruisers. However, the viability of the armoured cruiser was already in doubt. A cruiser that could have worked with the Fleet might have been a more viable option for taking over that role.",
"title": "First battlecruisers"
},
{
"paragraph_id": 19,
"text": "Because of the Invincibles' size and armament, naval authorities considered them capital ships almost from their inception—an assumption that might have been inevitable. Complicating matters further was that many naval authorities, including Lord Fisher, had made overoptimistic assessments from the Battle of Tsushima in 1905 about the armoured cruiser's ability to survive in a battle line against enemy capital ships due to their superior speed. These assumptions had been made without taking into account the Russian Baltic Fleet's inefficiency and tactical ineptitude. By the time the term \"battlecruiser\" had been given to the Invincibles, the idea of their parity with battleships had been fixed in many people's minds.",
"title": "First battlecruisers"
},
{
"paragraph_id": 20,
"text": "Not everyone was so convinced. Brassey's Naval Annual, for instance, stated that with vessels as large and expensive as the Invincibles, an admiral \"will be certain to put them in the line of battle where their comparatively light protection will be a disadvantage and their high speed of no value.\" Those in favor of the battlecruiser countered with two points—first, since all capital ships were vulnerable to new weapons such as the torpedo, armour had lost some of its validity; and second, because of its greater speed, the battlecruiser could control the range at which it engaged an enemy.",
"title": "First battlecruisers"
},
{
"paragraph_id": 21,
"text": "Between the launching of the Invincibles to just after the outbreak of the First World War, the battlecruiser played a junior role in the developing dreadnought arms race, as it was never wholeheartedly adopted as the key weapon in British imperial defence, as Fisher had presumably desired. The biggest factor for this lack of acceptance was the marked change in Britain's strategic circumstances between their conception and the commissioning of the first ships. The prospective enemy for Britain had shifted from a Franco-Russian alliance with many armoured cruisers to a resurgent and increasingly belligerent Germany. Diplomatically, Britain had entered the Entente cordiale in 1904 and the Anglo-Russian Entente. Neither France nor Russia posed a particular naval threat; the Russian navy had largely been sunk or captured in the Russo-Japanese War of 1904–1905, while the French were in no hurry to adopt the new dreadnought-type design. Britain also boasted very cordial relations with two of the significant new naval powers: Japan (bolstered by the Anglo-Japanese Alliance, signed in 1902 and renewed in 1905), and the US. These changed strategic circumstances, and the great success of the Dreadnought ensured that she rather than the Invincible became the new model capital ship. Nevertheless, battlecruiser construction played a part in the renewed naval arms race sparked by the Dreadnought.",
"title": "Battlecruisers in the dreadnought arms race"
},
{
"paragraph_id": 22,
"text": "For their first few years of service, the Invincibles entirely fulfilled Fisher's vision of being able to sink any ship fast enough to catch them, and run from any ship capable of sinking them. An Invincible would also, in many circumstances, be able to take on an enemy pre-dreadnought battleship. Naval circles concurred that the armoured cruiser in its current form had come to the logical end of its development and the Invincibles were so far ahead of any enemy armoured cruiser in firepower and speed that it proved difficult to justify building more or bigger cruisers. This lead was extended by the surprise both Dreadnought and Invincible produced by having been built in secret; this prompted most other navies to delay their building programmes and radically revise their designs. This was particularly true for cruisers, because the details of the Invincible class were kept secret for longer; this meant that the last German armoured cruiser, Blücher, was armed with only 21-centimetre (8.3 in) guns, and was no match for the new battlecruisers.",
"title": "Battlecruisers in the dreadnought arms race"
},
{
"paragraph_id": 23,
"text": "The Royal Navy's early superiority in capital ships led to the rejection of a 1905–1906 design that would, essentially, have fused the battlecruiser and battleship concepts into what would eventually become the fast battleship. The 'X4' design combined the full armour and armament of Dreadnought with the 25-knot speed of Invincible. The additional cost could not be justified given the existing British lead and the new Liberal government's need for economy; the slower and cheaper Bellerophon, a relatively close copy of Dreadnought, was adopted instead. The X4 concept would eventually be fulfilled in the Queen Elizabeth class and later by other navies.",
"title": "Battlecruisers in the dreadnought arms race"
},
{
"paragraph_id": 24,
"text": "The next British battlecruisers were the three Indefatigable class, slightly improved Invincibles built to fundamentally the same specification, partly due to political pressure to limit costs and partly due to the secrecy surrounding German battlecruiser construction, particularly about the heavy armour of SMS Von der Tann. This class came to be widely seen as a mistake and the next generation of British battlecruisers were markedly more powerful. By 1909–1910 a sense of national crisis about rivalry with Germany outweighed cost-cutting, and a naval panic resulted in the approval of a total of eight capital ships in 1909–1910. Fisher pressed for all eight to be battlecruisers, but was unable to have his way; he had to settle for six battleships and two battlecruisers of the Lion class. The Lions carried eight 13.5-inch guns, the now-standard caliber of the British \"super-dreadnought\" battleships. Speed increased to 27 knots (50 km/h; 31 mph) and armour protection, while not as good as in German designs, was better than in previous British battlecruisers, with nine-inch (230 mm) armour belt and barbettes. The two Lions were followed by the very similar Queen Mary.",
"title": "Battlecruisers in the dreadnought arms race"
},
{
"paragraph_id": 25,
"text": "By 1911 Germany had built battlecruisers of her own, and the superiority of the British ships could no longer be assured. Moreover, the German Navy did not share Fisher's view of the battlecruiser. In contrast to the British focus on increasing speed and firepower, Germany progressively improved the armour and staying power of their ships to better the British battlecruisers. Von der Tann, begun in 1908 and completed in 1910, carried eight 11.1-inch guns, but with 11.1-inch (283 mm) armour she was far better protected than the Invincibles. The two Moltkes were quite similar but carried ten 11.1-inch guns of an improved design. Seydlitz, designed in 1909 and finished in 1913, was a modified Moltke; speed increased by one knot to 26.5 knots (49.1 km/h; 30.5 mph), while her armour had a maximum thickness of 12 inches, equivalent to the Helgoland-class battleships of a few years earlier. Seydlitz was Germany's last battlecruiser completed before World War I.",
"title": "Battlecruisers in the dreadnought arms race"
},
{
"paragraph_id": 26,
"text": "The next step in battlecruiser design came from Japan. The Imperial Japanese Navy had been planning the Kongō-class ships from 1909, and was determined that, since the Japanese economy could support relatively few ships, each would be more powerful than its likely competitors. Initially the class was planned with the Invincibles as the benchmark. On learning of the British plans for Lion, and the likelihood that new U.S. Navy battleships would be armed with 14-inch (360 mm) guns, the Japanese decided to radically revise their plans and go one better. A new plan was drawn up, carrying eight 14-inch guns, and capable of 27.5 knots (50.9 km/h; 31.6 mph), thus marginally having the edge over the Lions in speed and firepower. The heavy guns were also better-positioned, being superfiring both fore and aft with no turret amidships. The armour scheme was also marginally improved over the Lions, with nine inches of armour on the turrets and 8 inches (203 mm) on the barbettes. The first ship in the class was built in Britain, and a further three constructed in Japan. The Japanese also re-classified their powerful armoured cruisers of the Tsukuba and Ibuki classes, carrying four 12-inch guns, as battlecruisers; nonetheless, their armament was weaker and they were slower than any battlecruiser.",
"title": "Battlecruisers in the dreadnought arms race"
},
{
"paragraph_id": 27,
"text": "The next British battlecruiser, Tiger, was intended initially as the fourth ship in the Lion class, but was substantially redesigned. She retained the eight 13.5-inch guns of her predecessors, but they were positioned like those of Kongō for better fields of fire. She was faster (making 29 knots (54 km/h; 33 mph) on sea trials), and carried a heavier secondary armament. Tiger was also more heavily armoured on the whole; while the maximum thickness of armour was the same at nine inches, the height of the main armour belt was increased. Not all the desired improvements for this ship were approved, however. Her designer, Sir Eustace Tennyson d'Eyncourt, had wanted small-bore water-tube boilers and geared turbines to give her a speed of 32 knots (59 km/h; 37 mph), but he received no support from the authorities and the engine makers refused his request.",
"title": "Battlecruisers in the dreadnought arms race"
},
{
"paragraph_id": 28,
"text": "1912 saw work begin on three more German battlecruisers of the Derfflinger class, the first German battlecruisers to mount 12-inch guns. These ships, like Tiger and the Kongōs, had their guns arranged in superfiring turrets for greater efficiency. Their armour and speed was similar to the previous Seydlitz class. In 1913, the Russian Empire also began the construction of the four-ship Borodino class, which were designed for service in the Baltic Sea. These ships were designed to carry twelve 14-inch guns, with armour up to 12 inches thick, and a speed of 26.6 knots (49.3 km/h; 30.6 mph). The heavy armour and relatively slow speed of these ships made them more similar to German designs than to British ships; construction of the Borodinos was halted by the First World War and all were scrapped after the end of the Russian Civil War.",
"title": "Battlecruisers in the dreadnought arms race"
},
{
"paragraph_id": 29,
"text": "For most of the combatants, capital ship construction was very limited during the war. Germany finished the Derfflinger class and began work on the Mackensen class. The Mackensens were a development of the Derfflinger class, with 13.8-inch guns and a broadly similar armour scheme, designed for 28 knots (52 km/h; 32 mph).",
"title": "World War I"
},
{
"paragraph_id": 30,
"text": "In Britain, Jackie Fisher returned to the office of First Sea Lord in October 1914. His enthusiasm for big, fast ships was unabated, and he set designers to producing a design for a battlecruiser with 15-inch guns. Because Fisher expected the next German battlecruiser to steam at 28 knots, he required the new British design to be capable of 32 knots. He planned to reorder two Revenge-class battleships, which had been approved but not yet laid down, to a new design. Fisher finally received approval for this project on 28 December 1914 and they became the Renown class. With six 15-inch guns but only 6-inch armour they were a further step forward from Tiger in firepower and speed, but returned to the level of protection of the first British battlecruisers.",
"title": "World War I"
},
{
"paragraph_id": 31,
"text": "At the same time, Fisher resorted to subterfuge to obtain another three fast, lightly armoured ships that could use several spare 15-inch (381 mm) gun turrets left over from battleship construction. These ships were essentially light battlecruisers, and Fisher occasionally referred to them as such, but officially they were classified as large light cruisers. This unusual designation was required because construction of new capital ships had been placed on hold, while there were no limits on light cruiser construction. They became Courageous and her sisters Glorious and Furious, and there was a bizarre imbalance between their main guns of 15 inches (or 18 inches (457 mm) in Furious) and their armour, which at three inches (76 mm) thickness was on the scale of a light cruiser. The design was generally regarded as a failure (nicknamed in the Fleet Outrageous, Uproarious and Spurious), though the later conversion of the ships to aircraft carriers was very successful. Fisher also speculated about a new mammoth, but lightly built battlecruiser, that would carry 20-inch (508 mm) guns, which he termed HMS Incomparable; this never got beyond the concept stage.",
"title": "World War I"
},
{
"paragraph_id": 32,
"text": "It is often held that the Renown and Courageous classes were designed for Fisher's plan to land troops (possibly Russian) on the German Baltic coast. Specifically, they were designed with a reduced draught, which might be important in the shallow Baltic. This is not clear-cut evidence that the ships were designed for the Baltic: it was considered that earlier ships had too much draught and not enough freeboard under operational conditions. Roberts argues that the focus on the Baltic was probably unimportant at the time the ships were designed, but was inflated later, after the disastrous Dardanelles Campaign.",
"title": "World War I"
},
{
"paragraph_id": 33,
"text": "The final British battlecruiser design of the war was the Admiral class, which was born from a requirement for an improved version of the Queen Elizabeth battleship. The project began at the end of 1915, after Fisher's final departure from the Admiralty. While initially envisaged as a battleship, senior sea officers felt that Britain had enough battleships, but that new battlecruisers might be required to combat German ships being built (the British overestimated German progress on the Mackensen class as well as their likely capabilities). A battlecruiser design with eight 15-inch guns, 8 inches of armour and capable of 32 knots was decided on. The experience of battlecruisers at the Battle of Jutland meant that the design was radically revised and transformed again into a fast battleship with armour up to 12 inches thick, but still capable of 31.5 knots (58.3 km/h; 36.2 mph). The first ship in the class, Hood, was built according to this design to counter the possible completion of any of the Mackensen-class ship. The plans for her three sisters, on which little work had been done, were revised once more later in 1916 and in 1917 to improve protection.",
"title": "World War I"
},
{
"paragraph_id": 34,
"text": "The Admiral class would have been the only British ships capable of taking on the German Mackensen class; nevertheless, German shipbuilding was drastically slowed by the war, and while two Mackensens were launched, none were ever completed. The Germans also worked briefly on a further three ships, of the Ersatz Yorck class, which were modified versions of the Mackensens with 15-inch guns. Work on the three additional Admirals was suspended in March 1917 to enable more escorts and merchant ships to be built to deal with the new threat from U-boats to trade. They were finally cancelled in February 1919.",
"title": "World War I"
},
{
"paragraph_id": 35,
"text": "The first combat involving battlecruisers during World War I was the Battle of Heligoland Bight in August 1914. A force of British light cruisers and destroyers entered the Heligoland Bight (the part of the North Sea closest to Hamburg) to attack German destroyer patrols. When they met opposition from light cruisers, Vice Admiral David Beatty took his squadron of five battlecruisers into the Bight and turned the tide of the battle, ultimately sinking three German light cruisers and killing their commander, Rear Admiral Leberecht Maass.",
"title": "World War I"
},
{
"paragraph_id": 36,
"text": "The German battlecruiser Goeben perhaps made the most impact early in the war. Stationed in the Mediterranean, she and the escorting light cruiser SMS Breslau evaded British and French ships on the outbreak of war, and steamed to Constantinople (Istanbul) with two British battlecruisers in hot pursuit. The two German ships were handed over to the Ottoman Navy, and this was instrumental in bringing the Ottoman Empire into the war as one of the Central Powers. Goeben herself, renamed Yavuz Sultan Selim, fought engagements against the Imperial Russian Navy in the Black Sea before being knocked out of the action for the remainder of the war after the Battle of Imbros against British forces in the Aegean Sea in January 1918.",
"title": "World War I"
},
{
"paragraph_id": 37,
"text": "The original battlecruiser concept proved successful in December 1914 at the Battle of the Falkland Islands. The British battlecruisers Inflexible and Invincible did precisely the job for which they were intended when they chased down and annihilated the German East Asia Squadron, centered on the armoured cruisers Scharnhorst and Gneisenau, along with three light cruisers, commanded by Admiral Maximilian Graf Von Spee, in the South Atlantic Ocean. Prior to the battle, the Australian battlecruiser Australia had unsuccessfully searched for the German ships in the Pacific.",
"title": "World War I"
},
{
"paragraph_id": 38,
"text": "During the Battle of Dogger Bank in 1915, the aftermost barbette of the German flagship Seydlitz was struck by a British 13.5-inch shell from HMS Lion. The shell did not penetrate the barbette, but it dislodged a piece of the barbette armour that allowed the flame from the shell's detonation to enter the barbette. The propellant charges being hoisted upwards were ignited, and the fireball flashed up into the turret and down into the magazine, setting fire to charges removed from their brass cartridge cases. The gun crew tried to escape into the next turret, which allowed the flash to spread into that turret as well, killing the crews of both turrets. Seydlitz was saved from near-certain destruction only by emergency flooding of her after magazines, which had been effected by Wilhelm Heidkamp. This near-disaster was due to the way that ammunition handling was arranged and was common to both German and British battleships and battlecruisers, but the lighter protection on the latter made them more vulnerable to the turret or barbette being penetrated. The Germans learned from investigating the damaged Seydlitz and instituted measures to ensure that ammunition handling minimised any possible exposure to flash.",
"title": "World War I"
},
{
"paragraph_id": 39,
"text": "Apart from the cordite handling, the battle was mostly inconclusive, though both the British flagship Lion and Seydlitz were severely damaged. Lion lost speed, causing her to fall behind the rest of the battleline, and Beatty was unable to effectively command his ships for the remainder of the engagement. A British signalling error allowed the German battlecruisers to withdraw, as most of Beatty's squadron mistakenly concentrated on the crippled armoured cruiser Blücher, sinking her with great loss of life. The British blamed their failure to win a decisive victory on their poor gunnery and attempted to increase their rate of fire by stockpiling unprotected cordite charges in their ammunition hoists and barbettes.",
"title": "World War I"
},
{
"paragraph_id": 40,
"text": "At the Battle of Jutland on 31 May 1916, both British and German battlecruisers were employed as fleet units. The British battlecruisers became engaged with both their German counterparts, the battlecruisers, and then German battleships before the arrival of the battleships of the British Grand Fleet. The result was a disaster for the Royal Navy's battlecruiser squadrons: Invincible, Queen Mary, and Indefatigable exploded with the loss of all but a handful of their crews. The exact reason why the ships' magazines detonated is not known, but the plethora of exposed cordite charges stored in their turrets, ammunition hoists and working chambers in the quest to increase their rate of fire undoubtedly contributed to their loss. Beatty's flagship Lion herself was almost lost in a similar manner, save for the heroic actions of Major Francis Harvey.",
"title": "World War I"
},
{
"paragraph_id": 41,
"text": "The better-armoured German battlecruisers fared better, in part due to the poor performance of British fuzes (the British shells tended to explode or break up on impact with the German armour). Lützow—the only German battlecruiser lost at Jutland—had only 128 killed, for instance, despite receiving more than thirty hits. The other German battlecruisers, Moltke, Von der Tann, Seydlitz, and Derfflinger, were all heavily damaged and required extensive repairs after the battle, Seydlitz barely making it home, for they had been the focus of British fire for much of the battle.",
"title": "World War I"
},
{
"paragraph_id": 42,
"text": "In the years immediately after World War I, Britain, Japan and the US all began design work on a new generation of ever more powerful battleships and battlecruisers. The new burst of shipbuilding that each nation's navy desired was politically controversial and potentially economically crippling. This nascent arms race was prevented by the Washington Naval Treaty of 1922, where the major naval powers agreed to limits on capital ship numbers. The German navy was not represented at the talks; under the terms of the Treaty of Versailles, Germany was not allowed any modern capital ships at all.",
"title": "Interwar period"
},
{
"paragraph_id": 43,
"text": "Through the 1920s and 1930s only Britain and Japan retained battlecruisers, often modified and rebuilt from their original designs. The line between the battlecruiser and the modern fast battleship became blurred; indeed, the Japanese Kongōs were formally redesignated as battleships after their very comprehensive reconstruction in the 1930s.",
"title": "Interwar period"
},
{
"paragraph_id": 44,
"text": "Hood, launched in 1918, was the last World War I battlecruiser to be completed. Owing to lessons from Jutland, the ship was modified during construction; the thickness of her belt armour was increased by an average of 50 percent and extended substantially, she was given heavier deck armour, and the protection of her magazines was improved to guard against the ignition of ammunition. This was hoped to be capable of resisting her own weapons—the classic measure of a \"balanced\" battleship. Hood was the largest ship in the Royal Navy when completed; thanks to her great displacement, in theory she combined the firepower and armour of a battleship with the speed of a battlecruiser, causing some to refer to her as a fast battleship. However, her protection was markedly less than that of the British battleships built immediately after World War I, the Nelson class.",
"title": "Interwar period"
},
{
"paragraph_id": 45,
"text": "The navies of Japan and the United States, not being affected immediately by the war, had time to develop new heavy 16-inch (410 mm) guns for their latest designs and to refine their battlecruiser designs in light of combat experience in Europe. The Imperial Japanese Navy began four Amagi-class battlecruisers. These vessels would have been of unprecedented size and power, as fast and well armoured as Hood whilst carrying a main battery of ten 16-inch guns, the most powerful armament ever proposed for a battlecruiser. They were, for all intents and purposes, fast battleships—the only differences between them and the Tosa-class battleships which were to precede them were 1 inch (25 mm) less side armour and a .25 knots (0.46 km/h; 0.29 mph) increase in speed. The United States Navy, which had worked on its battlecruiser designs since 1913 and watched the latest developments in this class with great care, responded with the Lexington class. If completed as planned, they would have been exceptionally fast and well armed with eight 16-inch guns, but carried armour little better than the Invincibles—this after an 8,000-long-ton (8,100 t) increase in protection following Jutland. The final stage in the post-war battlecruiser race came with the British response to the Amagi and Lexington types: four 48,000-long-ton (49,000 t) G3 battlecruisers. Royal Navy documents of the period often described any battleship with a speed of over about 24 knots (44 km/h; 28 mph) as a battlecruiser, regardless of the amount of protective armour, although the G3 was considered by most to be a well-balanced fast battleship.",
"title": "Interwar period"
},
{
"paragraph_id": 46,
"text": "The Washington Naval Treaty meant that none of these designs came to fruition. Ships that had been started were either broken up on the slipway or converted to aircraft carriers. In Japan, Amagi and Akagi were selected for conversion. Amagi was damaged beyond repair by the 1923 Great Kantō earthquake and was broken up for scrap; the hull of one of the proposed Tosa-class battleships, Kaga, was converted in her stead. The United States Navy also converted two battlecruiser hulls into aircraft carriers in the wake of the Washington Treaty: USS Lexington and USS Saratoga, although this was only considered marginally preferable to scrapping the hulls outright (the remaining four: Constellation, Ranger, Constitution and United States were scrapped). In Britain, Fisher's \"large light cruisers,\" were converted to carriers. Furious had already been partially converted during the war and Glorious and Courageous were similarly converted.",
"title": "Interwar period"
},
{
"paragraph_id": 47,
"text": "In total, nine battlecruisers survived the Washington Naval Treaty, although HMS Tiger later became a victim of the London Naval Conference 1930 and was scrapped. Because their high speed made them valuable surface units in spite of their weaknesses, most of these ships were significantly updated before World War II. Renown and Repulse were modernized significantly in the 1920s and 1930s. Between 1934 and 1936, Repulse was partially modernized and had her bridge modified, an aircraft hangar, catapult and new gunnery equipment added and her anti-aircraft armament increased. Renown underwent a more thorough reconstruction between 1937 and 1939. Her deck armour was increased, new turbines and boilers were fitted, an aircraft hangar and catapult added and she was completely rearmed aside from the main guns which had their elevation increased to +30 degrees. The bridge structure was also removed and a large bridge similar to that used in the King George V-class battleships installed in its place. While conversions of this kind generally added weight to the vessel, Renown's tonnage actually decreased due to a substantially lighter power plant. Similar thorough rebuildings planned for Repulse and Hood were cancelled due to the advent of World War II.",
"title": "Interwar period"
},
{
"paragraph_id": 48,
"text": "Unable to build new ships, the Imperial Japanese Navy also chose to improve its existing battlecruisers of the Kongō class (initially the Haruna, Kirishima, and Kongō—the Hiei only later as it had been disarmed under the terms of the Washington treaty) in two substantial reconstructions (one for Hiei). During the first of these, elevation of their main guns was increased to +40 degrees, anti-torpedo bulges and 3,800 long tons (3,900 t) of horizontal armour added, and a \"pagoda\" mast with additional command positions built up. This reduced the ships' speed to 25.9 knots (48.0 km/h; 29.8 mph). The second reconstruction focused on speed as they had been selected as fast escorts for aircraft carrier task forces. Completely new main engines, a reduced number of boilers and an increase in hull length by 26 feet (7.9 m) allowed them to reach up to 30 knots once again. They were reclassified as \"fast battleships,\" although their armour and guns still fell short compared to surviving World War I–era battleships in the American or the British navies, with dire consequences during the Pacific War, when Hiei and Kirishima were easily crippled by US gunfire during actions off Guadalcanal, forcing their scuttling shortly afterwards. Perhaps most tellingly, Hiei was crippled by medium-caliber gunfire from heavy and light cruisers in a close-range night engagement.",
"title": "Interwar period"
},
{
"paragraph_id": 49,
"text": "There were two exceptions: Turkey's Yavuz Sultan Selim and the Royal Navy's Hood. The Turkish Navy made only minor improvements to the ship in the interwar period, which primarily focused on repairing wartime damage and the installation of new fire control systems and anti-aircraft batteries. Hood was in constant service with the fleet and could not be withdrawn for an extended reconstruction. She received minor improvements over the course of the 1930s, including modern fire control systems, increased numbers of anti-aircraft guns, and in March 1941, radar.",
"title": "Interwar period"
},
{
"paragraph_id": 50,
"text": "In the late 1930s navies began to build capital ships again, and during this period a number of large commerce raiders and small, fast battleships were built that are sometimes referred to as battlecruisers. Germany and Russia designed new battlecruisers during this period, though only the latter laid down two of the 35,000-ton Kronshtadt class. They were still on the slipways when the Germans invaded in 1941 and construction was suspended. Both ships were scrapped after the war.",
"title": "Interwar period"
},
{
"paragraph_id": 51,
"text": "The Germans planned three battlecruisers of the O class as part of the expansion of the Kriegsmarine (Plan Z). With six 15-inch guns, high speed, excellent range, but very thin armour, they were intended as commerce raiders. Only one was ordered shortly before World War II; no work was ever done on it. No names were assigned, and they were known by their contract names: 'O', 'P', and 'Q'. The new class was not universally welcomed in the Kriegsmarine. Their abnormally-light protection gained it the derogatory nickname Ohne Panzer Quatsch (without armour nonsense) within certain circles of the Navy.",
"title": "Interwar period"
},
{
"paragraph_id": 52,
"text": "The Royal Navy deployed some of its battlecruisers during the Norwegian Campaign in April 1940. The Gneisenau and the Scharnhorst were engaged during the action off Lofoten by Renown in very bad weather and disengaged after Gneisenau was damaged. One of Renown's 15-inch shells passed through Gneisenau's director-control tower without exploding, severing electrical and communication cables as it went and destroyed the rangefinders for the forward 150 mm (5.9 in) turrets. Main-battery fire control had to be shifted aft due to the loss of electrical power. Another shell from Renown knocked out Gneisenau's aft turret. The British ship was struck twice by German shells that failed to inflict any significant damage. She was the only pre-war battlecruiser to survive the war.",
"title": "World War II"
},
{
"paragraph_id": 53,
"text": "In the early years of the war various German ships had a measure of success hunting merchant ships in the Atlantic. Allied battlecruisers such as Renown, Repulse, and the fast battleships Dunkerque and Strasbourg were employed on operations to hunt down the commerce-raiding German ships. The one stand-up fight occurred when the battleship Bismarck and the heavy cruiser Prinz Eugen sortied into the North Atlantic to attack British shipping and were intercepted by Hood and the battleship Prince of Wales in May 1941 in the Battle of the Denmark Strait. The elderly British battlecruiser was no match for the modern German battleship: within minutes, the Bismarck's 15-inch shells caused a magazine explosion in Hood reminiscent of the Battle of Jutland. Only three men survived.",
"title": "World War II"
},
{
"paragraph_id": 54,
"text": "The first battlecruiser to see action in the Pacific War was Repulse when she was sunk by Japanese torpedo bombers north of Singapore on 10 December 1941 whilst in company with Prince of Wales. She was lightly damaged by a single 250-kilogram (550 lb) bomb and near-missed by two others in the first Japanese attack. Her speed and agility enabled her to avoid the other attacks by level bombers and dodge 33 torpedoes. The last group of torpedo bombers attacked from multiple directions and Repulse was struck by five torpedoes. She quickly capsized with the loss of 27 officers and 486 crewmen; 42 officers and 754 enlisted men were rescued by the escorting destroyers. The loss of Repulse and Prince of Wales conclusively proved the vulnerability of capital ships to aircraft without air cover of their own.",
"title": "World War II"
},
{
"paragraph_id": 55,
"text": "The Japanese Kongō-class battlecruisers were extensively used as carrier escorts for most of their wartime career due to their high speed. Their World War I–era armament was weaker and their upgraded armour was still thin compared to contemporary battleships. On 13 November 1942, during the First Naval Battle of Guadalcanal, Hiei stumbled across American cruisers and destroyers at point-blank range. The ship was badly damaged in the encounter and had to be towed by her sister ship Kirishima. Both were spotted by American aircraft the following morning and Kirishima was forced to cast off her tow because of repeated aerial attacks. Hiei's captain ordered her crew to abandon ship after further damage and scuttled Hiei in the early evening of 14 November. On the night of 14/15 November during the Second Naval Battle of Guadalcanal, Kirishima returned to Ironbottom Sound, but encountered the American battleships South Dakota and Washington. While failing to detect Washington, Kirishima engaged South Dakota with some effect. Washington opened fire a few minutes later at short range and badly damaged Kirishima, knocking out her aft turrets, jamming her rudder, and hitting the ship below the waterline. The flooding proved to be uncontrollable and Kirishima capsized three and a half hours later.",
"title": "World War II"
},
{
"paragraph_id": 56,
"text": "Returning to Japan after the Battle of Leyte Gulf, Kongō was torpedoed and sunk by the American submarine Sealion II on 21 November 1944. Haruna was moored at Kure, Japan when the naval base was attacked by American carrier aircraft on 24 and 28 July 1945. The ship was only lightly damaged by a single bomb hit on 24 July, but was hit a dozen more times on 28 July and sank at her pier. She was refloated after the war and scrapped in early 1946.",
"title": "World War II"
},
{
"paragraph_id": 57,
"text": "A late renaissance in popularity of ships between battleships and cruisers in size occurred on the eve of World War II. Described by some as battlecruisers, but never classified as capital ships, they were variously described as \"super cruisers\", \"large cruisers\" or even \"unrestricted cruisers\". The Dutch, American, and Japanese navies all planned these new classes specifically to counter the heavy cruisers, or their counterparts, being built by their naval rivals.",
"title": "World War II"
},
{
"paragraph_id": 58,
"text": "The first such battlecruisers were the Dutch Design 1047, designed to protect their colonies in the East Indies in the face of Japanese aggression. Never officially assigned names, these ships were designed with German and Italian assistance. While they broadly resembled the German Scharnhorst class and had the same main battery, they would have been more lightly armoured and only protected against eight-inch gunfire. Although the design was mostly completed, work on the vessels never commenced as the Germans overran the Netherlands in May 1940. The first ship would have been laid down in June of that year.",
"title": "World War II"
},
{
"paragraph_id": 59,
"text": "The only class of these late battlecruisers actually built were the United States Navy's Alaska-class \"large cruisers\". Two of them were completed, Alaska and Guam; a third, Hawaii, was cancelled while under construction and three others, to be named Philippines, Puerto Rico and Samoa, were cancelled before they were laid down. They were classified as \"large cruisers\" instead of battlecruisers. These ships were named after territories or protectorates. (Battleships, were named after states and cruisers after cities.) With a main armament of nine 12-inch guns in three triple turrets and a displacement of 27,000 long tons (27,000 t), the Alaskas were twice the size of Baltimore-class cruisers and had guns some 50% larger in diameter. They lacked the thick armoured belt and intricate torpedo defence system of true capital ships. However, unlike most battlecruisers, they were considered a balanced design according to cruiser standards as their protection could withstand fire from their own caliber of gun, albeit only in a very narrow range band. They were designed to hunt down Japanese heavy cruisers, though by the time they entered service most Japanese cruisers had been sunk by American aircraft or submarines. Like the contemporary Iowa-class fast battleships, their speed ultimately made them more useful as carrier escorts and bombardment ships than as the surface combatants they were developed to be.",
"title": "World War II"
},
{
"paragraph_id": 60,
"text": "The Japanese started designing the B64 class, which was similar to the Alaska but with 310-millimetre (12.2 in) guns. News of the Alaskas led them to upgrade the design, creating Design B-65. Armed with 356 mm guns, the B65s would have been the best armed of the new breed of battlecruisers, but they still would have had only sufficient protection to keep out eight-inch shells. Much like the Dutch, the Japanese got as far as completing the design for the B65s, but never laid them down. By the time the designs were ready the Japanese Navy recognized that they had little use for the vessels and that their priority for construction should lie with aircraft carriers. Like the Alaskas, the Japanese did not call these ships battlecruisers, referring to them instead as super-heavy cruisers.",
"title": "World War II"
},
{
"paragraph_id": 61,
"text": "In spite of the fact that most navies abandoned the battleship and battlecruiser concepts after World War II, Joseph Stalin's fondness for big-gun-armed warships caused the Soviet Union to plan a large cruiser class in the late 1940s. In the Soviet Navy, they were termed \"heavy cruisers\" (tjazholyj krejser). The fruits of this program were the Project 82 (Stalingrad) cruisers, of 36,500 tonnes (35,900 long tons) standard load, nine 305 mm (12 in) guns and a speed of 35 knots (65 km/h; 40 mph). Three ships were laid down in 1951–1952, but they were cancelled in April 1953 after Stalin's death. Only the central armoured hull section of the first ship, Stalingrad, was launched in 1954 and then used as a target.",
"title": "Cold War–era designs"
},
{
"paragraph_id": 62,
"text": "The Soviet Kirov class is sometimes referred to as a battlecruiser. This description arises from their over 24,000-tonne (24,000-long-ton) displacement, which is roughly equal to that of a First World War battleship and more than twice the displacement of contemporary cruisers; upon entry into service, Kirov was the largest surface combatant to be built since World War II. The Kirov class lacks the armour that distinguishes battlecruisers from ordinary cruisers and they are classified as heavy nuclear-powered missile cruisers (T'Yazholiy atomn'iy raketn'iy Krey'Ser) by Russia, with their primary surface armament consisting of twenty P-700 Granit surface to surface missiles. Four members of the class were completed during the 1980s and 1990s, but due to budget constraints only the Pyotr Velikiy is operational with the Russian Navy, though plans were announced in 2010 to return the other three ships to service. As of 2021, Admiral Nakhimov was being refitted, but the other two ships are reportedly beyond economical repair.",
"title": "Cold War–era designs"
}
] | The battlecruiser was a type of capital ship of the first half of the 20th century. These were similar in displacement, armament and cost to battleships, but differed in form and balance of attributes. Battlecruisers typically had thinner armour and a somewhat lighter main gun battery than contemporary battleships, installed on a longer hull with much higher engine power in order to attain greater speeds. The first battlecruisers were designed in the United Kingdom, as a development of the armoured cruiser, at the same time as the dreadnought succeeded the pre-dreadnought battleship. The goal of the design was to outrun any ship with similar armament, and chase down any ship with lesser armament; they were intended to hunt down slower, older armoured cruisers and destroy them with heavy gunfire while avoiding combat with the more powerful but slower battleships. However, as more and more battlecruisers were built, they were increasingly used alongside the better-protected battleships. Battlecruisers served in the navies of the United Kingdom, Germany, the Ottoman Empire, Australia and Japan during World War I, most notably at the Battle of the Falkland Islands and in the several raids and skirmishes in the North Sea which culminated in a pitched fleet battle, the Battle of Jutland. British battlecruisers in particular suffered heavy losses at Jutland, where poor fire safety and ammunition handling practices left them vulnerable to catastrophic magazine explosions following hits to their main turrets from large-calibre shells. This dismal showing led to a persistent general belief that battlecruisers were too thinly armoured to function successfully. By the end of the war, capital ship design had developed, with battleships becoming faster and battlecruisers becoming more heavily armoured, blurring the distinction between a battlecruiser and a fast battleship. The Washington Naval Treaty, which limited capital ship construction from 1922 onwards, treated battleships and battlecruisers identically, and the new generation of battlecruisers planned by the United States, Great Britain and Japan were scrapped or converted into aircraft carriers under the terms of the treaty. Improvements in armour design and propulsion created the 1930s "fast battleship" with the speed of a battlecruiser and armour of a battleship, making the battlecruiser in the traditional sense effectively an obsolete concept. Thus from the 1930s on, only the Royal Navy continued to use "battlecruiser" as a classification for the World War I–era capital ships that remained in the fleet; while Japan's battlecruisers remained in service, they had been significantly reconstructed and were re-rated as full-fledged fast battleships. Battlecruisers were put into action again during World War II, and only one survived to the end. There was also renewed interest in large "cruiser-killer" type warships, but few were ever begun, as construction of battleships and battlecruisers was curtailed in favor of more-needed convoy escorts, aircraft carriers, and cargo ships. Near the end, and after the Cold War era, the Soviet Kirov class of large guided missile cruisers have been the only active ships termed "battlecruisers". | 2001-08-18T00:24:09Z | 2023-11-06T04:36:32Z | [
"Template:See also",
"Template:Portal",
"Template:Cite journal",
"Template:Warship types of the 19th & 20th centuries",
"Template:Authority control",
"Template:Convert",
"Template:SMS",
"Template:HMAS",
"Template:USS",
"Template:Further",
"Template:Navy",
"Template:Naval",
"Template:Reflist",
"Template:HMS",
"Template:Refn",
"Template:Cite book",
"Template:Sclass",
"Template:'",
"Template:Short description",
"Template:Good article",
"Template:Ship",
"Template:Cite web",
"Template:Commons category",
"Template:Large cruisers",
"Template:Blockquote",
"Template:Sclass2"
] | https://en.wikipedia.org/wiki/Battlecruiser |
4,059 | Bob Hawke | Robert James Lee Hawke AC GCL (9 December 1929 – 16 May 2019) was an Australian politician and trade unionist who served as the 23rd prime minister of Australia from 1983 to 1991. He held office as the leader of the Australian Labor Party (ALP), having previously served as the president of the Australian Council of Trade Unions from 1969 to 1980 and president of the Labor Party national executive from 1973 to 1978.
Hawke was born in Bordertown, South Australia. He attended the University of Western Australia and went on to study at University College, Oxford, as a Rhodes Scholar. In 1956, Hawke joined the Australian Council of Trade Unions (ACTU) as a research officer. Having risen to become responsible for national wage case arbitration, he was elected as president of the ACTU in 1969, where he achieved a high public profile. In 1973, he was appointed as president of the Labor Party.
In 1980, Hawke stood down from his roles as ACTU and Labor Party president to announce his intention to enter parliamentary politics, and was subsequently elected to the Australian House of Representatives as a member of parliament (MP) for the division of Wills at the 1980 federal election. Three years later, he was elected unopposed to replace Bill Hayden as leader of the Australian Labor Party; within five weeks he led Labor to a landslide victory at the 1983 election and was sworn in as prime minister. He went on to lead Labor to victory three more times, with successful outcomes in the 1984, 1987 and 1990 elections, making him the most electorally successful prime minister in the history of the Labor Party.
The Hawke government implemented a significant number of reforms, including major economic reforms, the establishment of Landcare, the introduction of the universal healthcare scheme Medicare, brokering the Prices and Incomes Accord, creating APEC, floating the Australian dollar, deregulating the financial sector, introducing the Family Assistance Scheme, enacting the Sex Discrimination Act to prevent discrimination in the workplace, declaring "Advance Australia Fair" as the country's national anthem, initiating superannuation pension schemes for all workers, negotiating a ban on mining in Antarctica and overseeing passage of the Australia Act that removed all remaining jurisdiction by the United Kingdom from Australia.
In June 1991, Hawke faced a leadership challenge by the Treasurer, Paul Keating, but Hawke managed to retain power; however, Keating mounted a second challenge six months later, and won narrowly, replacing Hawke as prime minister. Hawke subsequently retired from parliament, pursuing both a business career and a number of charitable causes, until his death in 2019, aged 89. Hawke remains his party's longest-serving prime minister, and Australia's third-longest-serving prime minister behind Robert Menzies and John Howard. He is also the only prime minister to be born in South Australia and the only one raised and educated in Western Australia. Hawke holds the highest-ever approval rating for an Australian prime minister, reaching 75% in 1984. Hawke is frequently ranked within the upper tier of Australian prime ministers by historians.
Bob Hawke was born on 9 December 1929 in Bordertown, South Australia, the second child of Arthur "Clem" Hawke (1898–1989), a Congregationalist minister, and his wife Edith Emily (née Lee; 1897–1979), known as Ellie, a schoolteacher. His uncle, Albert, was the Labor premier of Western Australia between 1953 and 1959.
Hawke's brother Neil, who was seven years his senior, died at the age of seventeen after contracting meningitis, for which there was no cure at the time. Ellie Hawke subsequently developed an almost messianic belief in her son's destiny, and this contributed to Hawke's supreme self-confidence throughout his career. At the age of fifteen, he presciently boasted to friends that he would one day become the prime minister of Australia.
At the age of seventeen, the same age at which his brother Neil had died, Hawke had a serious crash while riding his Panther motorcycle that left him in a critical condition for several days. This near-death experience acted as a catalyst, driving him to make the most of his talents and not let his abilities go to waste. He joined the Labor Party in 1947 at the age of eighteen.
Hawke was educated at West Leederville State School, Perth Modern School and the University of Western Australia, graduating in 1952 with Bachelor of Arts and Bachelor of Laws degrees. He was also president of the university's guild during the same year. The following year, Hawke won a Rhodes Scholarship to attend University College, Oxford, where he began a Bachelor of Arts course in philosophy, politics and economics (PPE). He soon found he was covering much the same ground as he had in his education at the University of Western Australia, and transferred to a Bachelor of Letters course. He wrote his thesis on wage-fixing in Australia and successfully presented it in January 1956.
In 1956, Hawke accepted a scholarship to undertake doctoral studies in the area of arbitration law in the law department at the Australian National University in Canberra. Soon after his arrival at ANU, Hawke became the students' representative on the University Council. A year later, Hawke was recommended to the President of the ACTU to become a research officer, replacing Harold Souter who had become ACTU Secretary. The recommendation was made by Hawke's mentor at ANU, H. P. Brown, who for a number of years had assisted the ACTU in national wage cases. Hawke decided to abandon his doctoral studies and accept the offer, moving to Melbourne with his wife Hazel.
Hawke is well known for a "world record" allegedly achieved at Oxford University for a beer skol (scull) of a yard of ale in 11 seconds. The record is widely regarded as having been important to his career and ocker chic image. A recent historical journal article describes the record as "possibly fabricated" and "cultural propaganda" designed to make Hawke appealing to unionised workers and nationalistic middle-class voters. The article demonstrates that "the record is apocryphal: its location and time remain uncertain; there are no known witnesses; the field of competition was exclusive and with no scientific accountability; the record was first published in a beer pamphlet; and Hawke's recollections were unreliable."
Not long after Hawke began work at the ACTU, he became responsible for the presentation of its annual case for higher wages to the national wages tribunal, the Commonwealth Conciliation and Arbitration Commission. He was first appointed as an ACTU advocate in 1959. The 1958 case, under previous advocate R.L. Eggleston, had yielded only a five-shilling increase. The 1959 case found for a fifteen-shilling increase, and was regarded as a personal triumph for Hawke. He went on to attain such success and prominence in his role as an ACTU advocate that, in 1969, he was encouraged to run for the position of ACTU President, despite the fact that he had never held elected office in a trade union.
He was elected ACTU President in 1969 on a modernising platform by the narrow margin of 399 to 350, with the support of the left of the union movement, including some associated with the Communist Party of Australia. He later credited Ray Gietzelt, General Secretary of the FMWU, as the single most significant union figure in helping him achieve this outcome. Questioned after his election on his political stance, Hawke stated that "socialist is not a word I would use to describe myself", saying instead his approach to politics was pragmatic. His commitment to the cause of Jewish Refuseniks purportedly led to a planned assassination attempt on Hawke by the Popular Front for the Liberation of Palestine, and its Australian operative Munif Mohammed Abou Rish.
In 1971, Hawke, along with other members of the ACTU, requested that South Africa send a non-racially biased team for the rugby union tour, with unions agreeing that they would otherwise refuse to serve the team in Australia. Prior to arrival, the Western Australian branch of the Transport Workers' Union, and the Barmaids' and Barmens' Union, announced that they would serve the team, which allowed the Springboks to land in Perth. The tour commenced on 26 June and riots occurred as anti-apartheid protesters disrupted games. Hawke and his family began to receive malicious mail and phone calls from people who thought that sport and politics should not mix. Hawke remained committed to the ban on apartheid teams, and later that year the South African cricket team was successfully denied entry; no team representing apartheid South Africa ever came to Australia again. It was this ongoing dedication to racial equality in South Africa that would later earn Hawke the respect and friendship of Nelson Mandela.
In industrial matters, Hawke continued to demonstrate a preference for, and considerable skill at, negotiation, and was generally liked and respected by employers as well as the unions he advocated for. As early as 1972, speculation began that he would seek to enter the Parliament of Australia and eventually run to become the Leader of the Australian Labor Party. But while his professional career continued successfully, his heavy drinking and womanising placed considerable strains on his family life.
In June 1973, Hawke was elected as the Federal President of the Labor Party. Two years later, when the Whitlam government was controversially dismissed by the Governor-General, Hawke showed an initial keenness to enter Parliament at the ensuing election. Harry Jenkins, the MP for Scullin, came under pressure to step down to allow Hawke to stand in his place, but he strongly resisted this push. Hawke eventually decided not to attempt to enter Parliament at that time, a decision he soon regretted. After Labor was defeated at the election, Whitlam initially offered the leadership to Hawke, although it was not within Whitlam's power to decide who would succeed him. Despite not taking on the offer, Hawke remained influential, playing a key role in averting national strike action.
During the 1977 federal election, he emerged as a strident opponent of accepting Vietnamese boat people as refugees into Australia, stating that they should be subject to normal immigration requirements and should otherwise be deported. He further stated only refugees selected off-shore should be accepted.
Hawke resigned as President of the Labor Party in August 1978. Neil Batt was elected in his place. The strain of this period took its toll on Hawke and in 1979 he suffered a physical collapse. This shock led Hawke to announce publicly, in a television interview, that he was an alcoholic and that he would make a concerted—and ultimately successful—effort to overcome it. He was helped through this period by the relationship that he had established with writer Blanche d'Alpuget, who, in 1982, published a biography of Hawke. His popularity with the public was, if anything, enhanced by this period of rehabilitation, and opinion polling suggested that he was a more popular public figure than either Labor Leader Bill Hayden or Liberal Prime Minister Malcolm Fraser.
During the period of 1973 to 1979, Hawke acted as an informant for the United States government. During his time as ACTU leader, Hawke informed the US of details surrounding labour disputes, especially those relating to American companies and individuals, such as union disputes with Ford Motor Company and the black ban of Frank Sinatra. The major industrial action taken against Sinatra came about because Sinatra had made sexist comments against female journalists. The dispute was the subject of the 2003 film The Night We Called It a Day.
In retaliation, unions grounded Sinatra's private jet in Melbourne, demanding he apologise. The popular view was that Mr Hawke engaged in protracted, boozy negotiations with Ol' Blue Eyes to reach a settlement. The [diplomatic] cables say the US embassy reached a deal with Mr Hawke to end the standoff, no apology was sought from Sinatra and that most of Mr Hawke's time was spent with the singer's lawyer.
Hawke was described by US diplomats as "a bulwark against anti-American sentiment and resurgent communism during the economic turmoil of the 1970s", and often disputed with the Whitlam government over issues of foreign policy and industrial relations. With the knowledge of US diplomats, Hawke secretly planned to leave Labor in 1974 to form a new centrist political party to challenge the Whitlam government. This plan had the support of Rupert Murdoch and Hawke's confidant, Peter Abeles, but did not eventuate because of the events of 1975. US diplomats played a major role in shaping Hawke's consensus politics and economics.
Hawke's first attempt to enter Parliament came during the 1963 federal election. He stood in the seat of Corio in Geelong and managed to achieve a 3.1% swing against the national trend, although he fell short of ousting longtime Liberal incumbent Hubert Opperman. Hawke rejected several opportunities to enter Parliament throughout the 1970s, something he later wrote that he "regretted". He eventually stood for election to the House of Representatives at the 1980 election for the safe Melbourne seat of Wills, winning it comfortably. Immediately upon his election to Parliament, Hawke was appointed to the Shadow Cabinet by Labor Leader Bill Hayden as Shadow Minister for Industrial Relations.
Hayden, having led the Labor Party to a narrow loss at the 1980 election, was increasingly subject to criticism from Labor MPs over his leadership style. To quell speculation over his position, Hayden called a leadership spill on 16 July 1982, believing that if he won he would be guaranteed to lead Labor through to the next election. Hawke decided to challenge Hayden in the spill, but Hayden defeated him by five votes; the margin of victory, however, was too slim to dispel doubts that he could lead the Labor Party to victory at an election. Despite his defeat, Hawke began to agitate more seriously behind the scenes for a change in leadership, with opinion polls continuing to show that Hawke was a far more popular public figure than both Hayden and Prime Minister Malcolm Fraser. Hayden was further weakened after Labor's unexpectedly poor performance at a by-election in December 1982 for the Victorian seat of Flinders, following the resignation of the sitting member, former deputy Liberal leader Phillip Lynch. Labor needed a swing of 5.5% to win the seat and had been predicted by the media to win, but could only achieve 3%.
Labor Party power-brokers, such as Graham Richardson and Barrie Unsworth, now openly switched their allegiance from Hayden to Hawke. More significantly, Hayden's staunch friend and political ally, Labor's Senate Leader John Button, had become convinced that Hawke's chances of victory at an election were greater than Hayden's. Initially, Hayden believed that he could remain in his job, but Button's defection proved to be the final straw in convincing Hayden that he would have to resign as Labor Leader. Less than two months after the Flinders by-election result, Hayden announced his resignation as Leader of the Labor Party on 3 February 1983. Hawke was subsequently elected as Leader unopposed on 8 February, and became Leader of the Opposition in the process. Having learned that morning of the possible leadership change, Malcolm Fraser called a snap election for 5 March 1983 on the same day that Hawke assumed the leadership of the Labor Party, attempting to prevent Labor from completing the change; however, he was unable to have the Governor-General confirm the election before Labor announced the change.
At the 1983 election, Hawke led Labor to a landslide victory, achieving a 24-seat swing and ending seven years of Liberal Party rule.
Because the election was called at the same time that Hawke became Labor leader, he never sat in Parliament as Leader of the Opposition, having spent the entirety of his short Opposition leadership in the election campaign, which he won.
After Labor's landslide victory, Hawke was sworn in as the Prime Minister by the Governor-General Ninian Stephen on 11 March 1983. The style of the Hawke government was deliberately distinct from the Whitlam government, the most recent Labor government that preceded it. Rather than immediately initiating multiple extensive reform programs as Whitlam had, Hawke announced that Malcolm Fraser's pre-election concealment of the budget deficit meant that many of Labor's election commitments would have to be deferred. As part of his internal reforms package, Hawke divided the government into two tiers, with only the most senior ministers sitting in the Cabinet of Australia. The Labor caucus was still given the authority to determine who would make up the Ministry, but this move gave Hawke unprecedented scope to empower individual ministers.
In particular, the political partnership that developed between Hawke and his Treasurer, Paul Keating, proved to be essential to Labor's success in government, with multiple Labor figures in years since citing the partnership as the party's greatest ever. The two men proved a study in contrasts: Hawke was a Rhodes Scholar; Keating left high school early. Hawke's enthusiasms were cigars, betting and most forms of sport; Keating preferred classical architecture, Mahler symphonies and collecting British Regency and French Empire antiques. Despite not knowing one another before Hawke assumed the leadership in 1983, the two formed a personal as well as political relationship which enabled the Government to pursue a significant number of reforms, although there were occasional points of tension between the two.
The Labor Caucus under Hawke also developed a more formalised system of parliamentary factions, which significantly altered the dynamics of caucus operations. Unlike that of many of his predecessors, Hawke's authority within the Labor Party was absolute. This enabled him to persuade MPs to support a substantial set of policy changes which had not been considered achievable by Labor governments in the past. Individual accounts from ministers indicate that while Hawke was not often the driving force behind individual reforms, outside of broader economic changes, he took on the role of providing political guidance on what was electorally feasible and how best to sell it to the public, tasks at which he proved highly successful. Hawke took on a very public role as Prime Minister, campaigning frequently even outside of election periods, and for much of his time in office proved to be immensely popular with the Australian electorate; to this day he still holds the highest ever AC Nielsen approval rating of 75%.
The Hawke government oversaw significant economic reforms, and is often cited by economic historians as being a "turning point" from a protectionist, agricultural model to a more globalised and services-oriented economy. According to the journalist Paul Kelly, "the most influential economic decisions of the 1980s were the floating of the Australian dollar and the deregulation of the financial system". Although the Fraser government had played a part in the process of financial deregulation by commissioning the 1981 Campbell Report, opposition from Fraser himself had stalled this process. Shortly after its election in 1983, the Hawke government took the opportunity to implement a comprehensive program of economic reform, in the process "transform(ing) economics and politics in Australia".
Hawke and Keating together led the process for overseeing the economic changes by launching a "National Economic Summit" one month after their election in 1983, which brought business and industrial leaders together with politicians and trade union leaders; the three-day summit led to a unanimous adoption of a national economic strategy, generating sufficient political capital for widespread reform to follow. Among other reforms, the Hawke government floated the Australian dollar, repealed rules that prohibited foreign-owned banks from operating in Australia, dismantled the protectionist tariff system, privatised several state sector industries, ended the subsidisation of loss-making industries, and sold off part of the state-owned Commonwealth Bank.
The taxation system was also significantly reformed, with income tax rates reduced and the introduction of a fringe benefits tax and a capital gains tax; the latter two reforms were strongly opposed by the Liberal Party at the time, but were never reversed by them when they eventually returned to office in 1996. Partially offsetting these imposts upon the business community—the "main loser" from the 1985 Tax Summit according to Paul Kelly—was the introduction of full dividend imputation, a reform insisted upon by Keating. Funding for schools was also considerably increased as part of this package, while financial assistance was provided for students to enable them to stay at school longer; the number of Australian children completing school rose from 3 in 10 at the beginning of the Hawke government to 7 in 10 by its conclusion in 1991. Considerable progress was also made in directing assistance "to the most disadvantaged recipients over the whole range of welfare benefits."
Although criticisms were leveled against the Hawke government that it did not achieve all it said it would do on social policy, it nevertheless enacted a series of reforms which remain in place to the present day. From 1983 to 1989, the Government oversaw the permanent establishment of universal health care in Australia with the creation of Medicare, doubled the number of subsidised childcare places, began the introduction of occupational superannuation, oversaw a significant increase in school retention rates, created subsidised homecare services, oversaw the elimination of poverty traps in the welfare system, increased the real value of the old-age pension, reintroduced the six-monthly indexation of single-person unemployment benefits, and established a wide-ranging programme for paid family support, known as the Family Income Supplement. During the 1980s, the proportion of total government outlays allocated to families, the sick, single parents, widows, the handicapped, and veterans was significantly higher than under the previous Fraser and Whitlam governments.
In 1984, the Hawke government enacted the landmark Sex Discrimination Act 1984, which eliminated discrimination on the grounds of sex within the workplace. In 1989, Hawke oversaw the gradual re-introduction of some tuition fees for university study, setting up the Higher Education Contribution Scheme (HECS). Under the original HECS, a $1,800 fee was charged to all university students, and the Commonwealth paid the balance. A student could defer payment of this HECS amount and repay the debt through the tax system once their income exceeded a threshold level. As part of the reforms, Colleges of Advanced Education entered the university sector by various means, allowing university places to be expanded. Further notable policy decisions taken during the Government's time in office included the public health campaign regarding HIV/AIDS, and Indigenous land rights reform, with an investigation of the idea of a treaty between Aborigines and the Government being launched, although the latter would be overtaken by events, notably the Mabo court decision.
The Hawke government also drew attention for a series of notable environmental decisions, particularly in its second and third terms. In 1983, Hawke personally vetoed the construction of the Franklin Dam in Tasmania, responding to a groundswell of protest around the issue. Hawke also secured the nomination of the Wet Tropics of Queensland as a UNESCO World Heritage Site in 1987, preventing the forests there from being logged. Hawke would later appoint Graham Richardson as Environment Minister, tasking him with winning second-preference support from environmental parties, something which Richardson later claimed was the major factor in the government's narrow re-election at the 1990 election. In the Government's fourth term, Hawke personally led the Australian delegation to secure changes to the Protocol on Environmental Protection to the Antarctic Treaty, ultimately winning a guarantee that drilling for minerals within Antarctica would be totally prohibited until 2048 at the earliest. Hawke later claimed that the Antarctic drilling ban was his "proudest achievement".
As a former ACTU President, Hawke was well-placed to engage in reform of the industrial relations system in Australia, taking a lead in this policy area as he did in few others. Working closely with ministerial colleagues and the ACTU Secretary, Bill Kelty, Hawke negotiated with trade unions to establish the Prices and Incomes Accord in 1983, an agreement whereby unions agreed to restrict their demands for wage increases, and in turn the Government guaranteed to both minimise inflation and promote an increased social wage, including by establishing new social programmes such as Medicare.
Inflation had been a significant issue for the decade prior to the election of the Hawke government, regularly running into double digits. The process of the Accord, by which the Government and trade unions would arbitrate and agree upon wage increases in many sectors, led to a decrease in both inflation and unemployment through to 1990. Criticisms of the Accord would come from both the right and the left of politics. Left-wing critics claimed that it kept real wages stagnant, and that the Accord was a policy of class collaboration and corporatism. By contrast, right-wing critics claimed that the Accord reduced the flexibility of the wages system. Supporters of the Accord, however, pointed to the improvements in the social security system that occurred, including the introduction of rental assistance for social security recipients, the creation of labour market schemes such as NewStart, and the introduction of the Family Income Supplement. In 1986, the Hawke government passed a bill to de-register the Builders Labourers Federation federally after the union failed to follow the Accord agreements.
Despite a fall in real wages from 1983 to 1991, the Government argued that the social wage of Australian workers had improved markedly as a result of these reforms and the ensuing decline in inflation. The Accord was revisited six further times during the Hawke government, each time in response to new economic developments. The seventh and final revision would ultimately lead to the establishment of the enterprise bargaining system, although this would be finalised shortly after Hawke left office in 1991.
Arguably the most significant foreign policy achievement of the Government took place in 1989, after Hawke proposed a region-wide forum for Asia-Pacific leaders and economic ministers to discuss issues of common concern. After winning the support of key countries in the region, this led to the creation of the Asia-Pacific Economic Cooperation (APEC). The first APEC meeting duly took place in Canberra in November 1989; the economic ministers of Australia, Brunei, Canada, Indonesia, Japan, South Korea, Malaysia, New Zealand, the Philippines, Singapore, Thailand and the United States all attended. APEC would subsequently grow to become one of the pre-eminent high-level international forums in the world, particularly after the later inclusions of China and Russia, and the Keating government's establishment of the APEC Leaders' Forum.
Elsewhere in Asia, the Hawke government played a significant role in the build-up to the United Nations peace process for Cambodia, culminating in the Transitional Authority; Hawke's Foreign Minister Gareth Evans was nominated for the Nobel Peace Prize for his role in negotiations. Hawke also took a major public stand after the 1989 Tiananmen Square protests and massacre; despite having spent years cultivating closer relations with China, Hawke gave a tearful address on national television describing the massacre in graphic detail, and unilaterally offered asylum to over 42,000 Chinese students who were living in Australia at the time, many of whom had publicly supported the Tiananmen protesters. Hawke did so without consulting his Cabinet, stating later that he felt he simply had to act.
The Hawke government pursued a close relationship with the United States, assisted by Hawke's close friendship with US Secretary of State George Shultz; this led to a degree of controversy when the Government supported US plans to test ballistic missiles off the coast of Tasmania in 1985, and when it sought to overturn Australia's long-standing ban on uranium exports. Although the US ultimately withdrew the plans to test the missiles, the furore led to a fall in Hawke's approval ratings. Shortly after the 1990 election, Hawke would lead Australia into its first overseas military campaign since the Vietnam War, forming a close alliance with US President George H. W. Bush to join the coalition in the Gulf War. The Royal Australian Navy contributed several destroyers and frigates to the war effort, which successfully concluded in February 1991 with the expulsion of Iraqi forces from Kuwait. The success of the campaign, and the lack of any Australian casualties, led to a brief increase in the popularity of the Government.
Through the Commonwealth Heads of Government Meetings, Hawke played a leading role in ensuring that the Commonwealth initiated an international boycott on foreign investment into South Africa, building on work undertaken by his predecessor Malcolm Fraser, and in the process clashing publicly with Prime Minister of the United Kingdom Margaret Thatcher, who initially favoured a more cautious approach. The resulting boycott, led by the Commonwealth, was widely credited with helping bring about the collapse of apartheid, and resulted in a high-profile visit by Nelson Mandela in October 1990, months after Mandela's release from 27 years in prison. During the visit, Mandela publicly thanked the Hawke government for the role it played in the boycott.
Hawke benefited greatly from the disarray into which the Liberal Party fell after the resignation of Fraser following the 1983 election. The Liberals were torn between supporters of the more conservative John Howard and the more liberal Andrew Peacock, with the pair frequently contesting the leadership. Hawke and Keating were also able to use the concealment of the size of the budget deficit by Fraser before the 1983 election to great effect, damaging the Liberal Party's economic credibility as a result.
However, Hawke's time as Prime Minister also saw friction develop between himself and the grassroots of the Labor Party, many of whom were unhappy at what they viewed as Hawke's iconoclasm and willingness to cooperate with business interests. Hawke regularly and publicly expressed his willingness to cull Labor's "sacred cows". The Labor Left faction, as well as prominent Labor backbencher Barry Jones, offered repeated criticisms of a number of government decisions. Hawke was also subject to challenges from some former colleagues in the trade union movement over his "confrontationalist style" in siding with the airline companies in the 1989 Australian pilots' strike.
Nevertheless, Hawke was able to comfortably maintain a lead as preferred prime minister in the vast majority of opinion polls carried out throughout his time in office. He recorded the highest popularity rating ever measured by an Australian opinion poll, reaching 75% approval in 1984. After leading Labor to a comfortable victory in the snap 1984 election, called to bring the mandate of the House of Representatives back in line with the Senate, Hawke was able to secure an unprecedented third consecutive term for Labor with a landslide victory in the double dissolution election of 1987. Hawke was subsequently able to lead the nation in the bicentennial celebrations of 1988, culminating with him welcoming Queen Elizabeth II to open the newly constructed Parliament House.
The late-1980s recession, and the accompanying high interest rates, saw the Government fall in opinion polls, with many doubting that Hawke could win a fourth election. Keating, who had long understood that he would eventually succeed Hawke as prime minister, began to plan a leadership change; at the end of 1988, Keating put pressure on Hawke to retire in the new year. Hawke rejected this suggestion but reached a secret agreement with Keating, the so-called "Kirribilli Agreement", stating that he would step down in Keating's favour at some point after the 1990 election. Hawke subsequently won that election, in the process leading Labor to a record fourth consecutive electoral victory, albeit by a slim margin. Hawke appointed Keating as deputy prime minister to replace the retiring Lionel Bowen.
By the end of 1990, frustrated by the lack of any indication from Hawke as to when he might retire, Keating made a provocative speech to the Federal Parliamentary Press Gallery. Hawke considered the speech disloyal, and told Keating he would renege on the Kirribilli Agreement as a result. After attempting to force a resolution privately, Keating finally resigned from the Government in June 1991 to challenge Hawke for the leadership. His resignation came soon after Hawke vetoed in Cabinet a proposal backed by Keating and other ministers for mining to take place at Coronation Hill in Kakadu National Park. Hawke won the leadership spill, and in a press conference after the result, Keating declared that he had fired his "one shot" on the leadership. Hawke appointed John Kerin to replace Keating as Treasurer.
Despite his victory in the June spill, Hawke quickly began to be regarded by many of his colleagues as a "wounded" leader; he had now lost his long-term political partner, his ratings in opinion polls were beginning to fall significantly, and after nearly nine years as Prime Minister, there was speculation that it would soon be time for a new leader. Hawke's leadership was irrevocably damaged at the end of 1991; after Liberal Leader John Hewson released 'Fightback!', a detailed proposal for sweeping economic change, including the introduction of a goods and services tax, Hawke was forced to sack Kerin as Treasurer after the latter made a public gaffe attempting to attack the policy. Keating duly challenged for the leadership a second time on 19 December, arguing that he would be better placed to defeat Hewson; this time, Keating succeeded, narrowly defeating Hawke by 56 votes to 51.
In a speech to the House of Representatives following the vote, Hawke declared that his nine years as prime minister had left Australia a better and wealthier country, and he was given a standing ovation by those present. He subsequently tendered his resignation to the Governor-General and pledged support to his successor. Hawke briefly returned to the backbench, before resigning from Parliament on 20 February 1992, sparking a by-election which was won by the independent candidate Phil Cleary from among a record field of 22 candidates. Keating would go on to lead Labor to a fifth victory at the 1993 election, although he was defeated by the Liberal Party at the 1996 election.
Hawke wrote that he had very few regrets over his time in office, although he stated he wished he had been able to advance the cause of Indigenous land rights further. His bitterness towards Keating over the leadership challenges surfaced in his earlier memoirs, although by the 2000s Hawke stated he and Keating had buried their differences, and that they regularly dined together and considered each other friends. The publication of the book Hawke: The Prime Minister, by Hawke's second wife, Blanche d'Alpuget, in 2010, reignited conflict between the two, with Keating accusing Hawke and d'Alpuget of spreading falsehoods about his role in the Hawke government. Despite this, the two campaigned together for Labor several times, including at the 2019 election, where they released their first joint article in nearly three decades; Craig Emerson, who worked for both men, said they had reconciled in later years after Hawke grew ill.
After leaving Parliament, Hawke entered the business world, taking on a number of directorships and consultancy positions which enabled him to achieve considerable financial success. He avoided public involvement with the Labor Party during Keating's tenure as Prime Minister, not wanting to be seen as attempting to overshadow his successor. After Keating's defeat and the election of the Howard government at the 1996 election, he returned to public campaigning with Labor, regularly appearing at election launches. Despite his personal affection for Queen Elizabeth II, boasting that he had been her "favourite Prime Minister", Hawke was an enthusiastic republican and joined the campaign for a Yes vote in the 1999 republic referendum.
In 2002, Hawke was named to South Australia's Economic Development Board during the Rann government. In the lead-up to the 2007 election, Hawke made a considerable personal effort to support Kevin Rudd, making speeches at a large number of campaign office openings across Australia, and appearing in multiple campaign advertisements. As well as campaigning against WorkChoices, Hawke also attacked John Howard's record as Treasurer, stating "it was the judgement of every economist and international financial institution that it was the restructuring reforms undertaken by my government, with the full cooperation of the trade union movement, which created the strength of the Australian economy today". In February 2008, after Rudd's victory, Hawke joined former Prime Ministers Gough Whitlam, Malcolm Fraser and Paul Keating in Parliament House to witness the long-anticipated apology to the Stolen Generations.
In 2009, Hawke helped establish the Centre for Muslim and Non-Muslim Understanding at the University of South Australia. Interfaith dialogue was an important issue for Hawke, who told The Adelaide Review that he was "convinced that one of the great potential dangers confronting the world is the lack of understanding in regard to the Muslim world. Fanatics have misrepresented what Islam is. They give a false impression of the essential nature of Islam."
In 2016, after taking part in Andrew Denton's Better Off Dead podcast, Hawke added his voice to calls for voluntary euthanasia to be legalised, labelling the lack of political will to fix the problem 'absurd'. He revealed that he and his wife Blanche had made such an arrangement should a devastating medical situation occur. He also publicly advocated for nuclear power and the importation of international spent nuclear fuel to Australia for storage and disposal, stating that this could lead to considerable economic benefits for Australia.
In late December 2018, Hawke revealed that he was in "terrible health". While predicting a Labor win in the upcoming 2019 federal election, Hawke said he "may not witness the party's success". In May 2019, the month of the election, he issued a joint statement with Paul Keating endorsing Labor's economic plan and condemning the Liberal Party for "completely [giving] up the economic reform agenda". They stated that "Shorten's Labor is the only party of government focused on the need to modernise the economy to deal with the major challenge of our time: human induced climate change". It was the first joint press statement released by the two since 1991.
On 16 May 2019, two days before the election, Hawke died at his home in Northbridge at the age of 89, following a short illness. His family held a private cremation on 27 May at Macquarie Park Cemetery and Crematorium where he was subsequently interred. A state memorial was held at the Sydney Opera House on 14 June; speakers included Craig Emerson as master of ceremonies and Kim Beazley reading the eulogy, as well as Paul Keating, Julia Gillard, Bill Kelty, Ross Garnaut, and incumbent Prime Minister Scott Morrison and Opposition Leader Anthony Albanese.
Hawke married Hazel Masterson in 1956 at Perth Trinity Church. They had three children: Susan (born 1957), Stephen (born 1959) and Roslyn (born 1960). Their fourth child, Robert Jr, died in early infancy in 1963. Hawke was named Victorian Father of the Year in 1971, an honour which his wife disputed due to his heavy drinking and womanising. The couple divorced in 1995, after he left her for the writer Blanche d'Alpuget, and the two lived together in Northbridge, a suburb on Sydney's North Shore. The divorce estranged Hawke from some of his family for a period, although they had reconciled by the 2010s.
Throughout his early life, Hawke was a heavy drinker, having reputedly set a world record for drinking during his years as a student. Hawke suffered from alcohol poisoning following the death of his and Hazel's infant son in 1963. He publicly announced in 1980 that he would abstain from alcohol to seek election to Parliament, in a move which garnered significant public attention and support. Hawke began to drink again following his retirement from politics, although to a more manageable extent; in his later years, videos of Hawke downing beer at cricket matches would frequently go viral.
On the subject of religion, Hawke wrote, while attending the 1952 World Christian Youth Conference in India, that "there were all these poverty stricken kids at the gate of this palatial place where we were feeding our face and I just (was) struck by this enormous sense of irrelevance of religion to the needs of people". He subsequently abandoned his Christian beliefs. By the time he entered politics he was a self-described agnostic. Hawke told Andrew Denton in 2008 that his father's Christian faith had continued to influence his outlook, saying "My father said if you believe in the fatherhood of God you must necessarily believe in the brotherhood of man, it follows necessarily, and even though I left the church and was not religious, that truth remained with me."
Hawke was a supporter of National Rugby League club the Canberra Raiders.
A biographical television film, Hawke, premiered on the Ten Network in Australia on 18 July 2010, with Richard Roxburgh playing the title character. Rachael Blake and Felix Williamson portrayed Hazel Hawke and Paul Keating, respectively. Roxburgh reprised his role as Hawke in the 2020 episode "Terra Nullius" of the Netflix series The Crown.
In July 2019, the Australian Government announced it would spend $750,000 to purchase and renovate the house in Bordertown where Hawke was born and spent his early childhood. In January 2021, the Tatiara District Council decided to turn the house into tourist accommodation.
In December 2020, the Western Australian Government announced that it had purchased Hawke's childhood home in West Leederville and would maintain it as a state asset. The property will also be assessed for entry onto the State Register of Heritage Places.
The Australian Government pledged $5 million in July 2019 to establish a new annual scholarship—the Bob Hawke John Monash Scholarship—through the General Sir John Monash Foundation. Bob Hawke College, a high school in Subiaco, Western Australia named after Hawke, was opened in February 2020.
In March 2020, the Australian Electoral Commission announced that it would create a new Australian electoral division in the House of Representatives named in honour of Hawke. The Division of Hawke was first contested at the 2022 federal election, and is located in the state of Victoria, near the seat of Wills, which Hawke represented from 1980 to 1992.
Orders
Foreign honours
Fellowships
Honorary degrees | [
{
"paragraph_id": 0,
"text": "Robert James Lee Hawke AC GCL (9 December 1929 – 16 May 2019) was an Australian politician and trade unionist who served as the 23rd prime minister of Australia from 1983 to 1991. He held office as the leader of the Australian Labor Party (ALP), having previously served as the president of the Australian Council of Trade Unions from 1969 to 1980 and president of the Labor Party national executive from 1973 to 1978.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Hawke was born in Border Town, South Australia. He attended the University of Western Australia and went on to study at University College, Oxford as a Rhodes Scholar. In 1956, Hawke joined the Australian Council of Trade Unions (ACTU) as a research officer. Having risen to become responsible for national wage case arbitration, he was elected as president of the ACTU in 1969, where he achieved a high public profile. In 1973, he was appointed as president of the Labor Party.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In 1980, Hawke stood down from his roles as ACTU and Labor Party president to announce his intention to enter parliamentary politics, and was subsequently elected to the Australian House of Representatives as a member of parliament (MP) for the division of Wills at the 1980 federal election. Three years later, he was elected unopposed to replace Bill Hayden as leader of the Australian Labor Party, and within five weeks led Labor to a landslide victory at the 1983 election, and was sworn in as prime minister. He led Labor to victory three times, with successful outcomes in 1984, 1987 and 1990 elections, making him the most electorally successful prime minister in the history of the Labor Party.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Hawke government implemented a significant number of reforms, including major economic reforms, the establishment of Landcare, the introduction of the universal healthcare scheme Medicare, brokering the Prices and Incomes Accord, creating APEC, floating the Australian dollar, deregulating the financial sector, introducing the Family Assistance Scheme, enacting the Sex Discrimination Act to prevent discrimination in the workplace, declaring \"Advance Australia Fair\" as the country's national anthem, initiating superannuation pension schemes for all workers, negotiating a ban on mining in Antarctica and overseeing passage of the Australia Act that removed all remaining jurisdiction by the United Kingdom from Australia.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In June 1991, Hawke faced a leadership challenge by the Treasurer, Paul Keating, but Hawke managed to retain power; however, Keating mounted a second challenge six months later, and won narrowly, replacing Hawke as prime minister. Hawke subsequently retired from parliament, pursuing both a business career and a number of charitable causes, until his death in 2019, aged 89. Hawke remains his party's longest-serving Prime Minister, and Australia's third-longest-serving prime minister behind Robert Menzies and John Howard. He is also the only prime minister to be born in South Australia and the only one raised and educated in Western Australia. Hawke holds the highest-ever approval rating for an Australian prime minister, reaching 75% approval in 1984. Hawke is frequently ranked within the upper tier of Australian prime ministers by historians.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Bob Hawke was born on 9 December 1929 in Border Town, South Australia, the second child of Arthur \"Clem\" Hawke (1898–1989), a Congregationalist minister, and his wife Edith Emily (Lee) (1897–1979) (known as Ellie), a schoolteacher. His uncle, Albert, was the Labor premier of Western Australia between 1953 and 1959.",
"title": "Early life and family"
},
{
"paragraph_id": 6,
"text": "Hawke's brother Neil, who was seven years his senior, died at the age of seventeen after contracting meningitis, for which there was no cure at the time. Ellie Hawke subsequently developed an almost messianic belief in her son's destiny, and this contributed to Hawke's supreme self-confidence throughout his career. At the age of fifteen, he presciently boasted to friends that he would one day become the prime minister of Australia.",
"title": "Early life and family"
},
{
"paragraph_id": 7,
"text": "At the age of seventeen, the same age that his brother Neil had died, Hawke had a serious crash while riding his Panther motorcycle that left him in a critical condition for several days. This near-death experience acted as his catalyst, driving him to make the most of his talents and not let his abilities go to waste. He joined the Labor Party in 1947 at the age of eighteen.",
"title": "Early life and family"
},
{
"paragraph_id": 8,
"text": "Hawke was educated at West Leederville State School, Perth Modern School and the University of Western Australia, graduating in 1952 with Bachelor of Arts and Bachelor of Laws degrees. He was also president of the university's guild during the same year. The following year, Hawke won a Rhodes Scholarship to attend University College, Oxford, where he began a Bachelor of Arts course in philosophy, politics and economics (PPE). He soon found he was covering much the same ground as he had in his education at the University of Western Australia, and transferred to a Bachelor of Letters course. He wrote his thesis on wage-fixing in Australia and successfully presented it in January 1956.",
"title": "Education and early career"
},
{
"paragraph_id": 9,
"text": "In 1956, Hawke accepted a scholarship to undertake doctoral studies in the area of arbitration law in the law department at the Australian National University in Canberra. Soon after his arrival at ANU, Hawke became the students' representative on the University Council. A year later, Hawke was recommended to the President of the ACTU to become a research officer, replacing Harold Souter who had become ACTU Secretary. The recommendation was made by Hawke's mentor at ANU, H. P. Brown, who for a number of years had assisted the ACTU in national wage cases. Hawke decided to abandon his doctoral studies and accept the offer, moving to Melbourne with his wife Hazel.",
"title": "Education and early career"
},
{
"paragraph_id": 10,
"text": "Hawke is well known for a \"world record\" allegedly achieved at Oxford University for a beer skol (scull) of a yard of ale in 11 seconds. The record is widely regarded as having been important to his career and ocker chic image. A recent historical journal article describes the record as \"possibly fabricated\" and \"cultural propaganda\" designed to make Hawke appealing to unionised workers and nationalistic middle-class voters. The article demonstrates that \"the record is apocryphal: its location and time remain uncertain; there are no known witnesses; the field of competition was exclusive and with no scientific accountability; the record was first published in a beer pamphlet; and Hawke's recollections were unreliable.\"",
"title": "Education and early career"
},
{
"paragraph_id": 11,
"text": "Not long after Hawke began work at the ACTU, he became responsible for the presentation of its annual case for higher wages to the national wages tribunal, the Commonwealth Conciliation and Arbitration Commission. He was first appointed as an ACTU advocate in 1959. The 1958 case, under previous advocate R.L. Eggleston, had yielded only a five-shilling increase. The 1959 case found for a fifteen-shilling increase, and was regarded as a personal triumph for Hawke. He went on to attain such success and prominence in his role as an ACTU advocate that, in 1969, he was encouraged to run for the position of ACTU President, despite the fact that he had never held elected office in a trade union.",
"title": "Australian Council of Trade Unions"
},
{
"paragraph_id": 12,
"text": "He was elected ACTU President in 1969 on a modernising platform by the narrow margin of 399 to 350, with the support of the left of the union movement, including some associated with the Communist Party of Australia. He later credited Ray Gietzelt, General Secretary of the FMWU, as the single most significant union figure in helping him achieve this outcome. Questioned after his election on his political stance, Hawke stated that \"socialist is not a word I would use to describe myself\", saying instead his approach to politics was pragmatic. His commitment to the cause of Jewish Refuseniks purportedly led to a planned assassination attempt on Hawke by the Popular Front for the Liberation of Palestine, and its Australian operative Munif Mohammed Abou Rish.",
"title": "Australian Council of Trade Unions"
},
{
"paragraph_id": 13,
"text": "In 1971, Hawke along with other members of the ACTU requested that South Africa send a non-racially biased team for the rugby union tour, with the intention of unions agreeing not to serve the team in Australia. Prior to arrival, the Western Australian branch of the Transport Workers' Union, and the Barmaids' and Barmens' Union, announced that they would serve the team, which allowed the Springboks to land in Perth. The tour commenced on 26 June and riots occurred as anti-apartheid protesters disrupted games. Hawke and his family started to receive malicious mail and phone calls from people who thought that sport and politics should not mix. Hawke remained committed to the ban on apartheid teams and later that year, the South African cricket team was successfully denied and no apartheid team was to ever come to Australia again. It was this ongoing dedication to racial equality in South Africa that would later earn Hawke the respect and friendship of Nelson Mandela.",
"title": "Australian Council of Trade Unions"
},
{
"paragraph_id": 14,
"text": "In industrial matters, Hawke continued to demonstrate a preference for, and considerable skill at, negotiation, and was generally liked and respected by employers as well as the unions he advocated for. As early as 1972, speculation began that he would seek to enter the Parliament of Australia and eventually run to become the Leader of the Australian Labor Party. But while his professional career continued successfully, his heavy drinking and womanising placed considerable strains on his family life.",
"title": "Australian Council of Trade Unions"
},
{
"paragraph_id": 15,
"text": "In June 1973, Hawke was elected as the Federal President of the Labor Party. Two years later, when the Whitlam government was controversially dismissed by the Governor-General, Hawke showed an initial keenness to enter Parliament at the ensuing election. Harry Jenkins, the MP for Scullin, came under pressure to step down to allow Hawke to stand in his place, but he strongly resisted this push. Hawke eventually decided not to attempt to enter Parliament at that time, a decision he soon regretted. After Labor was defeated at the election, Whitlam initially offered the leadership to Hawke, although it was not within Whitlam's power to decide who would succeed him. Despite not taking on the offer, Hawke remained influential, playing a key role in averting national strike action.",
"title": "Australian Council of Trade Unions"
},
{
"paragraph_id": 16,
"text": "During the 1977 federal election, he emerged as a strident opponent of accepting Vietnamese boat people as refugees into Australia, stating that they should be subject to normal immigration requirements and should otherwise be deported. He further stated only refugees selected off-shore should be accepted.",
"title": "Australian Council of Trade Unions"
},
{
"paragraph_id": 17,
"text": "Hawke resigned as President of the Labor Party in August 1978. Neil Batt was elected in his place. The strain of this period took its toll on Hawke and in 1979 he suffered a physical collapse. This shock led Hawke to publicly announce his alcoholism in a television interview, and that he would make a concerted—and ultimately successful—effort to overcome it. He was helped through this period by the relationship that he had established with writer Blanche d'Alpuget, who, in 1982, published a biography of Hawke. His popularity with the public was, if anything, enhanced by this period of rehabilitation, and opinion polling suggested that he was a more popular public figure than either Labor Leader Bill Hayden or Liberal Prime Minister Malcolm Fraser.",
"title": "Australian Council of Trade Unions"
},
{
"paragraph_id": 18,
"text": "During the period of 1973 to 1979, Hawke acted as an informant for the United States government. During his time as ACTU leader, Hawke informed the US of details surrounding labour disputes, especially those relating to American companies and individuals, such as union disputes with Ford Motor Company and the black ban of Frank Sinatra. The major industrial action taken against Sinatra came about because Sinatra had made sexist comments against female journalists. The dispute was the subject of the 2003 film The Night We Called It a Day.",
"title": "Australian Council of Trade Unions"
},
{
"paragraph_id": 19,
"text": "In retaliation, unions grounded Sinatra's private jet in Melbourne, demanding he apologise. The popular view was that Mr Hawke engaged in protracted, boozy negotiations with Ol' Blue Eyes to reach a settlement. The [diplomatic] cables say the US embassy reached a deal with Mr Hawke to end the standoff, no apology was sought from Sinatra and that most of Mr Hawke's time was spent with the singer's lawyer.",
"title": "Australian Council of Trade Unions"
},
{
"paragraph_id": 20,
"text": "Hawke was described by US diplomats as \"a bulwark against anti-American sentiment and resurgent communism during the economic turmoil of the 1970s\", and often disputed with the Whitlam government over issues of foreign policy and industrial relations. With the knowledge of US diplomats, Hawke secretly planned to leave Labor in 1974 to form a new centrist political party to challenge the Whitlam government. This plan had the support of Rupert Murdoch and Hawke's confidant, Peter Abeles, but did not eventuate because of the events of 1975. US diplomats played a major role in shaping Hawke's consensus politics and economics.",
"title": "Australian Council of Trade Unions"
},
{
"paragraph_id": 21,
"text": "Hawke's first attempt to enter Parliament came during the 1963 federal election. He stood in the seat of Corio in Geelong and managed to achieve a 3.1% swing against the national trend, although he fell short of ousting longtime Liberal incumbent Hubert Opperman. Hawke rejected several opportunities to enter Parliament throughout the 1970s, something he later wrote that he \"regretted\". He eventually stood for election to the House of Representatives at the 1980 election for the safe Melbourne seat of Wills, winning it comfortably. Immediately upon his election to Parliament, Hawke was appointed to the Shadow Cabinet by Labor Leader Bill Hayden as Shadow Minister for Industrial Relations.",
"title": "Member of Parliament"
},
{
"paragraph_id": 22,
"text": "Hayden, after having led the Labour party to narrowly lose the 1980 election, was increasingly subject to criticism from Labor MPs over his leadership style. To quell speculation over his position, Hayden called a leadership spill on 16 July 1982, believing that if he won he would be guaranteed to lead Labor through to the next election. Hawke decided to challenge Hayden in the spill, but Hayden defeated him by five votes; the margin of victory, however, was too slim to dispel doubts that he could lead the Labor Party to victory at an election. Despite his defeat, Hawke began to agitate more seriously behind the scenes for a change in leadership, with opinion polls continuing to show that Hawke was a far more popular public figure than both Hayden and Prime Minister Malcolm Fraser. Hayden was further weakened after Labor's unexpectedly poor performance at a by-election in December 1982 for the Victorian seat of Flinders, following the resignation of the sitting member, former deputy Liberal leader Phillip Lynch. Labor needed a swing of 5.5% to win the seat and had been predicted by the media to win, but could only achieve 3%.",
"title": "Member of Parliament"
},
{
"paragraph_id": 23,
"text": "Labor Party power-brokers, such as Graham Richardson and Barrie Unsworth, now openly switched their allegiance from Hayden to Hawke. More significantly, Hayden's staunch friend and political ally, Labor's Senate Leader John Button, had become convinced that Hawke's chances of victory at an election were greater than Hayden's. Initially, Hayden believed that he could remain in his job, but Button's defection proved to be the final straw in convincing Hayden that he would have to resign as Labor Leader. Less than two months after the Flinders by-election result, Hayden announced his resignation as Leader of the Labor Party on 3 February 1983. Hawke was subsequently elected as Leader unopposed on 8 February, and became Leader of the Opposition in the process. Having learned that morning about the possible leadership change, on the same that Hawke assumed the leadership of the Labor Party, Malcolm Fraser called a snap election for 5 March 1983, unsuccessfully attempting to prevent Labor from making the leadership change. However, he was unable to have the Governor-General confirm the election before Labor announced the change.",
"title": "Member of Parliament"
},
{
"paragraph_id": 24,
"text": "At the 1983 election, Hawke led Labor to a landslide victory, achieving a 24-seat swing and ending seven years of Liberal Party rule.",
"title": "Member of Parliament"
},
{
"paragraph_id": 25,
"text": "With the election called at the same time that Hawke became Labor leader this meant that Hawke never sat in Parliament as Leader of the Opposition having spent the entirety of his short Opposition leadership in the election campaign which he won.",
"title": "Member of Parliament"
},
{
"paragraph_id": 26,
"text": "After Labor's landslide victory, Hawke was sworn in as the Prime Minister by the Governor-General Ninian Stephen on 11 March 1983. The style of the Hawke government was deliberately distinct from the Whitlam government, the most recent Labor government that preceded it. Rather than immediately initiating multiple extensive reform programs as Whitlam had, Hawke announced that Malcolm Fraser's pre-election concealment of the budget deficit meant that many of Labor's election commitments would have to be deferred. As part of his internal reforms package, Hawke divided the government into two tiers, with only the most senior ministers sitting in the Cabinet of Australia. The Labor caucus was still given the authority to determine who would make up the Ministry, but this move gave Hawke unprecedented powers to empower individual ministers.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 27,
"text": "In particular, the political partnership that developed between Hawke and his Treasurer, Paul Keating, proved to be essential to Labor's success in government, with multiple Labor figures in years since citing the partnership as the party's greatest ever. The two men proved a study in contrasts: Hawke was a Rhodes Scholar; Keating left high school early. Hawke's enthusiasms were cigars, betting and most forms of sport; Keating preferred classical architecture, Mahler symphonies and collecting British Regency and French Empire antiques. Despite not knowing one another before Hawke assumed the leadership in 1983, the two formed a personal as well as political relationship which enabled the Government to pursue a significant number of reforms, although there were occasional points of tension between the two.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 28,
"text": "The Labor Caucus under Hawke also developed a more formalised system of parliamentary factions, which significantly altered the dynamics of caucus operations. Unlike many of his predecessor leaders, Hawke's authority within the Labor Party was absolute. This enabled him to persuade MPs to support a substantial set of policy changes which had not been considered achievable by Labor governments in the past. Individual accounts from ministers indicate that while Hawke was not often the driving force behind individual reforms, outside of broader economic changes, he took on the role of providing political guidance on what was electorally feasible and how best to sell it to the public, tasks at which he proved highly successful. Hawke took on a very public role as Prime Minister, campaigning frequently even outside of election periods, and for much of his time in office proved to be incredibly popular with the Australian electorate; to this date he still holds the highest ever AC Nielsen approval rating of 75%.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 29,
"text": "The Hawke government oversaw significant economic reforms, and is often cited by economic historians as being a \"turning point\" from a protectionist, agricultural model to a more globalised and services-oriented economy. According to the journalist Paul Kelly, \"the most influential economic decisions of the 1980s were the floating of the Australian dollar and the deregulation of the financial system\". Although the Fraser government had played a part in the process of financial deregulation by commissioning the 1981 Campbell Report, opposition from Fraser himself had stalled this process. Shortly after its election in 1983, the Hawke government took the opportunity to implement a comprehensive program of economic reform, in the process \"transform(ing) economics and politics in Australia\".",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 30,
"text": "Hawke and Keating together led the process for overseeing the economic changes by launching a \"National Economic Summit\" one month after their election in 1983, which brought together business and industrial leaders together with politicians and trade union leaders; the three-day summit led to a unanimous adoption of a national economic strategy, generating sufficient political capital for widespread reform to follow. Among other reforms, the Hawke government floated the Australian dollar, repealed rules that prohibited foreign-owned banks to operate in Australia, dismantled the protectionist tariff system, privatised several state sector industries, ended the subsidisation of loss-making industries, and sold off part of the state-owned Commonwealth Bank.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 31,
"text": "The taxation system was also significantly reformed, with income tax rates reduced and the introduction of a fringe benefits tax and a capital gains tax; the latter two reforms were strongly opposed by the Liberal Party at the time, but were never reversed by them when they eventually returned to office in 1996. Partially offsetting these imposts upon the business community—the \"main loser\" from the 1985 Tax Summit according to Paul Kelly—was the introduction of full dividend imputation, a reform insisted upon by Keating. Funding for schools was also considerably increased as part of this package, while financial assistance was provided for students to enable them to stay at school longer; the number of Australian children completing school rose from 3 in 10 at the beginning of the Hawke government to 7 in 10 by its conclusion in 1991. Considerable progress was also made in directing assistance \"to the most disadvantaged recipients over the whole range of welfare benefits.\"",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 32,
"text": "Although criticisms were leveled against the Hawke government that it did not achieve all it said it would do on social policy, it nevertheless enacting a series of reforms which remain in place to the present day. From 1983 to 1989, the Government oversaw the permanent establishment of universal health care in Australia with the creation of Medicare, doubled the number of subsidised childcare places, began the introduction of occupational superannuation, oversaw a significant increase in school retention rates, created subsidised homecare services, oversaw the elimination of poverty traps in the welfare system, increased the real value of the old-age pension, reintroduced the six-monthly indexation of single-person unemployment benefits, and established a wide-ranging programme for paid family support, known as the Family Income Supplement. During the 1980s, the proportion of total government outlays allocated to families, the sick, single parents, widows, the handicapped, and veterans was significantly higher than under the previous Fraser and Whitlam governments.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 33,
"text": "In 1984, the Hawke government enacted the landmark Sex Discrimination Act 1984, which eliminated discrimination on the grounds of sex within the workplace. In 1989, Hawke oversaw the gradual re-introduction of some tuition fees for university study, creating set up the Higher Education Contributions Scheme (HECS). Under the original HECS, a $1,800 fee was charged to all university students, and the Commonwealth paid the balance. A student could defer payment of this HECS amount and repay the debt through the tax system, when the student's income exceeds a threshold level. As part of the reforms, Colleges of Advanced Education entered the University sector by various means. by doing so, university places were able to be expanded. Further notable policy decisions taken during the Government's time in office included the public health campaign regarding HIV/AIDS, and Indigenous land rights reform, with an investigation of the idea of a treaty between Aborigines and the Government being launched, although the latter would be overtaken by events, notably the Mabo court decision.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 34,
"text": "The Hawke government also drew attention for a series of notable environmental decisions, particularly in its second and third terms. In 1983, Hawke personally vetoed the construction of the Franklin Dam in Tasmania, responding to a groundswell of protest around the issue. Hawke also secured the nomination of the Wet Tropics of Queensland as a UNESCO World Heritage Site in 1987, preventing the forests there from being logged. Hawke would later appoint Graham Richardson as Environment Minister, tasking him with winning the second-preference support from environmental parties, something which Richardson later claimed was the major factor in the government's narrow re-election at the 1990 election. In the Government's fourth term, Hawke personally led the Australian delegation to secure changes to the Protocol on Environmental Protection to the Antarctic Treaty, ultimately winning a guarantee that drilling for minerals within Antarctica would be totally prohibited until 2048 at the earliest. Hawke later claimed that the Antarctic drilling ban was his \"proudest achievement\".",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 35,
"text": "As a former ACTU President, Hawke was well-placed to engage in reform of the industrial relations system in Australia, taking a lead on this policy area as in few others. Working closely with ministerial colleagues and the ACTU Secretary, Bill Kelty, Hawke negotiated with trade unions to establish the Prices and Incomes Accord in 1983, an agreement whereby unions agreed to restrict their demands for wage increases, and in turn the Government guaranteed to both minimise inflation and promote an increased social wage, including by establishing new social programmes such as Medicare.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 36,
"text": "Inflation had been a significant issue for the previous decade prior to the election of the Hawke government, regularly running into double-digits. The process of the Accord, by which the Government and trade unions would arbitrate and agree upon wage increases in many sectors, led to a decrease in both inflation and unemployment through to 1990. Criticisms of the Accord would come from both the right and the left of politics. Left-wing critics claimed that it kept real wages stagnant, and that the Accord was a policy of class collaboration and corporatism. By contrast, right-wing critics claimed that the Accord reduced the flexibility of the wages system. Supporters of the Accord, however, pointed to the improvements in the social security system that occurred, including the introduction of rental assistance for social security recipients, the creation of labour market schemes such as NewStart, and the introduction of the Family Income Supplement. In 1986, the Hawke government passed a bill to de-register the Builders Labourers Federation federally due to the union not following the Accord agreements.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 37,
"text": "Despite a percentage fall in real money wages from 1983 to 1991, the social wage of Australian workers was argued by the Government to have improved drastically as a result of these reforms, and the ensuing decline in inflation. The Accord was revisited six further times during the Hawke government, each time in response to new economic developments. The seventh and final revisiting would ultimately lead to the establishment of the enterprise bargaining system, although this would be finalised shortly after Hawke left office in 1991.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 38,
"text": "Arguably the most significant foreign policy achievement of the Government took place in 1989, after Hawke proposed a south-east Asian region-wide forum for leaders and economic ministers to discuss issues of common concern. After winning the support of key countries in the region, this led to the creation of the Asia-Pacific Economic Cooperation (APEC). The first APEC meeting duly took place in Canberra in November 1989; the economic ministers of Australia, Brunei, Canada, Indonesia, Japan, South Korea, Malaysia, New Zealand, Philippines, Singapore, Thailand and the United States all attended. APEC would subsequently grow to become one of the most pre-eminent high-level international forums in the world, particularly after the later inclusions of China and Russia, and the Keating government's later establishment of the APEC Leaders' Forum.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 39,
"text": "Elsewhere in Asia, the Hawke government played a significant role in the build-up to the United Nations peace process for Cambodia, culminating in the Transitional Authority; Hawke's Foreign Minister Gareth Evans was nominated for the Nobel Peace Prize for his role in negotiations. Hawke also took a major public stand after the 1989 Tiananmen Square protests and massacre; despite having spent years trying to get closer relations with China, Hawke gave a tearful address on national television describing the massacre in graphic detail, and unilaterally offered asylum to over 42,000 Chinese students who were living in Australia at the time, many of whom had publicly supported the Tiananmen protesters. Hawke did so without even consulting his Cabinet, stating later that he felt he simply had to act.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 40,
"text": "The Hawke government pursued a close relationship with the United States, assisted by Hawke's close friendship with US Secretary of State George Shultz; this led to a degree of controversy when the Government supported the US's plans to test ballistic missiles off the coast of Tasmania in 1985, as well as seeking to overturn Australia's long-standing ban on uranium exports. Although the US ultimately withdrew the plans to test the missiles, the furore led to a fall in Hawke's approval ratings. Shortly after the 1990 election, Hawke would lead Australia into its first overseas military campaign since the Vietnam War, forming a close alliance with US President George H. W. Bush to join the coalition in the Gulf War. The Royal Australian Navy contributed several destroyers and frigates to the war effort, which successfully concluded in February 1991, with the expulsion of Iraqi forces from Kuwait. The success of the campaign, and the lack of any Australian casualties, led to a brief increase in the popularity of the Government.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 41,
"text": "Through his role on the Commonwealth Heads of Government Meeting, Hawke played a leading role in ensuring the Commonwealth initiated an international boycott on foreign investment into South Africa, building on work undertaken by his predecessor Malcolm Fraser, and in the process clashing publicly with Prime Minister of the United Kingdom Margaret Thatcher, who initially favoured a more cautious approach. The resulting boycott, led by the Commonwealth, was widely credited with helping bring about the collapse of apartheid, and resulted in a high-profile visit by Nelson Mandela in October 1990, months after the latter's release from a 27-year stint in prison. During the visit, Mandela publicly thanked the Hawke government for the role it played in the boycott.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 42,
"text": "Hawke benefited greatly from the disarray into which the Liberal Party fell after the resignation of Fraser following the 1983 election. The Liberals were torn between supporters of the more conservative John Howard and the more liberal Andrew Peacock, with the pair frequently contesting the leadership. Hawke and Keating were also able to use the concealment of the size of the budget deficit by Fraser before the 1983 election to great effect, damaging the Liberal Party's economic credibility as a result.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 43,
"text": "However, Hawke's time as Prime Minister also saw friction develop between himself and the grassroots of the Labor Party, many of whom were unhappy at what they viewed as Hawke's iconoclasm and willingness to cooperate with business interests. Hawke regularly and publicly expressed his willingness to cull Labor's \"sacred cows\". The Labor Left faction, as well as prominent Labor backbencher Barry Jones, offered repeated criticisms of a number of government decisions. Hawke was also subject to challenges from some former colleagues in the trade union movement over his \"confrontationalist style\" in siding with the airline companies in the 1989 Australian pilots' strike.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 44,
"text": "Nevertheless, Hawke was able to comfortably maintain a lead as preferred prime minister in the vast majority of opinion polls carried out throughout his time in office. He recorded the highest popularity rating ever measured by an Australian opinion poll, reaching 75% approval in 1984. After leading Labor to a comfortable victory in the snap 1984 election, called to bring the mandate of the House of Representatives back in line with the Senate, Hawke was able to secure an unprecedented third consecutive term for Labor with a landslide victory in the double dissolution election of 1987. Hawke was subsequently able to lead the nation in the bicentennial celebrations of 1988, culminating with him welcoming Queen Elizabeth II to open the newly constructed Parliament House.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 45,
"text": "The late-1980s recession, and the accompanying high interest rates, saw the Government fall in opinion polls, with many doubting that Hawke could win a fourth election. Keating, who had long understood that he would eventually succeed Hawke as prime minister, began to plan a leadership change; at the end of 1988, Keating put pressure on Hawke to retire in the new year. Hawke rejected this suggestion but reached a secret agreement with Keating, the so-called \"Kirribilli Agreement\", stating that he would step down in Keating's favour at some point after the 1990 election. Hawke subsequently won that election, in the process leading Labor to a record fourth consecutive electoral victory, albeit by a slim margin. Hawke appointed Keating as deputy prime minister to replace the retiring Lionel Bowen.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 46,
"text": "By the end of 1990, frustrated by the lack of any indication from Hawke as to when he might retire, Keating made a provocative speech to the Federal Parliamentary Press Gallery. Hawke considered the speech disloyal, and told Keating he would renege on the Kirribilli Agreement as a result. After attempting to force a resolution privately, Keating finally resigned from the Government in June 1991 to challenge Hawke for the leadership. His resignation came soon after Hawke vetoed in Cabinet a proposal backed by Keating and other ministers for mining to take place at Coronation Hill in Kakadu National Park. Hawke won the leadership spill, and in a press conference after the result, Keating declared that he had fired his \"one shot\" on the leadership. Hawke appointed John Kerin to replace Keating as Treasurer.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 47,
"text": "Despite his victory in the June spill, Hawke quickly began to be regarded by many of his colleagues as a \"wounded\" leader; he had now lost his long-term political partner, his rating in opinion polls were beginning to fall significantly, and after nearly nine years as Prime Minister, there was speculation that it would soon be time for a new leader. Hawke's leadership was ultimately irrevocably damaged at the end of 1991; after Liberal Leader John Hewson released 'Fightback!', a detailed proposal for sweeping economic change, including the introduction of a goods and services tax, Hawke was forced to sack Kerin as Treasurer after the latter made a public gaffe attempting to attack the policy. Keating duly challenged for the leadership a second time on 19 December, arguing that he would better placed to defeat Hewson; this time, Keating succeeded, narrowly defeating Hawke by 56 votes to 51.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 48,
"text": "In a speech to the House of Representatives following the vote, Hawke declared that his nine years as prime minister had left Australia a better and wealthier country, and he was given a standing ovation by those present. He subsequently tendered his resignation to the Governor-General and pledged support to his successor. Hawke briefly returned to the backbench, before resigning from Parliament on 20 February 1992, sparking a by-election which was won by the independent candidate Phil Cleary from among a record field of 22 candidates. Keating would go on to lead Labor to a fifth victory at the 1993 election, although he was defeated by the Liberal Party at the 1996 election.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 49,
"text": "Hawke wrote that he had very few regrets over his time in office, although stated he wished he had been able to advance the cause of Indigenous land rights further. His bitterness towards Keating over the leadership challenges surfaced in his earlier memoirs, although by the 2000s Hawke stated he and Keating had buried their differences, and that they regularly dined together and considered each other friends. The publication of the book Hawke: The Prime Minister, by Hawke's second wife, Blanche d'Alpuget, in 2010, reignited conflict between the two, with Keating accusing Hawke and d'Alpuget of spreading falsehoods about his role in the Hawke government. Despite this, the two campaigned together for Labor several times, including at the 2019 election, where they released their first joint article for nearly three decades; Craig Emerson, who worked for both men, said they had reconciled in later years after Hawke grew ill.",
"title": "Prime Minister of Australia (1983–1991)"
},
{
"paragraph_id": 50,
"text": "After leaving Parliament, Hawke entered the business world, taking on a number of directorships and consultancy positions which enabled him to achieve considerable financial success. He avoided public involvement with the Labor Party during Keating's tenure as Prime Minister, not wanting to be seen as attempting to overshadow his successor. After Keating's defeat and the election of the Howard government at the 1996 election, he returned to public campaigning with Labor and regularly appearing at election launches. Despite his personal affection for Queen Elizabeth II, boasting that he had been her \"favourite Prime Minister\", Hawke was an enthusiastic republican and joined the campaign for a Yes vote in the 1999 republic referendum.",
"title": "Retirement and later life"
},
{
"paragraph_id": 51,
"text": "In 2002, Hawke was named to South Australia's Economic Development Board during the Rann government. In the lead up to the 2007 election, Hawke made a considerable personal effort to support Kevin Rudd, making speeches at a large number of campaign office openings across Australia, and appearing in multiple campaign advertisements. As well as campaigning against WorkChoices, Hawke also attacked John Howard's record as Treasurer, stating \"it was the judgement of every economist and international financial institution that it was the restructuring reforms undertaken by my government, with the full cooperation of the trade union movement, which created the strength of the Australian economy today\". In February 2008, after Rudd's victory, Hawke joined former Prime Ministers Gough Whitlam, Malcolm Fraser and Paul Keating in Parliament House to witness the long anticipated apology to the Stolen Generations.",
"title": "Retirement and later life"
},
{
"paragraph_id": 52,
"text": "In 2009, Hawke helped establish the Centre for Muslim and Non-Muslim Understanding at the University of South Australia. Interfaith dialogue was an important issue for Hawke, who told The Adelaide Review that he was \"convinced that one of the great potential dangers confronting the world is the lack of understanding in regard to the Muslim world. Fanatics have misrepresented what Islam is. They give a false impression of the essential nature of Islam.\"",
"title": "Retirement and later life"
},
{
"paragraph_id": 53,
"text": "In 2016, after taking part in Andrew Denton's Better Off Dead podcast, Hawke added his voice to calls for voluntary euthanasia to be legalised. Hawke labelled as 'absurd' the lack of political will to fix the problem. He revealed that he had such an arrangement with his wife Blanche should such a devastating medical situation occur. He also publicly advocated for nuclear power and the importation of international spent nuclear fuel to Australia for storage and disposal, stating that this could lead to considerable economic benefits for Australia.",
"title": "Retirement and later life"
},
{
"paragraph_id": 54,
"text": "In late December 2018, Hawke revealed that he was in \"terrible health\". While predicting a Labor win in the upcoming 2019 federal election, Hawke said he \"may not witness the party's success\". In May 2019, the month of the election, he issued a joint statement with Paul Keating endorsing Labor's economic plan and condemning the Liberal Party for \"completely [giving] up the economic reform agenda\". They stated that \"Shorten's Labor is the only party of government focused on the need to modernise the economy to deal with the major challenge of our time: human induced climate change\". It was the first joint press statement released by the two since 1991.",
"title": "Retirement and later life"
},
{
"paragraph_id": 55,
"text": "On 16 May 2019, two days before the election, Hawke died at his home in Northbridge at the age of 89, following a short illness. His family held a private cremation on 27 May at Macquarie Park Cemetery and Crematorium where he was subsequently interred. A state memorial was held at the Sydney Opera House on 14 June; speakers included Craig Emerson as master of ceremonies and Kim Beazley reading the eulogy, as well as Paul Keating, Julia Gillard, Bill Kelty, Ross Garnaut, and incumbent Prime Minister Scott Morrison and Opposition Leader Anthony Albanese.",
"title": "Retirement and later life"
},
{
"paragraph_id": 56,
"text": "Hawke married Hazel Masterson in 1956 at Perth Trinity Church. They had three children: Susan (born 1957), Stephen (born 1959) and Roslyn (born 1960). Their fourth child, Robert Jr, died in early infancy in 1963. Hawke was named Victorian Father of the Year in 1971, an honour which his wife disputed due to his heavy drinking and womanising. The couple divorced in 1995, after he left her for the writer Blanche d'Alpuget, and the two lived together in Northbridge, a suburb of the North Shore of Sydney. The divorce estranged Hawke from some of his family for a period, although they had reconciled by the 2010s.",
"title": "Personal life"
},
{
"paragraph_id": 57,
"text": "Throughout his early life, Hawke was a heavy drinker, having set a world record for drinking during his years as a student. Hawke eventually suffered from alcohol poisoning following the death of his and Hazel's infant son in 1963. He publicly announced in 1980 that he would abstain from alcohol to seek election to Parliament, in a move which garnered significant public attention and support. Hawke began to drink again following his retirement from politics, although to a more manageable extent; on several occasions, in his later years, videos of Hawke downing beer at cricket matches would frequently go viral.",
"title": "Personal life"
},
{
"paragraph_id": 58,
"text": "On the subject of religion, Hawke wrote, while attending the 1952 World Christian Youth Conference in India, that \"there were all these poverty stricken kids at the gate of this palatial place where we were feeding our face and I just (was) struck by this enormous sense of irrelevance of religion to the needs of people\". He subsequently abandoned his Christian beliefs. By the time he entered politics he was a self-described agnostic. Hawke told Andrew Denton in 2008 that his father's Christian faith had continued to influence his outlook, saying \"My father said if you believe in the fatherhood of God you must necessarily believe in the brotherhood of man, it follows necessarily, and even though I left the church and was not religious, that truth remained with me.\"",
"title": "Personal life"
},
{
"paragraph_id": 59,
"text": "Hawke was a supporter of National Rugby League club the Canberra Raiders.",
"title": "Personal life"
},
{
"paragraph_id": 60,
"text": "A biographical television film, Hawke, premiered on the Ten Network in Australia on 18 July 2010, with Richard Roxburgh playing the title character. Rachael Blake and Felix Williamson portrayed Hazel Hawke and Paul Keating, respectively. Roxburgh reprised his role as Hawke in the 2020 episode \"Terra Nullius\" of the Netflix series The Crown.",
"title": "Legacy"
},
{
"paragraph_id": 61,
"text": "In July 2019, the Australian Government announced it would spend $750,000 to purchase and renovate the house in Bordertown where Hawke was born and spent his early childhood. In January 2021, the Tatiara District Council decided to turn the house into tourist accommodation.",
"title": "Legacy"
},
{
"paragraph_id": 62,
"text": "In December 2020, the Western Australian Government announced that it had purchased Hawke's childhood home in West Leederville and would maintain it as a state asset. The property will also be assessed for entry onto the State Register of Heritage Places.",
"title": "Legacy"
},
{
"paragraph_id": 63,
"text": "The Australian Government pledged $5 million in July 2019 to establish a new annual scholarship—the Bob Hawke John Monash Scholarship—through the General Sir John Monash Foundation. Bob Hawke College, a high school in Subiaco, Western Australia named after Hawke, was opened in February 2020.",
"title": "Legacy"
},
{
"paragraph_id": 64,
"text": "In March 2020, the Australian Electoral Commission announced that it would create a new Australian electoral division in the House of Representatives named in honour of Hawke. The Division of Hawke was first contested at the 2022 federal election, and is located in the state of Victoria, near the seat of Wills, which Hawke represented from 1980 to 1992.",
"title": "Legacy"
},
{
"paragraph_id": 65,
"text": "Orders",
"title": "Honours"
},
{
"paragraph_id": 66,
"text": "Foreign honours",
"title": "Honours"
},
{
"paragraph_id": 67,
"text": "Fellowships",
"title": "Honours"
},
{
"paragraph_id": 68,
"text": "Honorary degrees",
"title": "Honours"
}
] | Robert James Lee Hawke was an Australian politician and trade unionist who served as the 23rd prime minister of Australia from 1983 to 1991. He held office as the leader of the Australian Labor Party (ALP), having previously served as the president of the Australian Council of Trade Unions from 1969 to 1980 and president of the Labor Party national executive from 1973 to 1978. Hawke was born in Bordertown, South Australia. He attended the University of Western Australia and went on to study at University College, Oxford as a Rhodes Scholar. In 1956, Hawke joined the Australian Council of Trade Unions (ACTU) as a research officer. Having risen to become responsible for national wage case arbitration, he was elected as president of the ACTU in 1969, where he achieved a high public profile. In 1973, he was appointed as president of the Labor Party. In 1980, Hawke stood down from his roles as ACTU and Labor Party president to announce his intention to enter parliamentary politics, and was subsequently elected to the Australian House of Representatives as a member of parliament (MP) for the division of Wills at the 1980 federal election. Three years later, he was elected unopposed to replace Bill Hayden as leader of the Australian Labor Party, and within five weeks led Labor to a landslide victory at the 1983 election, and was sworn in as prime minister. He led Labor to victory three times, with successful outcomes at the 1984, 1987 and 1990 elections, making him the most electorally successful prime minister in the history of the Labor Party. The Hawke government implemented a significant number of reforms, including major economic reforms, the establishment of Landcare, the introduction of the universal healthcare scheme Medicare, brokering the Prices and Incomes Accord, creating APEC, floating the Australian dollar, deregulating the financial sector, introducing the Family Assistance Scheme, enacting the Sex Discrimination Act to prevent discrimination in the workplace, declaring "Advance Australia Fair" as the country's national anthem, initiating superannuation pension schemes for all workers, negotiating a ban on mining in Antarctica and overseeing passage of the Australia Act that removed all remaining jurisdiction by the United Kingdom from Australia. In June 1991, Hawke faced a leadership challenge by the Treasurer, Paul Keating, but Hawke managed to retain power; however, Keating mounted a second challenge six months later, and won narrowly, replacing Hawke as prime minister. Hawke subsequently retired from parliament, pursuing both a business career and a number of charitable causes, until his death in 2019, aged 89. Hawke remains his party's longest-serving prime minister, and Australia's third-longest-serving prime minister behind Robert Menzies and John Howard. He is also the only prime minister to be born in South Australia and the only one raised and educated in Western Australia. Hawke holds the highest-ever approval rating for an Australian prime minister, reaching 75% approval in 1984. Hawke is frequently ranked within the upper tier of Australian prime ministers by historians. | 2001-08-17T16:08:51Z | 2023-12-19T10:21:33Z | [
"Template:S-npo",
"Template:S-bef",
"Template:Clear",
"Template:Citation",
"Template:Commons category",
"Template:Refend",
"Template:S-ttl",
"Template:Australian Labor Party",
"Template:See",
"Template:Flagicon",
"Template:Webarchive",
"Template:S-off",
"Template:Prime Ministers of Australia",
"Template:Quotation",
"Template:Dead link",
"Template:S-aft",
"Template:S-ppo",
"Template:S-end",
"Template:Authority control",
"Template:Cite news",
"Template:Cite AV media",
"Template:S-start",
"Template:Portal",
"Template:Reflist",
"Template:Cite book",
"Template:ISBN",
"Template:Cite journal",
"Template:Short description",
"Template:Infobox officeholder",
"Template:Efn",
"Template:Cbignore",
"Template:Cite press release",
"Template:Cite episode",
"Template:Refbegin",
"Template:Leaders of the Australian Labor Party",
"Template:Use dmy dates",
"Template:Bob Hawke sidebar",
"Template:Cite web",
"Template:Sfn",
"Template:Multiple image",
"Template:Notelist",
"Template:Use Australian English",
"Template:Small",
"Template:Main",
"Template:C-SPAN",
"Template:S-par",
"Template:ACTU Presidents"
] | https://en.wikipedia.org/wiki/Bob_Hawke |
4,060 | Baldr | Baldr (Old Norse: [ˈbɑldz̠]; also Balder, Baldur) is a god in Germanic mythology. In Norse mythology, he is a son of the god Odin and the goddess Frigg, and has numerous brothers, such as Thor and Váli. In wider Germanic mythology, the god was known in Old English as Bældæġ, and in Old High German as Balder, all ultimately stemming from the Proto-Germanic theonym *Balðraz ('hero' or 'prince').
During the 12th century, Danish accounts by Saxo Grammaticus and other Danish Latin chroniclers recorded a euhemerized account of his story. Compiled in Iceland during the 13th century, but based on older Old Norse poetry, the Poetic Edda and the Prose Edda contain numerous references to the death of Baldr as both a great tragedy to the Æsir and a harbinger of Ragnarök.
According to Gylfaginning, a book of Snorri Sturluson's Prose Edda, Baldr's wife is Nanna and their son is Forseti. Baldr had the greatest ship ever built, Hringhorni, and there is no place more beautiful than his hall, Breidablik.
The Old Norse theonym Baldr ('brave, defiant'; also 'lord, prince') and its various Germanic cognates – including Old English Bældæg and Old High German Balder (or Palter) – probably stem from Proto-Germanic *Balðraz ('Hero, Prince'; cf. Old Norse mann-baldr 'great man', Old English bealdor 'prince, hero'), itself a derivative of *balþaz, meaning 'brave' (cf. Old Norse ballr 'hard, stubborn', Gothic balþa* 'bold, frank', Old English beald 'bold, brave, confident', Old Saxon bald 'valiant, bold', Old High German bald 'brave, courageous').
This etymology was originally proposed by Jacob Grimm (1835), who also speculated on a comparison with the Lithuanian báltas ('white', also the name of a light-god) based on the semantic development from 'white' to 'shining' then 'strong'. According to linguist Vladimir Orel, this could be linguistically tenable. Philologist Rudolf Simek also argues that the Old English Bældæg should be interpreted as meaning 'shining day', from a Proto-Germanic root *bēl- (cf. Old English bæl, Old Norse bál 'fire') attached to dæg ('day').
Old Norse also shows the usage of the word as an honorific in a few cases, as in baldur î brynju (Sæm. 272b) and herbaldr (Sæm. 218b), in general epithets of heroes. In continental Saxon and Anglo-Saxon tradition, the son of Woden is called not Bealdor but Baldag (Saxon) and Bældæg, Beldeg (Anglo-Saxon), which shows association with "day", possibly with Day personified as a deity. This, as Grimm points out, would agree with the meaning "shining one, white one, a god" derived from the meaning of Baltic baltas, further adducing Slavic Belobog and German Berhta.
One of the two Merseburg Incantations names Balder (in the genitive singular Balderes), but also mentions a figure named Phol, considered to be a byname for Baldr (as in Scandinavian Falr, Fjalarr; (in Saxo) Balderus : Fjallerus). The incantation tells of Phol ende Wotan riding to the woods, where the foot of Baldr's foal is sprained. Sinthgunt (the sister of the sun), Frigg and Odin sing to the foot in order for it to heal.
Unlike the Prose Edda, in the Poetic Edda the tale of Baldr's death is referred to rather than recounted at length. Baldr is mentioned in Völuspá, in Lokasenna, and is the subject of the Eddic poem Baldr's Dreams.
Among the visions which the Völva sees and describes in Völuspá is Baldr's death. In stanza 32, the Völva says she saw the fate of Baldr "the bleeding god":
Henry Adams Bellows translation: I saw for Baldr, | the bleeding god, The son of Othin, | his destiny set: Famous and fair | in the lofty fields, Full grown in strength | the mistletoe stood.
In the next two stanzas, the Völva refers to Baldr's killing, describes the birth of Váli for the slaying of Höðr and the weeping of Frigg:
Stanza 33: From the branch which seemed | so slender and fair Came a harmful shaft | that Hoth should hurl; But the brother of Baldr | was born ere long, And one night old | fought Othin's son. Stanza 34: His hands he washed not, | his hair he combed not, Till he bore to the bale-blaze | Baldr's foe. But in Fensalir | did Frigg weep sore For Valhall's need: | would you know yet more?
In stanza 62 of Völuspá, looking far into the future, the Völva says that Höðr and Baldr will come back, with the union, according to Bellows, being a symbol of the new age of peace:
Then fields unsowed | bear ripened fruit, All ills grow better, | and Baldr comes back; Baldr and Hoth dwell | in Hropt's battle-hall, And the mighty gods: | would you know yet more?
Baldr is mentioned in two stanzas of Lokasenna, a poem which describes a flyting between the gods and the god Loki. In the first of the two stanzas, Frigg, Baldr's mother, tells Loki that if she had a son like Baldr, Loki would be killed:
Jackson Crawford translation: You know, if I had a son like Balder, sitting here with me in Aegir's hall, in the presence of these gods, I declare you would never come out alive, you'd be killed shortly.
In the next stanza, Loki responds to Frigg, and says that he is the reason Baldr "will never ride home again":
You must want me to recount even more of my mischief, Frigg. After all, I'm the one who made it so that Balder will never ride home again.
The Eddic poem Baldr's Dreams opens with the gods holding a council discussing why Baldr had had bad dreams:
Henry Adams Bellows translation: Once were the gods | together met, And the goddesses came | and council held, And the far-famed ones | the truth would find, Why baleful dreams | to Baldr had come.
Odin then rides to Hel, to a Völva's grave, and awakens her using magic. The Völva asks Odin, whom she does not recognize, who he is, and Odin answers that he is Vegtam ("Wanderer"). Odin asks the Völva for whom the benches are covered in rings and the floor covered in gold. The Völva tells him that mead is brewed there for Baldr, and that she spoke unwillingly, so she will speak no more:
Here for Baldr | the mead is brewed, The shining drink, | and a shield lies o'er it; But their hope is gone | from the mighty gods. Unwilling I spake, | and now would be still.
Odin asks the Völva to not be silent and asks her who will kill Baldr. The Völva replies that Höðr will kill Baldr, and again says that she spoke unwillingly, and that she will speak no more:
Hoth thither bears | the far-famed branch, He shall the bane | of Baldr become, And steal the life | from Othin's son. Unwilling I spake, | and now would be still.
Odin again asks the Völva to not be silent and asks her who will avenge Baldr's death. The Völva replies that Váli will, when he is one night old. Once again, she says that she will speak no more:
Rind bears Vali | in Vestrsalir, And one night old | fights Othin's son; His hands he shall wash not, | his hair he shall comb not, Till the slayer of Baldr | he brings to the flames. Unwilling I spake, | and now would be still.
Odin again asks the Völva to not be silent and asks who the women who will then weep will be. The Völva realizes that Vegtam is Odin in disguise. Odin says that the Völva is not a Völva, and that she is the mother of three giants. The Völva tells Odin to ride back home proud, because she will speak to no more men until Loki escapes his bonds.
In Gylfaginning, Baldr is described as follows:
Apart from this description, Baldr is known primarily for the story of his death, which is seen as the first in a chain of events that will ultimately lead to the destruction of the gods at Ragnarök. According to Völuspá, Baldr will be reborn in the new world.
Baldr had a dream of his own death and his mother, Frigg, had the same dream. Since dreams were usually prophetic, this depressed him, and so Frigg made every object on earth vow never to hurt Baldr. All objects made this vow, save for the mistletoe—a detail which has traditionally been explained with the idea that it was too unimportant and nonthreatening to bother asking it to make the vow, but which Merrill Kaplan has instead argued echoes the fact that young people were not eligible to swear legal oaths, which could make them a threat later in life.
When Loki, the mischief-maker, heard of this, he made a magical spear from this plant (in some later versions, an arrow). He hurried to the place where the gods were indulging in their new pastime of hurling objects at Baldr, which would bounce off without harming him. Loki gave the spear to Baldr's brother, the blind god Höðr, who then inadvertently killed his brother with it (other versions suggest that Loki guided the arrow himself). For this act, the ásynja Rindr bore Odin a son, Váli, who grew to adulthood within a day and slew Höðr.
Baldr was ceremonially burnt upon his ship Hringhorni, the largest of all ships. On the pyre he was given the magical ring Draupnir. At first the gods were not able to push the ship out to sea, and so they sent for Hyrrokin, a giantess, who came riding on a wolf and gave the ship such a push that fire flashed from the rollers and all the earth shook.
As he was carried to the ship, Odin whispered something in his ear. The import of this speech was held to be unknowable, and the question of what was said was thus used as an unanswerable riddle by Odin in other sources, namely against the giant Vafthrudnir in the Eddic poem Vafthrudnismal and in the riddles of Gestumblindi in Hervarar saga.
Upon seeing the corpse being carried to the ship, Nanna, his wife, died of grief. She was then placed on the funeral pyre (perhaps a toned-down instance of Sati, also attested in the Arab traveller Ibn Fadlan's account of a funeral among the Rus'), which was then set alight. Baldr's horse with all its trappings was also laid on the pyre.
As the pyre was set on fire, Thor blessed it with his hammer Mjǫllnir. As he did a small dwarf named Litr came running before his feet. Thor then kicked him into the pyre.
Upon Frigg's entreaties, delivered through the messenger Hermod, Hel promised to release Baldr from the underworld if all objects alive and dead would weep for him. All did, except a giantess, Þökk (often presumed to be the god Loki in disguise), who refused to mourn the slain god. Thus Baldr had to remain in the underworld, not to emerge until after Ragnarök, when he and his brother Höðr would be reconciled and rule the new earth together with Thor's sons.
Besides these descriptions of Baldr, the Prose Edda also explicitly links him to the Anglo-Saxon Beldeg in its prologue.
Writing during the end of the 12th century, the Danish historian Saxo Grammaticus tells the story of Baldr (recorded as Balderus) in a form that professes to be historical. According to him, Balderus and Høtherus were rival suitors for the hand of Nanna, daughter of Gewar, King of Norway. Balderus was a demigod and common steel could not wound his sacred body. The two rivals encountered each other in a terrific battle. Though Odin and Thor and the other gods fought for Balderus, he was defeated and fled away, and Høtherus married the princess.
Nevertheless, Balderus took heart of grace and again met Høtherus in a stricken field. But he fared even worse than before. Høtherus dealt him a deadly wound with a magic sword, named Mistletoe, which he had received from Mimir, the satyr of the woods; after lingering three days in pain Balderus died of his injury and was buried with royal honours in a barrow.
A Latin votive inscription from Utrecht, from the 3rd or 4th century C.E., has been theorized as containing the dative form Baldruo, pointing to a Latin nominative singular *Baldruus, which some have identified with the Norse/Germanic god, although both the reading and this interpretation have been questioned.
In the Anglo-Saxon Chronicle Baldr is named as the ancestor of the monarchy of Kent, Bernicia, Deira, and Wessex through his supposed son Brond.
There are a few old place names in Scandinavia that contain the name Baldr. The most certain and notable one is the (former) parish name Balleshol in Hedmark county, Norway: "a Balldrshole" 1356 (where the last element is hóll m "mound; small hill"). Others may be (in Norse forms) Baldrsberg in Vestfold county, Baldrsheimr in Hordaland county, Baldrsnes in Sør-Trøndelag county—and (very uncertain) the Balsfjorden fjord and Balsfjord municipality in Troms county.
In Copenhagen, there is also a Baldersgade, or "Balder's Street". A street in downtown Reykjavík is called Baldursgata (Baldur's Street).
In Sweden there is a Baldersgatan (Balder's Street) in Stockholm. There is also Baldersnäs (Balder's isthmus), Baldersvik (Balder's bay), Balders udde (Balder's headland) and Baldersberg (Balder's mountain) at various places.
Balder the Brave is a fictional character based on Baldr. He has appeared in comic books published by Marvel Comics as the half-brother of Thor, and son of Odin, ruler of the gods. Baldr is featured in a number of video games. In Ensemble Studios' 2002 video game Age of Mythology, Baldr is one of nine minor gods Norse players can worship. Baldr (spelled Baldur in-game) is also the main antagonist in Santa Monica Studio's 2018 video game God of War. However, he differs greatly in the game from the Baldr depicted in Norse writings and traditional artistic depictions as he is much more aggressive, crude, and rugged in appearance.
{
"paragraph_id": 0,
"text": "Baldr (Old Norse: [ˈbɑldz̠]; also Balder, Baldur) is a god in Germanic mythology. In Norse mythology, he is a son of the god Odin and the goddess Frigg, and has numerous brothers, such as Thor and Váli. In wider Germanic mythology, the god was known in Old English as Bældæġ, and in Old High German as Balder, all ultimately stemming from the Proto-Germanic theonym *Balðraz ('hero' or 'prince').",
"title": ""
},
{
"paragraph_id": 1,
"text": "During the 12th century, Danish accounts by Saxo Grammaticus and other Danish Latin chroniclers recorded a euhemerized account of his story. Compiled in Iceland during the 13th century, but based on older Old Norse poetry, the Poetic Edda and the Prose Edda contain numerous references to the death of Baldr as both a great tragedy to the Æsir and a harbinger of Ragnarök.",
"title": ""
},
{
"paragraph_id": 2,
"text": "According to Gylfaginning, a book of Snorri Sturluson's Prose Edda, Baldr's wife is Nanna and their son is Forseti. Baldr had the greatest ship ever built, Hringhorni, and there is no place more beautiful than his hall, Breidablik.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Old Norse theonym Baldr ('brave, defiant'; also 'lord, prince') and its various Germanic cognates – including Old English Bældæg and Old High German Balder (or Palter) – probably stems from Proto-Germanic *Balðraz ('Hero, Prince'; cf. Old Norse mann-baldr 'great man', Old English bealdor 'prince, hero'), itself a derivative of *balþaz, meaning 'brave' (cf. Old Norse ballr 'hard, stubborn', Gothic balþa* 'bold, frank', Old English beald 'bold, brave, confident', Old Saxon bald 'valiant, bold', Old High German bald 'brave, courageous').",
"title": "Name"
},
{
"paragraph_id": 4,
"text": "This etymology was originally proposed by Jacob Grimm (1835), who also speculated on a comparison with the Lithuanian báltas ('white', also the name of a light-god) based on the semantic development from 'white' to 'shining' then 'strong'. According to linguist Vladimir Orel, this could be linguistically tenable. Philologist Rudolf Simek also argues that the Old English Bældæg should be interpreted as meaning 'shining day', from a Proto-Germanic root *bēl- (cf. Old English bæl, Old Norse bál 'fire') attached to dæg ('day').",
"title": "Name"
},
{
"paragraph_id": 5,
"text": "Old Norse also shows the usage of the word as an honorific in a few cases, as in baldur î brynju (Sæm. 272b) and herbaldr (Sæm. 218b), in general epithets of heroes. In continental Saxon and Anglo-Saxon tradition, the son of Woden is called not Bealdor but Baldag (Saxon) and Bældæg, Beldeg (Anglo-Saxon), which shows association with \"day\", possibly with Day personified as a deity. This, as Grimm points out, would agree with the meaning \"shining one, white one, a god\" derived from the meaning of Baltic baltas, further adducing Slavic Belobog and German Berhta.",
"title": "Name"
},
{
"paragraph_id": 6,
"text": "One of the two Merseburg Incantations names Balder (in the genitive singular Balderes), but also mentions a figure named Phol, considered to be a byname for Baldr (as in Scandinavian Falr, Fjalarr; (in Saxo) Balderus : Fjallerus). The incantation relates of Phol ende Wotan riding to the woods, where the foot of Baldr's foal is sprained. Sinthgunt (the sister of the sun), Frigg and Odin sing to the foot in order for it to heal.",
"title": "Attestations"
},
{
"paragraph_id": 7,
"text": "Unlike the Prose Edda, in the Poetic Edda the tale of Baldr's death is referred to rather than recounted at length. Baldr is mentioned in Völuspá, in Lokasenna, and is the subject of the Eddic poem Baldr's Dreams.",
"title": "Attestations"
},
{
"paragraph_id": 8,
"text": "Among the visions which the Völva sees and describes in Völuspá is Baldr's death. In stanza 32, the Völva says she saw the fate of Baldr \"the bleeding god\":",
"title": "Attestations"
},
{
"paragraph_id": 9,
"text": "Henry Adams Bellows translation: I saw for Baldr, | the bleeding god, The son of Othin, | his destiny set: Famous and fair | in the lofty fields, Full grown in strength | the mistletoe stood.",
"title": "Attestations"
},
{
"paragraph_id": 10,
"text": "In the next two stanzas, the Völva refers to Baldr's killing, describes the birth of Váli for the slaying of Höðr and the weeping of Frigg:",
"title": "Attestations"
},
{
"paragraph_id": 11,
"text": "Stanza 33: From the branch which seemed | so slender and fair Came a harmful shaft | that Hoth should hurl; But the brother of Baldr | was born ere long, And one night old | fought Othin's son. Stanza 34: His hands he washed not, | his hair he combed not, Till he bore to the bale-blaze | Baldr's foe. But in Fensalir | did Frigg weep sore For Valhall's need: | would you know yet more?",
"title": "Attestations"
},
{
"paragraph_id": 12,
"text": "In stanza 62 of Völuspá, looking far into the future, the Völva says that Höðr and Baldr will come back, with the union, according to Bellows, being a symbol of the new age of peace:",
"title": "Attestations"
},
{
"paragraph_id": 13,
"text": "Then fields unsowed | bear ripened fruit, All ills grow better, | and Baldr comes back; Baldr and Hoth dwell | in Hropt's battle-hall, And the mighty gods: | would you know yet more?",
"title": "Attestations"
},
{
"paragraph_id": 14,
"text": "Baldr is mentioned in two stanzas of Lokasenna, a poem which describes a flyting between the gods and the god Loki. In the first of the two stanzas, Frigg, Baldr's mother, tells Loki that if she had a son like Baldr, Loki would be killed:",
"title": "Attestations"
},
{
"paragraph_id": 15,
"text": "Jackson Crawford translation: You know, if I had a son like Balder, sitting here with me in Aegir's hall, in the presence of these gods, I declare you would never come out alive, you'd be killed shortly.",
"title": "Attestations"
},
{
"paragraph_id": 16,
"text": "In the next stanza, Loki responds to Frigg, and says that he is the reason Baldr \"will never ride home again\":",
"title": "Attestations"
},
{
"paragraph_id": 17,
"text": "You must want me to recount even more of my mischief, Frigg. After all, I'm the one who made it so that Balder will never ride home again.",
"title": "Attestations"
},
{
"paragraph_id": 18,
"text": "The Eddic poem Baldr's Dreams opens with the gods holding a council discussing why Baldr had had bad dreams:",
"title": "Attestations"
},
{
"paragraph_id": 19,
"text": "Henry Adams Bellows translation: Once were the gods | together met, And the goddesses came | and council held, And the far-famed ones | the truth would find, Why baleful dreams | to Baldr had come.",
"title": "Attestations"
},
{
"paragraph_id": 20,
"text": "Odin then rides to Hel to a Völva's grave and awakens her using magic. The Völva asks Odin, who she does not recognize, who he is, and Odin answers that he is Vegtam (\"Wanderer\"). Odin asks the Völva for whom are the benches covered in rings and the floor covered in gold. The Völva tells him that in their location mead is brewed for Baldr, and that she spoke unwillingly, so she will speak no more:",
"title": "Attestations"
},
{
"paragraph_id": 21,
"text": "Here for Baldr | the mead is brewed, The shining drink, | and a shield lies o'er it; But their hope is gone | from the mighty gods. Unwilling I spake, | and now would be still.",
"title": "Attestations"
},
{
"paragraph_id": 22,
"text": "Odin asks the Völva to not be silent and asks her who will kill Baldr. The Völva replies and says that Höðr will kill Baldr, and again says that she spoke unwillingly, and that she will speak no more:",
"title": "Attestations"
},
{
"paragraph_id": 23,
"text": "Hoth thither bears | the far-famed branch, He shall the bane | of Baldr become, And steal the life | from Othin's son. Unwilling I spake, | and now would be still.",
"title": "Attestations"
},
{
"paragraph_id": 24,
"text": "Odin again asks the Völva to not be silent and asks her who will avenge Baldr's death. The Völva replies that Váli will, when he will be one night old. Once again, she says that she will speak no more:",
"title": "Attestations"
},
{
"paragraph_id": 25,
"text": "Rind bears Vali | in Vestrsalir, And one night old | fights Othin's son; His hands he shall wash not, | his hair he shall comb not, Till the slayer of Baldr | he brings to the flames. Unwilling I spake, | and now would be still.",
"title": "Attestations"
},
{
"paragraph_id": 26,
"text": "Odin again asks the Völva to not be silent and says that he seeks to know who the women that will then weep be. The Völva realizes that Vegtam is Odin in disguise. Odin says that the Völva is not a Völva, and that she is the mother of three giants. The Völva tells Odin to ride back home proud, because she will speak to no more men until Loki escapes his bounds.",
"title": "Attestations"
},
{
"paragraph_id": 27,
"text": "In Gylfaginning, Baldr is described as follows:",
"title": "Attestations"
},
{
"paragraph_id": 28,
"text": "Apart from this description, Baldr is known primarily for the story of his death, which is seen as the first in a chain of events that will ultimately lead to the destruction of the gods at Ragnarök. According to Völuspá, Baldr will be reborn in the new world.",
"title": "Attestations"
},
{
"paragraph_id": 29,
"text": "Baldr had a dream of his own death and his mother, Frigg, had the same dream. Since dreams were usually prophetic, this depressed him, and so Frigg made every object on earth vow never to hurt Baldr. All objects made this vow, save for the mistletoe—a detail which has traditionally been explained with the idea that it was too unimportant and nonthreatening to bother asking it to make the vow, but which Merrill Kaplan has instead argued echoes the fact that young people were not eligible to swear legal oaths, which could make them a threat later in life.",
"title": "Attestations"
},
{
"paragraph_id": 30,
"text": "When Loki, the mischief-maker, heard of this, he made a magical spear from this plant (in some later versions, an arrow). He hurried to the place where the gods were indulging in their new pastime of hurling objects at Baldr, which would bounce off without harming him. Loki gave the spear to Baldr's brother, the blind god Höðr, who then inadvertently killed his brother with it (other versions suggest that Loki guided the arrow himself). For this act, Odin and the ásynja Rindr gave birth to Váli, who grew to adulthood within a day and slew Höðr.",
"title": "Attestations"
},
{
"paragraph_id": 31,
"text": "Baldr was ceremonially burnt upon his ship Hringhorni, the largest of all ships. On the pyre he was given the magical ring Draupnir. At first the gods were not able to push the ship out onto sea, and so they sent for Hyrrokin, a giantess, who came riding on a wolf and gave the ship such a push that fire flashed from the rollers and all the earth shook.",
"title": "Attestations"
},
{
"paragraph_id": 32,
"text": "As he was carried to the ship, Odin whispered something in his ear. The import of this speech was held to be unknowable, and the question of what was said was thus used as an unanswerable riddle by Odin in other sources, namely against the giant Vafthrudnir in the Eddic poem Vafthrudnismal and in the riddles of Gestumblindi in Hervarar saga.",
"title": "Attestations"
},
{
"paragraph_id": 33,
"text": "Upon seeing the corpse being carried to the ship, Nanna, his wife, died of grief. She was then placed on the funeral fire (perhaps a toned-down instance of Sati, also attested in the Arab traveller Ibn Fadlan’s account of a funeral among the Rus'), after which it was set on fire. Baldr's horse with all its trappings was also laid on the pyre.",
"title": "Attestations"
},
{
"paragraph_id": 34,
"text": "As the pyre was set on fire, Thor blessed it with his hammer Mjǫllnir. As he did a small dwarf named Litr came running before his feet. Thor then kicked him into the pyre.",
"title": "Attestations"
},
{
"paragraph_id": 35,
"text": "Upon Frigg's entreaties, delivered through the messenger Hermod, Hel promised to release Baldr from the underworld if all objects alive and dead would weep for him. All did, except a giantess, Þökk (often presumed to be the god Loki in disguise), who refused to mourn the slain god. Thus Baldr had to remain in the underworld, not to emerge until after Ragnarök, when he and his brother Höðr would be reconciled and rule the new earth together with Thor's sons.",
"title": "Attestations"
},
{
"paragraph_id": 36,
"text": "Besides these descriptions of Baldr, the Prose Edda also explicitly links him to the Anglo-Saxon Beldeg in its prologue.",
"title": "Attestations"
},
{
"paragraph_id": 37,
"text": "Writing during the end of the 12th century, the Danish historian Saxo Grammaticus tells the story of Baldr (recorded as Balderus) in a form that professes to be historical. According to him, Balderus and Høtherus were rival suitors for the hand of Nanna, daughter of Gewar, King of Norway. Balderus was a demigod and common steel could not wound his sacred body. The two rivals encountered each other in a terrific battle. Though Odin and Thor and the other gods fought for Balderus, he was defeated and fled away, and Høtherus married the princess.",
"title": "Attestations"
},
{
"paragraph_id": 38,
"text": "Nevertheless, Balderus took heart of grace and again met Høtherus in a stricken field. But he fared even worse than before. Høtherus dealt him a deadly wound with a magic sword, named Mistletoe, which he had received from Mimir, the satyr of the woods; after lingering three days in pain Balderus died of his injury and was buried with royal honours in a barrow.",
"title": "Attestations"
},
{
"paragraph_id": 39,
"text": "A Latin votive inscription from Utrecht, from the 3rd or 4th century C.E., has been theorized as containing the dative form Baldruo, pointing to a Latin nominative singular *Baldruus, which some have identified with the Norse/Germanic god, although both the reading and this interpretation have been questioned.",
"title": "Attestations"
},
{
"paragraph_id": 40,
"text": "In the Anglo-Saxon Chronicle Baldr is named as the ancestor of the monarchy of Kent, Bernicia, Deira, and Wessex through his supposed son Brond.",
"title": "Attestations"
},
{
"paragraph_id": 41,
"text": "There are a few old place names in Scandinavia that contain the name Baldr. The most certain and notable one is the (former) parish name Balleshol in Hedmark county, Norway: \"a Balldrshole\" 1356 (where the last element is hóll m \"mound; small hill\"). Others may be (in Norse forms) Baldrsberg in Vestfold county, Baldrsheimr in Hordaland county Baldrsnes in Sør-Trøndelag county—and (very uncertain) the Balsfjorden fjord and Balsfjord municipality in Troms county.",
"title": "Attestations"
},
{
"paragraph_id": 42,
"text": "In Copenhagen, there is also a Baldersgade, or \"Balder's Street\". A street in downtown Reykjavík is called Baldursgata (Baldur's Street).",
"title": "Attestations"
},
{
"paragraph_id": 43,
"text": "In Sweden there is a Baldersgatan (Balder's Street) in Stockholm. There is also Baldersnäs (Balder's isthmus), Baldersvik (Balder's bay), Balders udde (Balder's headland) and Baldersberg (Balder's mountain) at various places.",
"title": "Attestations"
},
{
"paragraph_id": 44,
"text": "Balder the Brave is a fictional character based on Baldr. He has appeared in comic books published by Marvel Comics as the half-brother of Thor, and son of Odin, ruler of the gods. Baldr is featured in a number of video games. In Ensemble Studios' 2002 video game Age of Mythology Baldr is one of nine minor gods Norse players can worship. Baldr (spelled Baldur in-game) is also the main antagonist in Santa Monica Studio's 2018 video game God of War. However, he differs greatly in the game from the Baldr depicted in Norse writings and traditional artistic depictions as he is much more aggressive, crude, and rugged in appearance.",
"title": "In popular culture"
}
] | Baldr is a god in Germanic mythology. In Norse mythology, he is a son of the god Odin and the goddess Frigg, and has numerous brothers, such as Thor and Váli. In wider Germanic mythology, the god was known in Old English as Bældæġ, and in Old High German as Balder, all ultimately stemming from the Proto-Germanic theonym *Balðraz. During the 12th century, Danish accounts by Saxo Grammaticus and other Danish Latin chroniclers recorded a euhemerized account of his story. Compiled in Iceland during the 13th century, but based on older Old Norse poetry, the Poetic Edda and the Prose Edda contain numerous references to the death of Baldr as both a great tragedy to the Æsir and a harbinger of Ragnarök. According to Gylfaginning, a book of Snorri Sturluson's Prose Edda, Baldr's wife is Nanna and their son is Forseti. Baldr had the greatest ship ever built, Hringhorni, and there is no place more beautiful than his hall, Breidablik. | 2001-09-02T09:31:29Z | 2023-12-03T12:24:39Z | [
"Template:Cite book",
"Template:Cite web",
"Template:Sister project links",
"Template:Norse mythology",
"Template:Short description",
"Template:Verse translation",
"Template:Redirect",
"Template:Use dmy dates",
"Template:Wikisource1911Enc",
"Template:ISBN",
"Template:Page needed",
"Template:IPA-non",
"Template:Lang",
"Template:Sfn",
"Template:Poemquote",
"Template:Reflist",
"Template:ASIN",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Baldr |
4,061 | Breidablik | Breiðablik (sometimes anglicised as Breithablik or Breidablik) is the home of Baldr in Nordic mythology.
The word Breiðablik has been variously translated as 'broad sheen', 'broad gleam', 'broad-gleaming' or 'the far-shining one'.
The Eddic poem Grímnismál describes Breiðablik as the fair home of Baldr:
In Snorri Sturluson's Gylfaginning, Breiðablik is described in a list of places in heaven, identified by some scholars as Asgard:
Later in the work, when Snorri describes Baldr, he gives another description, citing Grímnismál, though he does not name the poem:
The name Breiðablik has been linked to Baldr's attributes of light and beauty.
Similarities have been drawn between the description of Breiðablik in Grímnismál and Heorot in Beowulf, which are both free of 'baleful runes' (Old Norse: feicnstafi and Old English: fācenstafas respectively). In Beowulf, the lack of fācenstafas refers to the absence of crimes being committed, and therefore both halls have been proposed to be sanctuaries. | [
{
"paragraph_id": 0,
"text": "Breiðablik (sometimes anglicised as Breithablik or Breidablik) is the home of Baldr in Nordic mythology.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The word Breiðablik has been variously translated as 'broad sheen', 'Broad gleam', 'Broad-gleaming' or 'the far-shining one',",
"title": "Meaning"
},
{
"paragraph_id": 2,
"text": "The Eddic poem Grímnismál describes Breiðablik as the fair home of Baldr:",
"title": "Attestations"
},
{
"paragraph_id": 3,
"text": "In Snorri Sturluson's Gylfaginning, Breiðablik is described in a list of places in heaven, identified by some scholars as Asgard:",
"title": "Attestations"
},
{
"paragraph_id": 4,
"text": "Later in the work, when Snorri describes Baldr, he gives another description, citing Grímnismál, though he does not name the poem:",
"title": "Attestations"
},
{
"paragraph_id": 5,
"text": "The name of Breiðablik has been noted to link with Baldr's attributes of light and beauty.",
"title": "Interpretation and discussion"
},
{
"paragraph_id": 6,
"text": "Similarities have been drawn between the description of Breiðablik in Grímnismál and Heorot in Beowulf, which are both free of 'baleful runes' (Old Norse: feicnstafi and Old English: fācenstafas respectively). In Beowulf, the lack of fācenstafas refers to the absence of crimes being committed, and therefore both halls have been proposed to be sanctuaries.",
"title": "Interpretation and discussion"
}
] | Breiðablik is the home of Baldr in Nordic mythology. | 2001-08-17T19:03:15Z | 2023-12-30T08:25:49Z | [
"Template:Norse mythology",
"Template:Cite journal",
"Template:Short description",
"Template:Cite web",
"Template:Lang-non",
"Template:Lang-ang",
"Template:Reflist",
"Template:Cite book",
"Template:Lang",
"Template:Sfn",
"Template:Refbegin",
"Template:Refend",
"Template:About",
"Template:Unreferenced section"
] | https://en.wikipedia.org/wiki/Breidablik |
4,062 | Bilskirnir | Bilskirnir (Old Norse "lightning-crack") is the hall of the god Thor in Norse mythology. Here he lives with his wife Sif and their children. According to Grímnismál, the hall is the greatest of buildings and contains 540 rooms, located in Asgard, as are all the dwellings of the gods, in the kingdom of Þrúðheimr (or Þrúðvangar according to Gylfaginning and Ynglinga saga). | [
{
"paragraph_id": 0,
"text": "Bilskirnir (Old Norse \"lightning-crack\") is the hall of the god Thor in Norse mythology. Here he lives with his wife Sif and their children. According to Grímnismál, the hall is the greatest of buildings and contains 540 rooms, located in Asgard, as are all the dwellings of the gods, in the kingdom of Þrúðheimr (or Þrúðvangar according to Gylfaginning and Ynglinga saga).",
"title": ""
},
{
"paragraph_id": 1,
"text": "",
"title": "References"
}
] | Bilskirnir is the hall of the god Thor in Norse mythology. Here he lives with his wife Sif and their children. According to Grímnismál, the hall is the greatest of buildings and contains 540 rooms, located in Asgard, as are all the dwellings of the gods, in the kingdom of Þrúðheimr. | 2021-06-01T11:35:32Z | [
"Template:Short description",
"Template:Reflist",
"Template:ISBN",
"Template:Þórr",
"Template:Norse-myth-stub"
] | https://en.wikipedia.org/wiki/Bilskirnir |
|
4,063 | Brísingamen | In Norse mythology, Brísingamen (or Brísinga men) is the torc or necklace of the goddess Freyja. The name is an Old Norse compound brísinga-men whose second element is men "(ornamental) neck-ring (of precious metal), torc". The etymology of the first element is uncertain. It has been derived from Old Norse brísingr, a poetic term for "fire" or "amber" mentioned in the anonymous versified word-lists (þulur) appended to many manuscripts of the Prose Edda, making Brísingamen "gleaming torc", "sunny torc", or the like. However, Brísingr can also be an ethnonym, in which case Brísinga men is "torque of the Brísings"; the Old English parallel in Beowulf supports this derivation, though who the Brísings (Old Norse Brísingar) may have been remains unknown.
Brísingamen is referred to in the Anglo-Saxon epic Beowulf as Brosinga mene. The brief mention in Beowulf is as follows (trans. by Howell Chickering, 1977):
[S]ince Hama bore off to the shining city the Brosings' necklace, Gem-figured filigree. He gained the hatred Of Eormanric the Goth, chose eternal reward.
The Beowulf poet is clearly referring to the legends about Theoderic the Great. The Þiðrekssaga tells that the warrior Heime (Háma in Old English) takes sides against Ermanaric ("Eormanric"), king of the Goths, and has to flee his kingdom after robbing him; later in life, Heime enters a monastery and gives it all his stolen treasure. However, this saga makes no mention of the great necklace.
In the poem Þrymskviða of the Poetic Edda, Þrymr, the king of the jǫtnar, steals Thor's hammer, Mjölnir. Freyja lends Loki her falcon cloak to search for it; but upon returning, Loki tells Freyja that Þrymr has hidden the hammer and demanded to marry her in return. Freyja is so wrathful that all the Æsir’s halls beneath her are shaken and the necklace Brísingamen breaks off from her neck. Later Thor borrows Brísingamen when he dresses up as Freyja to go to the wedding at Jǫtunheimr.
Húsdrápa, a skaldic poem partially preserved in the Prose Edda, relates the story of the theft of Brísingamen by Loki. One day when Freyja wakes up and finds Brísingamen missing, she enlists the help of Heimdallr to help her search for it. Eventually they find the thief, who turns out to be Loki, who has transformed himself into a seal. Heimdallr turns into a seal as well and fights Loki. After a lengthy battle at Singasteinn, Heimdallr wins and returns Brísingamen to Freyja.
Snorri Sturluson quoted this old poem in Skáldskaparmál, saying that because of this legend Heimdallr is called "Seeker of Freyja's Necklace" (Skáldskaparmál, section 8) and Loki is called "Thief of Brísingamen" (Skáldskaparmál, section 16). A similar story appears in the later Sörla þáttr, where Heimdallr does not appear.
Sörla þáttr is a short story in the later and extended version of the Saga of Olaf Tryggvason in the manuscript of the Flateyjarbók, which was written and compiled by two Christian priests, Jon Thordson and Magnus Thorhalson, in the late 14th century. At the end of the story, the arrival of Christianity dissolves the old curse that traditionally was to endure until Ragnarök.
Freyja was a human in Asia and the favorite concubine of Odin, King of Asialand. When this woman wanted to buy a golden necklace (no name given) forged by four dwarves (named Dvalinn, Alfrik, Berlingr, and Grer), she offered them gold and silver, but they replied that they would only sell it to her if she would lie a night by each of them. She came home afterward with the necklace and kept silent as if nothing had happened. But a man called Loki somehow knew of it, and came to tell Odin. King Odin commanded Loki to steal the necklace, so Loki turned into a fly to sneak into Freyja's bower and stole it. When Freyja found her necklace missing, she came to ask King Odin about it. In exchange for its return, Odin ordered her to make two kings, each served by twenty kings, fight forever, unless some christened men brave enough dared to enter the battle and slay them. She agreed, and got the necklace back. Under the spell, King Högni and King Heðinn battled for one hundred and forty-three years; as soon as they fell, they had to stand up again and fight on. But in the end, the Christian lord Olaf Tryggvason, a man of great fate and luck, arrived with his christened men, and whoever was slain by a Christian stayed dead. Thus the pagan curse was finally dissolved by the arrival of Christianity. After that, the noble King Olaf went back to his realm.
The battle of Högni and Heðinn is recorded in several medieval sources, including the skaldic poem Ragnarsdrápa, Skáldskaparmál (section 49), and Gesta Danorum: King Högni's daughter, Hildr, is kidnapped by King Heðinn. When Högni comes to fight Heðinn on an island, Hildr comes to offer her father a necklace on behalf of Heðinn for peace; but the two kings still battle, and Hildr resurrects the fallen to make them fight until Ragnarök. None of these earlier sources mentions Freyja or King Olaf Tryggvason, the historical figure who Christianized Norway and Iceland in the 10th century.
A Völva was buried c. 1000 with considerable splendour in Hagebyhöga in Östergötland, Sweden. In addition to being buried with her wand, she had received great riches which included horses, a wagon and an Arabian bronze pitcher. There was also a silver pendant, which represents a woman with a broad necklace around her neck. This kind of necklace was only worn by the most prominent women during the Iron Age and some have interpreted it as Freyja's necklace Brísingamen. The pendant may represent Freyja herself.
Alan Garner wrote a children's fantasy novel called The Weirdstone of Brisingamen, published in 1960, about an enchanted teardrop bracelet.
Diana Paxson's novel Brisingamen features Freyja and her bracelet.
Black Phoenix Alchemy Lab has a perfumed oil scent named Brisingamen.
Freyja's necklace Brisingamen features prominently in Betsy Tobin's novel Iceland, where the necklace is seen to have significant protective powers.
Brisingamen features as a major item in Joel Rosenberg's Keepers of the Hidden Ways series of books. In it, seven jewels were created for the necklace by the dwarves and given to the Norse goddess; she eventually split them up into the seven separate jewels and hid them throughout the realm, as together they give their holder the power to shape the universe. The books' plot concerns discovering one of them and deciding what to do with the power it allows while avoiding Loki and other Norse characters.
In Christopher Paolini's The Inheritance Cycle, the word "brisingr" means fire. This is probably a distillation of the word brisinga.
Ursula Le Guin's short story Semley's Necklace, the first part of her novel Rocannon's World, is a retelling of the Brisingamen story on an alien planet.
Brisingamen is represented as a card in the Yu-Gi-Oh! Trading Card Game, "Nordic Relic Brisingamen".
Brisingamen was part of the lore of the MMORPG Ragnarok Online, in which it is ranked as a "God item". The game is heavily based on Norse mythology.
In the Firefly Online game, one of the planets of the Himinbjörg system (which features planets named after figures from Germanic mythology) is named Brisingamen. It is third from the star, and has moons named Freya, Beowulf, and Alberich.
The Brisingamen is an item that can be found and equipped in the video game Castlevania: Lament of Innocence.
In the French comics Freaks' Squeele, the character of Valkyrie accesses her costume change ability by touching a decorative torque necklace affixed to her forehead, named Brizingamen.
https://en.wikipedia.org/wiki/Br%C3%ADsingamen
4,064 | Borsuk–Ulam theorem | In mathematics, the Borsuk–Ulam theorem states that every continuous function from an n-sphere into Euclidean n-space maps some pair of antipodal points to the same point. Here, two points on a sphere are called antipodal if they are in exactly opposite directions from the sphere's center.
Formally: if $f : S^n \to \mathbb{R}^n$ is continuous, then there exists an $x \in S^n$ such that $f(-x) = f(x)$.
The case $n = 1$ can be illustrated by saying that there always exists a pair of opposite points on the Earth's equator with the same temperature. The same is true for any circle. This assumes the temperature varies continuously in space, which is, however, not always the case.
The case $n = 2$ is often illustrated by saying that at any moment, there is always a pair of antipodal points on the Earth's surface with equal temperatures and equal barometric pressures, assuming that both parameters vary continuously in space.
The Borsuk–Ulam theorem has several equivalent statements in terms of odd functions. Recall that $S^n$ is the n-sphere and $B^n$ is the n-ball:

- If $g : S^n \to \mathbb{R}^n$ is a continuous odd function, then there exists an $x \in S^n$ such that $g(x) = 0$.
- If $g : B^n \to \mathbb{R}^n$ is a continuous function which is odd on $S^{n-1}$ (the boundary of $B^n$), then there exists an $x \in B^n$ such that $g(x) = 0$.
According to Matoušek (2003, p. 25), the first historical mention of the statement of the Borsuk–Ulam theorem appears in Lyusternik & Shnirel'man (1930). The first proof was given by Karol Borsuk (1933), where the formulation of the problem was attributed to Stanisław Ulam. Since then, many alternative proofs have been found by various authors, as collected by Steinlein (1985).
The following statements are equivalent to the Borsuk–Ulam theorem.
A function $g$ is called odd (aka antipodal or antipode-preserving) if for every $x$: $g(-x) = -g(x)$.
The Borsuk–Ulam theorem is equivalent to the following statement: a continuous odd function from an n-sphere into Euclidean n-space has a zero. Proof: If the theorem is correct, then applying it to an odd function $g$ yields a point $x$ with $g(x) = g(-x) = -g(x)$, so $g(x) = 0$. Conversely, for every continuous $f$ the function $g(x) = f(x) - f(-x)$ is odd, and a zero of $g$ is a point at which $f(x) = f(-x)$.
Define a retraction as a function $h : S^n \to S^{n-1}$. The Borsuk–Ulam theorem is equivalent to the following claim: there is no continuous odd retraction.
Proof: If the theorem is correct, then every continuous odd function from $S^n$ must include 0 in its range. However, $0 \notin S^{n-1}$, so there cannot be a continuous odd function whose range is $S^{n-1}$.
Conversely, if it is incorrect, then there is a continuous odd function $g : S^n \to \mathbb{R}^n$ with no zeroes. Then we can construct another odd function $h : S^n \to S^{n-1}$ by

$$h(x) = \frac{g(x)}{|g(x)|};$$

since $g$ has no zeroes, $h$ is well-defined and continuous. Thus we have a continuous odd retraction.
The 1-dimensional case can easily be proved using the intermediate value theorem (IVT).
Let $g$ be the odd real-valued continuous function on a circle defined by $g(x) = f(x) - f(-x)$. Pick an arbitrary $x$. If $g(x) = 0$, then we are done. Otherwise, without loss of generality, $g(x) > 0$. But $g(-x) < 0$. Hence, by the IVT, there is a point $y$ between $x$ and $-x$ at which $g(y) = 0$.
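This argument translates directly into a numerical search. A minimal sketch in Python, bisecting the sign change of $g(\theta) = f(\theta) - f(\theta + \pi)$; the temperature-like function `f` is an arbitrary continuous example chosen for illustration, not taken from the source:

```python
import math

def f(theta):
    # An arbitrary continuous "temperature" on the circle (assumed example).
    return math.sin(theta) + 0.5 * math.cos(2 * theta) + 0.3 * math.sin(3 * theta + 1.0)

def g(theta):
    # Odd function whose zero marks antipodal points where f agrees.
    return f(theta) - f(theta + math.pi)

# g(pi) = -g(0), so [0, pi] contains a zero of g; bisect the sign change.
lo, hi = 0.0, math.pi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

theta = 0.5 * (lo + hi)
print(f"f({theta:.6f})      = {f(theta):.6f}")
print(f"f({theta:.6f} + pi) = {f(theta + math.pi):.6f}")
```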
Assume that $h : S^n \to S^{n-1}$ is an odd continuous function with $n > 2$ (the case $n = 1$ is treated above; the case $n = 2$ can be handled using basic covering theory). By passing to orbits under the antipodal action, we then get an induced continuous function $h' : \mathbb{RP}^n \to \mathbb{RP}^{n-1}$ between real projective spaces, which induces an isomorphism on fundamental groups. By the Hurewicz theorem, the induced ring homomorphism on cohomology with $\mathbb{F}_2$ coefficients (where $\mathbb{F}_2$ denotes the field with two elements),

$$\mathbb{F}_2[b]/\langle b^{n}\rangle = H^*(\mathbb{RP}^{n-1};\mathbb{F}_2) \to H^*(\mathbb{RP}^{n};\mathbb{F}_2) = \mathbb{F}_2[a]/\langle a^{n+1}\rangle,$$

sends $b$ to $a$. But then we get that $b^n = 0$ is sent to $a^n \neq 0$, a contradiction.
One can also show the stronger statement that any odd map $S^{n-1} \to S^{n-1}$ has odd degree and then deduce the theorem from this result.
The Borsuk–Ulam theorem can be proved from Tucker's lemma.
Let $g : S^n \to \mathbb{R}^n$ be a continuous odd function. Because $g$ is continuous on a compact domain, it is uniformly continuous. Therefore, for every $\epsilon > 0$, there is a $\delta > 0$ such that, for every two points of $S^n$ which are within $\delta$ of each other, their images under $g$ are within $\epsilon$ of each other.
Define a triangulation of $S^n$ with edges of length at most $\delta$. Label each vertex $v$ of the triangulation with a label $l(v) \in \{\pm 1, \pm 2, \ldots, \pm n\}$ in the following way: let $k$ be the index of a coordinate of $g(v)$ with the largest absolute value, and let the sign of the label be the sign of that coordinate, so that $l(v) = k \cdot \operatorname{sign}(g(v)_k)$.
Because $g$ is odd, the labeling is also odd: $l(-v) = -l(v)$. Hence, by Tucker's lemma, there are two adjacent vertices $u, v$ with opposite labels. Assume w.l.o.g. that the labels are $l(u) = 1$, $l(v) = -1$. By the definition of $l$, this means that in both $g(u)$ and $g(v)$, coordinate #1 is the largest coordinate: in $g(u)$ this coordinate is positive while in $g(v)$ it is negative. By the construction of the triangulation, the distance between $g(u)$ and $g(v)$ is at most $\epsilon$, so in particular $|g(u)_1 - g(v)_1| = |g(u)_1| + |g(v)_1| \leq \epsilon$ (since $g(u)_1$ and $g(v)_1$ have opposite signs) and so $|g(u)_1| \leq \epsilon$. But since the largest coordinate of $g(u)$ is coordinate #1, this means that $|g(u)_k| \leq \epsilon$ for each $1 \leq k \leq n$. So $|g(u)| \leq c_n \epsilon$, where $c_n$ is some constant depending on $n$ and the chosen norm $|\cdot|$.
The above is true for every $\epsilon > 0$; since $S^n$ is compact, there must hence be a point $u$ at which $|g(u)| = 0$.
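The labeling step above is easy to make concrete. A minimal sketch, assuming a particular odd map $g$ (the linear example is mine, chosen only because its oddness is obvious); it labels sample points on $S^2$ and checks the property $l(-v) = -l(v)$:

```python
import numpy as np

def g(v):
    # An odd map from S^2 into R^2 (assumed example): drop the last coordinate.
    return v[:-1]

def label(v):
    # Tucker-style label: index (from 1) of the coordinate of g(v) with the
    # largest absolute value, signed by the sign of that coordinate.
    y = g(v)
    k = int(np.argmax(np.abs(y)))
    return (k + 1) * (1 if y[k] > 0 else -1)

rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)          # project onto the sphere
    assert label(-v) == -label(v)   # the labeling inherits g's oddness
    print(np.round(v, 3), "->", label(v))
```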
Above we showed how to prove the Borsuk–Ulam theorem from Tucker's lemma. The converse is also true: it is possible to prove Tucker's lemma from the Borsuk–Ulam theorem. Therefore, these two theorems are equivalent. There are several fixed-point theorems which come in three equivalent variants: an algebraic topology variant, a combinatorial variant and a set-covering variant. Each variant can be proved separately using totally different arguments, but each variant can also be reduced to the other variants in its row. Additionally, each result in the top row can be deduced from the one below it in the same column:

Algebraic topology | Combinatorics | Set covering
Brouwer fixed-point theorem | Sperner's lemma | Knaster–Kuratowski–Mazurkiewicz lemma
Borsuk–Ulam theorem | Tucker's lemma | Lusternik–Schnirelmann theorem
https://en.wikipedia.org/wiki/Borsuk%E2%80%93Ulam_theorem
4,067 | Bragi | Bragi (/ˈbrɑːɡi/; Old Norse: [ˈbrɑɣe]) is the skaldic god of poetry in Norse mythology.
The theonym Bragi probably stems from the masculine noun bragr, which can be translated in Old Norse as 'poetry' (cf. Icelandic bragur 'poem, melody, wise') or as 'the first, noblest' (cf. poetic Old Norse bragnar 'chiefs, men', bragningr 'king'). It is unclear whether the theonym semantically derives from the first meaning or the second.
A connection has been also suggested with the Old Norse bragarfull, the cup drunk in solemn occasions with the taking of vows. The word is usually taken to semantically derive from the second meaning of bragr ('first one, noblest'). A relation with the Old English term brego ('lord, prince') remains uncertain.
Bragi regularly appears as a personal name in Old Norse and Old Swedish sources, which according to linguist Jan de Vries might indicate the secondary character of the god's name.
Snorri Sturluson writes in the Gylfaginning after describing Odin, Thor, and Baldr:
One is called Bragi: he is renowned for wisdom, and most of all for fluency of speech and skill with words. He knows most of skaldship, and after him skaldship is called bragr, and from his name that one is called bragr-man or -woman, who possesses eloquence surpassing others, of women or of men. His wife is Iðunn.
In Skáldskaparmál Snorri writes:
How should one periphrase Bragi? By calling him husband of Iðunn, first maker of poetry, and the long-bearded god (after his name, a man who has a great beard is called Beard-Bragi), and son of Odin.
That Bragi is Odin's son is clearly mentioned only here and in some versions of a list of the sons of Odin (see Sons of Odin). But "wish-son" in stanza 16 of the Lokasenna could mean "Odin's son" and is translated by Hollander as Odin's kin. Bragi's mother is possibly the giantess Gunnlod. If Bragi's mother is Frigg, then Frigg is somewhat dismissive of Bragi in the Lokasenna in stanza 27 when Frigg complains that if she had a son in Ægir's hall as brave as Baldr then Loki would have to fight for his life.
In that poem Bragi at first forbids Loki to enter the hall but is overruled by Odin. Loki then gives a greeting to all gods and goddesses who are in the hall save to Bragi. Bragi generously offers his sword, horse, and an arm ring as a peace gift, but Loki only responds by accusing Bragi of cowardice, of being the most afraid to fight of any of the Æsir and Elves within the hall. Bragi responds that if they were outside the hall, he would have Loki's head, but Loki only repeats the accusation. When Bragi's wife Iðunn attempts to calm Bragi, Loki accuses her of embracing her brother's slayer, a reference to matters that have not survived. It may be that Bragi had slain Iðunn's brother.
A passage in the Poetic Edda poem Sigrdrífumál describes runes being graven on the sun, on the ear of one of the sun-horses and on the hoofs of the other, on Sleipnir's teeth, on bear's paw, on eagle's beak, on wolf's claw, and on several other things including on Bragi's tongue. Then the runes are shaved off and the shavings are mixed with mead and sent abroad so that Æsir have some, Elves have some, Vanir have some, and Men have some, these being speech runes and birth runes, ale runes, and magic runes. The meaning of this is obscure.
The first part of Snorri Sturluson's Skáldskaparmál is a dialogue between Ægir and Bragi about the nature of poetry, particularly skaldic poetry. Bragi tells the origin of the mead of poetry from the blood of Kvasir and how Odin obtained this mead. He then goes on to discuss various poetic metaphors known as kennings.
Snorri Sturluson clearly distinguishes the god Bragi from the mortal skald Bragi Boddason, whom he often mentions separately. The appearance of Bragi in the Lokasenna indicates that if these two Bragis were originally the same, they have become separated for that author also, or that chronology has become very muddled and Bragi Boddason has been relocated to mythological time. Compare the appearance of the Welsh Taliesin in the second branch of the Mabinogi. Legendary chronology sometimes does become muddled. Whether Bragi the god originally arose as a deified version of Bragi Boddason was much debated in the 19th century, especially by the scholars Eugen Mogk and Sophus Bugge. The debate remains undecided.
In the poem Eiríksmál Odin, in Valhalla, hears the coming of the dead Norwegian king Eric Bloodaxe and his host, and bids the heroes Sigmund and Sinfjötli rise to greet him. Bragi is then mentioned, questioning how Odin knows that it is Eric and why Odin has let such a king die. In the poem Hákonarmál, Hákon the Good is taken to Valhalla by the valkyrie Göndul and Odin sends Hermóðr and Bragi to greet him. In these poems Bragi could be either a god or a dead hero in Valhalla. Attempting to decide is further confused because Hermóðr also seems to be sometimes the name of a god and sometimes the name of a hero. That Bragi was also the first to speak to Loki in the Lokasenna as Loki attempted to enter the hall might be a parallel. It might have been useful and customary that a man of great eloquence and versed in poetry should greet those entering a hall. He is also depicted in tenth-century court poetry as helping to prepare Valhalla for new arrivals and welcoming the kings who have been slain in battle to the hall of Odin.
In the Prose Edda Snorri Sturluson quotes many stanzas attributed to Bragi Boddason the old (Bragi Boddason inn gamli), a Norwegian court poet who served several Swedish kings, Ragnar Lodbrok, Östen Beli and Björn at Hauge who reigned in the first half of the 9th century. This Bragi was reckoned as the first skaldic poet, and was certainly the earliest skaldic poet then remembered by name whose verse survived in memory.
Snorri especially quotes passages from Bragi's Ragnarsdrápa, a poem supposedly composed in honor of the famous legendary Viking Ragnar Lodbrok ('Hairy-breeches') describing the images on a decorated shield which Ragnar had given to Bragi. The images included Thor's fishing for Jörmungandr, Gefjun's ploughing of Zealand from the soil of Sweden, the attack of Hamdir and Sorli against King Jörmunrekk, and the never-ending battle between Hedin and Högni.
Bragi son of Hálfdan the Old is mentioned only in the Skáldskaparmál. This Bragi is the sixth of the second of two groups of nine sons fathered by King Hálfdan the Old on Alvig the Wise, daughter of King Eymund of Hólmgard. This second group of sons are all eponymous ancestors of legendary families of the north. Snorri says:
Bragi, from whom the Bragnings are sprung (that is the race of Hálfdan the Generous).
Of the Bragnings as a race and of Hálfdan the Generous nothing else is known. However, Bragning is often, like some others of these dynastic names, used in poetry as a general word for 'king' or 'ruler'.
In the eddic poem Helgakviða Hundingsbana II, Bragi Högnason, his brother Dag, and his sister Sigrún were children of Högne, the king of East Götaland. The poem relates how Sigmund's son Helgi Hundingsbane agreed to take Sigrún daughter of Högni as his wife against her unwilling betrothal to Hodbrodd son of Granmar the king of Södermanland. In the subsequent battle of Frekastein (probably one of the 300 hill forts of Södermanland, as stein meant "hill fort") against Högni and Granmar, all the chieftains on Granmar's side are slain, including Bragi, except for Bragi's brother Dag.
In the 2002 Ensemble Studios game Age of Mythology, Bragi is one of nine minor gods Norse players can worship.
https://en.wikipedia.org/wiki/Bragi
4,068 | Blaise Pascal | Blaise Pascal (/pæˈskæl/ pass-KAL, also UK: /-ˈskɑːl, ˈpæskəl, -skæl/ -KAHL, PASS-kəl, -kal, US: /pɑːˈskɑːl/ pahs-KAHL; French: [blɛz paskal]; 19 June 1623 – 19 August 1662) was a French mathematician, physicist, inventor, philosopher, and Catholic writer.
Pascal was a child prodigy who was educated by his father, a tax collector in Rouen. His earliest mathematical work was on conic sections; he wrote a significant treatise on the subject of projective geometry at the age of 16. He later corresponded with Pierre de Fermat on probability theory, strongly influencing the development of modern economics and social science. In 1642, while still a teenager, he started some pioneering work on calculating machines (called Pascal's calculators and later Pascalines), establishing him as one of the first two inventors of the mechanical calculator.
Like his contemporary René Descartes, Pascal was also a pioneer in the natural and applied sciences. Pascal wrote in defense of the scientific method and produced several controversial results. He made important contributions to the study of fluids, and clarified the concepts of pressure and vacuum by generalising the work of Evangelista Torricelli. Following Torricelli and Galileo Galilei, in 1647 he rebutted the likes of Aristotle and Descartes, who insisted that nature abhors a vacuum.
In 1646, he and his sister Jacqueline identified with the religious movement within Catholicism known by its detractors as Jansenism. Following a religious experience in late 1654, he began writing influential works on philosophy and theology. His two most famous works date from this period: the Lettres provinciales and the Pensées, the former set in the conflict between Jansenists and Jesuits. The latter contains Pascal's wager, known in the original as the Discourse on the Machine, a fideistic probabilistic argument for God's existence. In that year, he also wrote an important treatise on the arithmetical triangle. Between 1658 and 1659, he wrote on the cycloid and its use in calculating the volume of solids.
Throughout his life, Pascal was in frail health, especially after the age of 18; he died just two months after his 39th birthday.
Pascal was born in Clermont-Ferrand, which is in France's Auvergne region, by the Massif Central. He lost his mother, Antoinette Begon, at the age of three. His father, Étienne Pascal (1588–1651), who also had an interest in science and mathematics, was a local judge and member of the "Noblesse de Robe". Pascal had two sisters, the younger Jacqueline and the elder Gilberte.
In 1631, five years after the death of his wife, Étienne Pascal moved with his children to Paris. The newly arrived family soon hired Louise Delfault, a maid who eventually became a key member of the family. Étienne, who never remarried, decided that he alone would educate his children, for they all showed extraordinary intellectual ability, particularly his son Blaise. The young Pascal showed an amazing aptitude for mathematics and science.
Particularly of interest to Pascal was a work of Desargues on conic sections. Following Desargues' thinking, the 16-year-old Pascal produced, as a means of proof, a short treatise on what was called the Mystic Hexagram, Essai pour les coniques (Essay on Conics) and sent it — his first serious work of mathematics — to Père Mersenne in Paris; it is known still today as Pascal's theorem. It states that if a hexagon is inscribed in a circle (or conic) then the three intersection points of opposite sides lie on a line (called the Pascal line).
Pascal's work was so precocious that René Descartes was convinced that Pascal's father had written it. When assured by Mersenne that it was, indeed, the product of the son and not the father, Descartes dismissed it with a sniff: "I do not find it strange that he has offered demonstrations about conics more appropriate than those of the ancients," adding, "but other matters related to this subject can be proposed that would scarcely occur to a 16-year-old child."
In France at that time offices and positions could be—and were—bought and sold. In 1631, Étienne sold his position as second president of the Cour des Aides for 65,665 livres. The money was invested in a government bond which provided, if not a lavish, then certainly a comfortable income which allowed the Pascal family to move to, and enjoy, Paris, but in 1638 Cardinal Richelieu, desperate for money to carry on the Thirty Years' War, defaulted on the government's bonds. Suddenly Étienne Pascal's worth had dropped from nearly 66,000 livres to less than 7,300.
Like so many others, Étienne was eventually forced to flee Paris because of his opposition to the fiscal policies of Richelieu, leaving his three children in the care of his neighbour Madame Sainctot, a great beauty with an infamous past who kept one of the most glittering and intellectual salons in all France. It was only when Jacqueline performed well in a children's play with Richelieu in attendance that Étienne was pardoned. In time, Étienne was back in good graces with the Cardinal and in 1639 had been appointed the king's commissioner of taxes in the city of Rouen—a city whose tax records, thanks to uprisings, were in utter chaos.
In 1642, in an effort to ease his father's endless, exhausting calculations and recalculations of taxes owed and paid (into which work the young Pascal had been recruited), Pascal, not yet 19, constructed a mechanical calculator capable of addition and subtraction, called Pascal's calculator or the Pascaline. Of the eight Pascalines known to have survived, four are held by the Musée des Arts et Métiers in Paris and one more by the Zwinger museum in Dresden, Germany.
Although these machines are pioneering forerunners to a further 400 years of development of mechanical methods of calculation, and in a sense to the later field of computer engineering, the calculator failed to be a great commercial success. Partly because it was still quite cumbersome to use in practice, but probably primarily because it was extraordinarily expensive, the Pascaline became little more than a toy, and a status symbol, for the very rich both in France and elsewhere in Europe. Pascal continued to make improvements to his design through the next decade; he refers to some 50 machines that were built to his design, and he built 20 finished machines over those ten years.
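A design point worth noting is that the Pascaline's wheels could turn in only one direction, so it added directly and performed subtraction by adding nines' complements. A toy sketch of that trick (the five-digit width is an arbitrary assumption; real Pascalines came in several sizes and, for currency, used mixed bases):

```python
DIGITS = 5                       # assumed width of our toy accumulator

def add(acc, addend):
    # Wheels only turn forward: plain addition with carries, modulo 10^DIGITS.
    return (acc + addend) % 10**DIGITS

def nines_complement(x):
    return (10**DIGITS - 1) - x

def subtract(a, b):
    # a - b (for a >= b) is computed as a + complement(b) + 1; the dropped
    # overflow carry plays the role of the end-around carry.
    return add(add(a, nines_complement(b)), 1)

print(add(12345, 678))        # 13023
print(subtract(90125, 312))   # 89813
```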
Pascal's development of probability theory was his most influential contribution to mathematics. Originally applied to gambling, today it is extremely important in economics, especially in actuarial science. John Ross writes, "Probability theory and the discoveries following it changed the way we regard uncertainty, risk, decision-making, and an individual's and society's ability to influence the course of future events." However, Pascal and Fermat, though doing important early work in probability theory, did not develop the field very far. Christiaan Huygens, learning of the subject from the correspondence of Pascal and Fermat, wrote the first book on the subject. Later figures who continued the development of the theory include Abraham de Moivre and Pierre-Simon Laplace.
In 1654, prompted by his friend the Chevalier de Méré, he corresponded with Pierre de Fermat on the subject of gambling problems, and from that collaboration was born the mathematical theory of probabilities. The specific problem was that of two players who want to finish a game early and, given the current circumstances of the game, want to divide the stakes fairly, based on the chance each has of winning the game from that point. From this discussion, the notion of expected value was introduced. Pascal later (in the Pensées) used a probabilistic argument, Pascal's wager, to justify belief in God and a virtuous life. The work done by Fermat and Pascal into the calculus of probabilities laid important groundwork for Leibniz' formulation of the calculus.
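This "problem of points" has a clean recursive solution: if player A still needs $a$ round wins and player B needs $b$, and each round is a fair 50/50, then A's fair share of the pot is A's probability of winning from that position. A minimal sketch under those assumptions, using a standard textbook example of an interrupted game:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_win(a, b):
    """Probability that player A wins when A needs `a` more round wins
    and B needs `b`, with each round a fair coin flip."""
    if a == 0:
        return 1.0
    if b == 0:
        return 0.0
    return 0.5 * (p_win(a - 1, b) + p_win(a, b - 1))

# Interrupted game where A needs 1 more win and B needs 2:
pot = 100
print(pot * p_win(1, 2), ":", pot * p_win(2, 1))   # 75.0 : 25.0
```

Read forward, the same recursion is exactly an expected-value computation, which is why this correspondence is credited with introducing that notion.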
Pascal's Traité du triangle arithmétique, written in 1654 but published posthumously in 1665, described a convenient tabular presentation for binomial coefficients which he called the arithmetical triangle, but is now called Pascal's triangle. The triangle can also be represented as a rectangular table in which each entry is the sum of the entry above it and the entry to its left:

1   1   1   1   1
1   2   3   4   5
1   3   6  10  15
1   4  10  20  35
1   5  15  35  70
He defined the numbers in the triangle by recursion: call the number in the $(m + 1)$th row and $(n + 1)$th column $t_{mn}$. Then $t_{mn} = t_{m-1,n} + t_{m,n-1}$ for $m = 0, 1, 2, \ldots$ and $n = 0, 1, 2, \ldots$, with boundary conditions $t_{m,-1} = 0$ and $t_{-1,n} = 0$ for $m = 1, 2, 3, \ldots$ and $n = 1, 2, 3, \ldots$, and generator $t_{00} = 1$. Pascal concluded with a proof of the closed form $t_{mn} = \binom{m+n}{n} = \frac{(m+n)!}{m!\,n!}$.
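A few lines of Python reproduce the table directly from this recursion; the zero boundary conditions become "treat missing neighbours as 0", and the result satisfies $t_{mn} = \binom{m+n}{n}$:

```python
from math import comb

def arithmetical_triangle(rows, cols):
    # t[m][n] = t[m-1][n] + t[m][n-1], generator t[0][0] = 1,
    # with entries outside the table treated as 0.
    t = [[0] * cols for _ in range(rows)]
    t[0][0] = 1
    for m in range(rows):
        for n in range(cols):
            if (m, n) == (0, 0):
                continue
            above = t[m - 1][n] if m > 0 else 0
            left = t[m][n - 1] if n > 0 else 0
            t[m][n] = above + left
    return t

t = arithmetical_triangle(5, 5)
for row in t:
    print(row)
assert all(t[m][n] == comb(m + n, n) for m in range(5) for n in range(5))
```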
In the same treatise, Pascal gave an explicit statement of the principle of mathematical induction. In 1654, he proved Pascal's identity relating the sums of the p-th powers of the first n positive integers for p = 0, 1, 2, ..., k.
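One standard way to write that identity, with $S_p(n) = \sum_{i=1}^n i^p$, telescopes $(i+1)^{k+1} - i^{k+1}$ into $(n+1)^{k+1} - 1 = \sum_{p=0}^{k} \binom{k+1}{p} S_p(n)$, so each power sum can be solved from the lower ones. A quick numerical check of this form (my rendering of the identity, not a quotation of Pascal's notation):

```python
from math import comb

def power_sum(n, p):
    return sum(i**p for i in range(1, n + 1))

# (n+1)^(k+1) - 1 == sum_{p=0}^{k} C(k+1, p) * S_p(n)
for n in (3, 7, 20):
    for k in range(1, 6):
        lhs = (n + 1) ** (k + 1) - 1
        rhs = sum(comb(k + 1, p) * power_sum(n, p) for p in range(k + 1))
        assert lhs == rhs
print("identity holds for the sampled n and k")
```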
That same year, Pascal had a religious experience, and mostly gave up work in mathematics.
In 1658, Pascal, while suffering from a toothache, began considering several problems concerning the cycloid. His toothache disappeared, and he took this as a heavenly sign to proceed with his research. Eight days later he had completed his essay and, to publicize the results, proposed a contest.
Pascal proposed three questions relating to the center of gravity, area, and volume of the cycloid, with the winner or winners to receive prizes of 20 and 40 Spanish doubloons. Pascal, Gilles de Roberval, and Pierre de Carcavi were the judges, and neither of the two submissions (by John Wallis and Antoine de Lalouvère) was judged to be adequate. While the contest was ongoing, Christopher Wren sent Pascal a proposal for a proof of the rectification of the cycloid; Roberval promptly claimed that he had known of the proof for years. Wallis published Wren's proof (crediting Wren) in his Tractatus Duo, giving Wren priority for the first published proof.
Pascal contributed to several fields in physics, most notably fluid mechanics and pressure. In honour of his scientific contributions, his name has been given to the SI unit of pressure (the pascal) and to Pascal's law, an important principle of hydrostatics. He introduced a primitive form of roulette and the roulette wheel in his search for a perpetual motion machine.
His work in the fields of hydrodynamics and hydrostatics centered on the principles of hydraulic fluids. His inventions include the hydraulic press (using hydraulic pressure to multiply force) and the syringe. He proved that hydrostatic pressure depends not on the weight of the fluid but on the elevation difference. He demonstrated this principle by attaching a thin tube to a barrel full of water and filling the tube with water up to the level of the third floor of a building. This caused the barrel to leak, in what became known as Pascal's barrel experiment.
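In modern terms (standard hydrostatics rather than Pascal's own formulation), the barrel experiment reflects the fact that pressure at a given depth depends only on the height of fluid above it:

```latex
p = p_0 + \rho g h ,
\qquad
\rho g h \approx \left(1000\ \tfrac{\mathrm{kg}}{\mathrm{m}^3}\right)
\left(9.8\ \tfrac{\mathrm{m}}{\mathrm{s}^2}\right)\left(10\ \mathrm{m}\right)
\approx 10^{5}\ \mathrm{Pa} \approx 1\ \mathrm{atm} .
```

A water column of roughly third-storey height (about 10 m, consistent with the description above) therefore adds about one atmosphere of pressure at the barrel, no matter how thin the tube, which is why the barrel leaks.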
By 1647, Pascal had learned of Evangelista Torricelli's experimentation with barometers. Having replicated an experiment that involved placing a tube filled with mercury upside down in a bowl of mercury, Pascal questioned what force kept some mercury in the tube and what filled the space above the mercury in the tube. At the time, most scientists, including Descartes, believed in a plenum, i.e., that some invisible matter filled all of space rather than a vacuum: "Nature abhors a vacuum." This was based on the Aristotelian notion that everything in motion was a substance moved by another substance. Furthermore, light passed through the glass tube, suggesting that a substance such as aether, rather than a vacuum, filled the space.
Following more experimentation in this vein, in 1647 Pascal produced Expériences nouvelles touchant le vide ("New experiments with the vacuum"), which detailed basic rules describing to what degree various liquids could be supported by air pressure. It also provided reasons why it was indeed a vacuum above the column of liquid in a barometer tube. This work was followed by Récit de la grande expérience de l'équilibre des liqueurs ("Account of the great experiment on equilibrium in liquids"), published in 1648.
Torricelli's barometer experiment showed that air pressure is equal to the weight of a column of mercury about 30 inches high. If air has a finite weight, Earth's atmosphere must have a maximum height. Pascal reasoned that, if so, air pressure on a high mountain must be less than at a lower altitude. He lived near the Puy de Dôme mountain, 4,790 feet (1,460 m) tall, but his health was poor, so he could not climb it. On 19 September 1648, after many months of Pascal's friendly but insistent prodding, Florin Périer, husband of Pascal's elder sister Gilberte, was finally able to carry out the fact-finding mission vital to Pascal's theory. The account, written by Périer, reads:
The weather was chancy last Saturday...[but] around five o'clock that morning...the Puy-de-Dôme was visible...so I decided to give it a try. Several important people of the city of Clermont had asked me to let them know when I would make the ascent...I was delighted to have them with me in this great work...
...at eight o'clock we met in the gardens of the Minim Fathers, which has the lowest elevation in town....First I poured 16 pounds of quicksilver...into a vessel...then took several glass tubes...each four feet long and hermetically sealed at one end and opened at the other...then placed them in the vessel [of quicksilver]...I found the quicksilver stood at 26 inches and 3½ lines above the quicksilver in the vessel...I repeated the experiment two more times while standing in the same spot...[they] produced the same result each time...

I attached one of the tubes to the vessel and marked the height of the quicksilver and...asked Father Chastin, one of the Minim Brothers...to watch if any changes should occur through the day...Taking the other tube and a portion of the quicksilver...I walked to the top of Puy-de-Dôme, about 500 fathoms higher than the monastery, where upon experiment...found that the quicksilver reached a height of only 23 inches and 2 lines...I repeated the experiment five times with care...each at different points on the summit...found the same height of quicksilver...in each case...
Pascal replicated the experiment in Paris by carrying a barometer up to the top of the bell tower at the church of Saint-Jacques-de-la-Boucherie, a height of about 50 metres. The mercury dropped two lines. He found with both experiments that an ascent of 7 fathoms lowers the mercury by half a line. Note: Pascal used pouce and ligne for "inch" and "line", and toise for "fathom".
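Périer's readings agree with the rate Pascal quotes, as a quick unit check shows (12 lines to the inch, per the note above; the conversion below is my own arithmetic):

```latex
26\ \text{in}\ 3\tfrac{1}{2}\ \text{lines} = 315\tfrac{1}{2}\ \text{lines},
\qquad
23\ \text{in}\ 2\ \text{lines} = 278\ \text{lines},
\qquad
\Delta = 37\tfrac{1}{2}\ \text{lines};
```
```latex
\frac{500\ \text{fathoms}}{37.5\ \text{lines}} \approx 13.3\ \tfrac{\text{fathoms}}{\text{line}}
\approx 6.7\ \tfrac{\text{fathoms}}{\text{half line}} ,
```

which is close to the 7 fathoms per half line that Pascal reports.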
In a reply to Étienne Noël, who believed in the plenum, Pascal wrote, echoing contemporary notions of science and falsifiability: "In order to show that a hypothesis is evident, it does not suffice that all the phenomena follow from it; instead, if it leads to something contrary to a single one of the phenomena, that suffices to establish its falsity."
Blaise Pascal Chairs are given to outstanding international scientists to conduct their research in the Île-de-France region.
In the winter of 1646, Pascal's 58-year-old father broke his hip when he slipped and fell on an icy street of Rouen; given the man's age and the state of medicine in the 17th century, a broken hip could be a very serious condition, perhaps even fatal. Rouen was home to two of the finest doctors in France, Deslandes and de la Bouteillerie. The elder Pascal "would not let anyone other than these men attend him...It was a good choice, for the old man survived and was able to walk again..." However, treatment and rehabilitation took three months, during which time La Bouteillerie and Deslandes became regular visitors.
Both men were followers of Jean Guillebert, proponent of a splinter group from Catholic teaching known as Jansenism. This still fairly small sect was making surprising inroads into the French Catholic community at that time. It espoused rigorous Augustinianism. Blaise spoke with the doctors frequently and, after their successful treatment of his father, borrowed from them works by Jansenist authors. In this period, Pascal experienced a sort of "first conversion" and began to write on theological subjects in the course of the following year.
Pascal fell away from this initial religious engagement and experienced a few years of what some biographers have called his "worldly period" (1648–54). His father died in 1651 and left his inheritance to Pascal and his sister Jacqueline, for whom Pascal acted as conservator. Jacqueline announced that she would soon become a postulant in the Jansenist convent of Port-Royal. Pascal was deeply affected and very sad, not because of her choice, but because of his chronic poor health; he needed her just as she had needed him.
Suddenly there was war in the Pascal household. Blaise pleaded with Jacqueline not to leave, but she was adamant. He commanded her to stay, but that didn't work, either. At the heart of this was...Blaise's fear of abandonment...if Jacqueline entered Port-Royal, she would have to leave her inheritance behind...[but] nothing would change her mind.
By the end of October 1651, a truce had been reached between brother and sister. In return for a healthy annual stipend, Jacqueline signed over her part of the inheritance to her brother. Gilberte had already been given her inheritance in the form of a dowry. In early January 1652, Jacqueline left for Port-Royal. On that day, according to Gilberte concerning her brother, "He retired very sadly to his rooms without seeing Jacqueline, who was waiting in the little parlor..." In early June 1653, after what must have seemed like endless badgering from Jacqueline, Pascal formally signed over the whole of his sister's inheritance to Port-Royal, which, to him, "had begun to smell like a cult." With two-thirds of his father's estate now gone, the 29-year-old Pascal was consigned to genteel poverty.
For a while, Pascal pursued the life of a bachelor. During visits to his sister at Port-Royal in 1654, he displayed contempt for affairs of the world but was not drawn to God.
On the night of 23 November 1654, between 10:30 and 12:30, Pascal had an intense religious experience and immediately wrote a brief note to himself which began: "Fire. God of Abraham, God of Isaac, God of Jacob, not of the philosophers and the scholars..." and concluded by quoting Psalm 119:16: "I will not forget thy word. Amen." He seems to have carefully sewn this document into his coat and always transferred it when he changed clothes; a servant discovered it only by chance after his death. This piece is now known as the Memorial. The story of a carriage accident as having led to the experience described in the Memorial is disputed by some scholars. His belief and religious commitment revitalized, Pascal visited the older of two convents at Port-Royal for a two-week retreat in January 1655. For the next four years, he regularly travelled between Port-Royal and Paris. It was at this point immediately after his conversion when he began writing his first major literary work on religion, the Provincial Letters.
In literature, Pascal is regarded as one of the most important authors of the French Classical Period and is read today as one of the greatest masters of French prose. His use of satire and wit influenced later polemicists.
Beginning in 1656–57, Pascal published his memorable attack on casuistry, a popular ethical method used by Catholic thinkers in the early modern period (especially the Jesuits, and in particular Antonio Escobar). Pascal denounced casuistry as the mere use of complex reasoning to justify moral laxity and all sorts of sins. The 18-letter series was published between 1656 and 1657 under the pseudonym Louis de Montalte and incensed Louis XIV. The king ordered that the book be shredded and burnt in 1660. In 1661, in the midst of the formulary controversy, the Jansenist school at Port-Royal was condemned and closed down; those involved with the school had to sign a 1656 papal bull condemning the teachings of Jansen as heretical. The final letter from Pascal, in 1657, had defied Alexander VII himself. Even Pope Alexander, while publicly opposing the letters, was nonetheless persuaded by Pascal's arguments.
Aside from their religious influence, the Provincial Letters were popular as a literary work. Pascal's use of humor, mockery, and vicious satire in his arguments made the letters ripe for public consumption, and influenced the prose of later French writers like Voltaire and Jean-Jacques Rousseau.
It is in the Provincial Letters that Pascal made his oft-quoted apology for writing a long letter, as he had not had time to write a shorter one. From Letter XVI, as translated by Thomas M'Crie: 'Reverend fathers, my letters were not wont either to be so prolix, or to follow so closely on one another. Want of time must plead my excuse for both of these faults. The present letter is a very long one, simply because I had no leisure to make it shorter.'
Charles Perrault wrote of the Letters: "Everything is there—purity of language, nobility of thought, solidity in reasoning, finesse in raillery, and throughout an agrément not to be found anywhere else."
Pascal is arguably best known as a philosopher, considered by some the second greatest French mind behind René Descartes. He was a dualist following Descartes. However, he is also remembered for opposing both the rationalism of the likes of Descartes and the main countervailing epistemology, empiricism, preferring fideism instead.
He cared above all about the philosophy of religion. Pascalian theology has grown out of his perspective that humans are, according to Wood, "born into a duplicitous world that shapes us into duplicitous subjects and so we find it easy to reject God continually and deceive ourselves about our own sinfulness".
Pascal's major contribution to the philosophy of mathematics came with his De l'Esprit géométrique ("Of the Geometrical Spirit"), originally written as a preface to a geometry textbook for one of the famous Petites écoles de Port-Royal ("Little Schools of Port-Royal"). The work was unpublished until over a century after his death. Here, Pascal looked into the issue of discovering truths, arguing that the ideal of such a method would be to found all propositions on already established truths. At the same time, however, he claimed this was impossible because such established truths would require other truths to back them up—first principles, therefore, cannot be reached. Based on this, Pascal argued that the procedure used in geometry was as perfect as possible, with certain principles assumed and other propositions developed from them. Nevertheless, there was no way to know the assumed principles to be true.
Pascal also used De l'Esprit géométrique to develop a theory of definition. He distinguished between definitions which are conventional labels defined by the writer and definitions which are within the language and understood by everyone because they naturally designate their referent. The second type would be characteristic of the philosophy of essentialism. Pascal claimed that only definitions of the first type were important to science and mathematics, arguing that those fields should adopt the philosophy of formalism as formulated by Descartes.
In De l'Art de persuader ("On the Art of Persuasion"), Pascal looked deeper into geometry's axiomatic method, specifically the question of how people come to be convinced of the axioms upon which later conclusions are based. Pascal agreed with Montaigne that achieving certainty in these axioms and conclusions through human methods is impossible. He asserted that these principles can be grasped only through intuition, and that this fact underscored the necessity for submission to God in searching out truths.
Man is only a reed, the weakest in nature, but he is a thinking reed.
Pascal's most influential theological work, referred to posthumously as the Pensées ("Thoughts"), is widely considered to be a masterpiece and a landmark in French prose. Commenting on one particular section (Thought #72), Sainte-Beuve praised it as containing the finest pages in the French language. Will Durant hailed the Pensées as "the most eloquent book in French prose".
The Pensées was not completed before his death. It was to have been a sustained and coherent examination and defense of the Christian faith, with the original title Apologie de la religion Chrétienne ("Defense of the Christian Religion"). The first version, assembled from the numerous scraps of paper found after his death, appeared in print as a book in 1669, titled Pensées de M. Pascal sur la religion, et sur quelques autres sujets ("Thoughts of M. Pascal on religion, and on some other subjects"), and soon thereafter became a classic.
One of the Apologie's main strategies was to use the contradictory philosophies of Pyrrhonism and Stoicism, personified by Montaigne on the one hand and Epictetus on the other, in order to bring the unbeliever to such despair and confusion that he would embrace God.
T. S. Eliot described him during this phase of his life as "a man of the world among ascetics, and an ascetic among men of the world." Pascal's ascetic lifestyle derived from a belief that it was natural and necessary for a person to suffer. In 1659, Pascal fell seriously ill. During his last years, he frequently tried to reject the ministrations of his doctors, saying, "Sickness is the natural state of Christians."
Louis XIV suppressed the Jansenist movement at Port-Royal in 1661. In response, Pascal wrote one of his final works, Écrit sur la signature du formulaire ("Writ on the Signing of the Form"), exhorting the Jansenists not to give in. Later that year, his sister Jacqueline died, which convinced Pascal to cease his polemics on Jansenism. Pascal's last major achievement, returning to his mechanical genius, was inaugurating perhaps the first bus line, the carrosses à cinq sols, which moved passengers within Paris in carriages with many seats. Pascal also laid down the operating principles later used in planning public transportation: the carriages had a fixed route and a fixed price, and they departed even if there were no passengers. The idea of public transportation is widely considered to have been well ahead of its time. The lines were not commercially successful, and the last one closed by 1675.
In 1662, Pascal's illness became more violent, and his emotional condition had severely worsened since his sister's death. Aware that his health was fading quickly, he sought a move to the hospital for incurable diseases, but his doctors declared that he was too unstable to be carried. In Paris on 18 August 1662, Pascal went into convulsions and received extreme unction. He died the next morning, his last words being "May God never abandon me," and was buried in the cemetery of Saint-Étienne-du-Mont.
An autopsy performed after his death revealed grave problems with his stomach and other organs of his abdomen, along with damage to his brain. Despite the autopsy, the cause of his poor health was never precisely determined, though speculation focuses on tuberculosis, stomach cancer, or a combination of the two. The headaches which affected Pascal are generally attributed to his brain lesion.
One of the Universities of Clermont-Ferrand, France – Université Blaise Pascal – is named after him. Établissement scolaire français Blaise-Pascal in Lubumbashi, Democratic Republic of the Congo is named after Pascal.
The 1969 Eric Rohmer film My Night at Maud's is based on the work of Pascal. Roberto Rossellini directed a filmed biopic, Blaise Pascal, which originally aired on Italian television in 1971. Pascal was a subject of the first edition of the 1984 BBC Two documentary, Sea of Faith, presented by Don Cupitt. The chameleon in the film Tangled is named for Pascal.
The programming language Pascal is named for him. In 2014, Nvidia announced its new Pascal microarchitecture, also named for Pascal. The first graphics cards featuring Pascal were released in 2016.
The 2017 game Nier: Automata has multiple characters named after famous philosophers; one of these is a sentient pacifistic machine named Pascal, who serves as a major supporting character. Pascal creates a village for machines to live peacefully with the androids they are at war with and acts as a parental figure for other machines trying to adapt to their newfound individuality.
The otter in the Animal Crossing series is named for Pascal.
Minor planet 4500 Pascal is named in his honor.
Pope Paul VI, in the encyclical Populorum progressio, issued in 1967, quotes Pascal's Pensées:
True humanism points the way toward God and acknowledges the task to which we are called, the task which offers us the real meaning of human life. Man is not the ultimate measure of man. Man becomes truly man only by passing beyond himself. In the words of Pascal: "Man infinitely surpasses man."
In 2023, Pope Francis released an apostolic letter, Sublimitas et miseria hominis, dedicated to Blaise Pascal, in commemoration of the fourth centenary of his birth.
https://en.wikipedia.org/wiki/Blaise_Pascal
Brittonic languages

The Brittonic languages (also Brythonic or British Celtic; Welsh: ieithoedd Brythonaidd/Prydeinig; Cornish: yethow brythonek/predennek; Breton: yezhoù predenek) form one of the two branches of the Insular Celtic language family; the other is Goidelic. The branch comprises the extant languages Breton, Cornish, and Welsh. The name Brythonic was derived by Welsh Celticist John Rhys from the Welsh word Brython, meaning Ancient Britons as opposed to an Anglo-Saxon or Gael.
The Brittonic languages derive from the Common Brittonic language, spoken throughout Great Britain during the Iron Age and Roman period. In the 5th and 6th centuries emigrating Britons also took Brittonic speech to the continent, most significantly to Brittany and Britonia. During the next few centuries, in much of Britain the language was replaced by Old English and Scottish Gaelic, with the remaining Common Brittonic language splitting into regional dialects, eventually evolving into Welsh, Cornish, Breton, Cumbric, and probably Pictish. Welsh and Breton continue to be spoken as native languages, while a revival in Cornish has led to an increase in speakers of that language. Cumbric and Pictish are extinct, having been replaced by Goidelic and Anglic speech. A Brittonic language may also originally have been spoken on the Isle of Man and in Orkney, but it was later supplanted by Goidelic on the Isle of Man and by Norse on Orkney. There is also a community of Brittonic language speakers in Y Wladfa (the Welsh settlement in Patagonia).
The names "Brittonic" and "Brythonic" are scholarly conventions referring to the Celtic languages of Britain and to the ancestral language they originated from, designated Common Brittonic, in contrast to the Goidelic languages originating in Ireland. Both were created in the 19th century to avoid the ambiguity of earlier terms such as "British" and "Cymric". "Brythonic" was coined in 1879 by the Celticist John Rhys from the Welsh word Brython. "Brittonic", derived from "Briton" and also earlier spelled "Britonic" and "Britonnic", emerged later in the 19th century. It became more prominent through the 20th century, and was used in Kenneth H. Jackson's highly influential 1953 work on the topic, Language and History in Early Britain. Jackson noted that by that time "Brythonic" had become a dated term, and that "of late there has been an increasing tendency to use Brittonic instead." Today, "Brittonic" often replaces "Brythonic" in the literature. Rudolf Thurneysen used "Britannic" in his influential A Grammar of Old Irish, although this never became popular among subsequent scholars.
Comparable historical terms include the Medieval Latin lingua Britannica and sermo Britannicus and the Welsh Brythoneg. Some writers use "British" for the language and its descendants, although, due to the risk of confusion, others avoid it or use it only in a restricted sense. Jackson, and later John T. Koch, use "British" only for the early phase of the Common Brittonic language.
Before Jackson's work, "Brittonic" and "Brythonic" were often used for all the P-Celtic languages, including not just the varieties in Britain but those Continental Celtic languages that similarly experienced the evolution of the Proto-Celtic language element /kʷ/ to /p/. However, subsequent writers have tended to follow Jackson's scheme, rendering this use obsolete.
The name "Britain" itself comes from Latin: Britannia~Brittania, via Old French Bretaigne and Middle English Breteyne, possibly influenced by Old English Bryten(lond), probably also from Latin Brittania, ultimately an adaptation of the native word for the island, *Pritanī.
An early written reference to the British Isles may derive from the works of the Greek explorer Pytheas of Massalia; later Greek writers such as Diodorus of Sicily and Strabo quote Pytheas' use of variants such as πρεττανική (Prettanikē), "The Britannic [land, island]", and νησοι βρεττανιαι (nēsoi brettaniai), "Britannic islands". *Pretani is a Celtic word that might mean "the painted ones" or "the tattooed folk", referring to body decoration (see below).
Knowledge of the Brittonic languages comes from a variety of sources. Information about the early language is obtained from coins, inscriptions, and comments by classical writers, as well as from place names and personal names recorded by them. For the later languages, there is information from medieval writers and modern native speakers, together with place names. The names recorded in the Roman period are given in Rivet and Smith.
The Brittonic branch is also referred to as P-Celtic because linguistic reconstruction of the Brittonic reflex of the Proto-Indo-European phoneme *kʷ is p as opposed to Goidelic k. Such nomenclature usually implies acceptance of the P-Celtic and Q-Celtic hypothesis rather than the Insular Celtic hypothesis because the term includes certain Continental Celtic languages as well. (For a discussion, see Celtic languages.)
Other major characteristics include:
Initial s-:
Lenition:
Voiceless spirants:
Nasal assimilation:
The family tree of the Brittonic languages is as follows:
Brittonic languages in use today are Welsh, Cornish and Breton. Welsh and Breton have been spoken continuously since they formed. For all practical purposes Cornish died out during the 18th or 19th century, but a revival movement has more recently created small numbers of new speakers. Also notable are the extinct language Cumbric, and possibly the extinct Pictish. One view, advanced in the 1950s and based on apparently unintelligible ogham inscriptions, was that the Picts may have also used a non-Indo-European language. This view, while attracting broad popular appeal, has virtually no following in contemporary linguistic scholarship.
The modern Brittonic languages are generally considered to all derive from a common ancestral language termed Brittonic, British, Common Brittonic, Old Brittonic or Proto-Brittonic, which is thought to have developed from Proto-Celtic or early Insular Celtic by the 6th century BC.
A major archaeogenetics study uncovered a migration into southern Britain in the middle to late Bronze Age, during the 500-year period 1,300–800 BC. The newcomers were genetically most similar to ancient individuals from Gaul. During 1,000–875 BC, their genetic markers swiftly spread through southern Britain, but not northern Britain. The authors describe this as a "plausible vector for the spread of early Celtic languages into Britain". There was much less inward migration during the Iron Age, so it is likely that Celtic reached Britain before then. Barry Cunliffe suggests that a Goidelic branch of Celtic may already have been spoken in Britain, but that this middle Bronze Age migration would have introduced the Brittonic branch.
Brittonic languages were probably spoken before the Roman invasion throughout most of Great Britain, though the Isle of Man later had a Goidelic language, Manx. During the period of the Roman occupation of what is now England and Wales (AD 43 to c. 410), Common Brittonic borrowed a large stock of Latin words, both for concepts unfamiliar in the pre-urban society of Celtic Britain such as urbanization and new tactics of warfare as well as for rather more mundane words which displaced native terms (most notably, the word for "fish" in all the Brittonic languages derives from the Latin piscis rather than the native *ēskos – which may survive, however, in the Welsh name of the River Usk, Wysg). Approximately 800 of these Latin loan-words have survived in the three modern Brittonic languages. Pictish may have resisted Latin influence to a greater extent than the other Brittonic languages.
It is probable that at the start of the Post-Roman period Common Brittonic was differentiated into at least two major dialect groups – Southwestern and Western (additional dialects may also be posited, such as an Eastern Brittonic spoken in what is now the East of England, but these have left little or no evidence). Between the end of the Roman occupation and the mid 6th century the two dialects began to diverge into recognizably separate varieties, the Western into Cumbric and Welsh and the Southwestern into Cornish and its closely related sister language Breton, which was carried to continental Armorica. Jackson showed that a few of the dialect distinctions between West and Southwest Brittonic go back a long way. New divergences began around AD 500, but other changes that were shared occurred in the 6th century. Other common changes occurred from the 7th century onward and are possibly due to inherent tendencies. Thus the concept of a Common Brittonic language ends by AD 600. Substantial numbers of Britons certainly remained in the expanding area controlled by Anglo-Saxons, but over the fifth and sixth centuries they mostly adopted the English language.
The Brittonic languages spoken in what is now Scotland, the Isle of Man and what is now England began to be displaced in the 5th century through the settlement of Irish-speaking Gaels and Germanic peoples. Henry of Huntingdon wrote that Pictish was "no longer spoken" in c.1129.
The displacement of the languages of Brittonic descent was probably complete by the 11th century in all of Britain except Cornwall, Wales and the English counties bordering these areas, such as Devon. Welsh continued to be spoken in western Herefordshire until the late nineteenth century, and it is still spoken in isolated pockets of Shropshire today.
The regular consonantal sound changes from Proto-Celtic to Welsh, Cornish, and Breton are summarised in the following table. Where the graphemes have a different value from the corresponding IPA symbols, the IPA equivalent is indicated between slashes. V represents a vowel; C represents a consonant.
The principal legacy left behind in those territories from which the Brittonic languages were displaced is that of toponyms (place names) and hydronyms (names of rivers and other bodies of water). There are many Brittonic place names in lowland Scotland and in the parts of England where it is agreed that substantial Brittonic speakers remained (Brittonic names, apart from those of the former Romano-British towns, are scarce over most of England). Names derived (sometimes indirectly) from Brittonic include London, Penicuik, Perth, Aberdeen, York, Dorchester, Dover and Colchester. Brittonic elements found in England include bre- and bal- for hills, while some such as combe or coomb(e) for a small deep valley and tor for a hill are examples of Brittonic words that were borrowed into English. Others reflect the presence of Britons such as Dumbarton – from the Scottish Gaelic Dùn Breatainn meaning "Fort of the Britons", or Walton meaning a tun or settlement where the Wealh "Britons" still lived.
The number of Celtic river names in England generally increases from east to west, a map showing these is given by Jackson. These names include ones such as Avon, Chew, Frome, Axe, Brue and Exe, but also river names containing the elements "der-/dar-/dur-" and "-went", e.g. "Derwent, Darwen, Deer, Adur, Dour, Darent, Went". These names exhibit multiple different Celtic roots. One is *dubri- "water" [Bret. "dour", C. "dowr", W. "dŵr"], also found in the place-name "Dover" (attested in the Roman period as "Dubrīs"); this is the source of rivers named "Dour". Another is *deru̯o- "oak" or "true" [Bret. "derv", C. "derow", W. "derw"], coupled with two agent suffixes, *-ent- and *-iū; this is the origin of "Derwent", "Darent" and "Darwen" (attested in the Roman period as "Deru̯entiō"). The final root to be examined is "went". In Roman Britain, there were three tribal capitals named "U̯entā" (modern Winchester, Caerwent and Caistor St Edmunds), whose meaning was 'place, town'.
Some, including J. R. R. Tolkien, have argued that Celtic has acted as a substrate to English for both the lexicon and syntax. It is generally accepted that Brittonic effects on English are lexically few, aside from toponyms, consisting of a small number of domestic and geographical words, which 'may' include bin, brock, carr, comb, crag and tor. Another legacy may be the sheep-counting system Yan Tan Tethera in the north, in the traditionally Celtic areas of England such as Cumbria. Several Cornish mining words are still in use in English language mining terminology, such as costean, gunnies, and vug.
Those who argue against the theory of a more significant Brittonic influence than is widely accepted point out that many toponyms have no semantic continuation from the Brittonic language. A notable example is Avon which comes from the Celtic term for river abona or the Welsh term for river, afon, but was used by the English as a personal name. Likewise the River Ouse, Yorkshire contains the word usa which merely means 'water' and the name of the river Trent simply comes from the Welsh word for a trespasser (an over-flowing river).
It has been argued that the use of periphrastic constructions (using auxiliary verbs such as do and be in the continuous/progressive) in the English verb, which is more widespread than in the other Germanic languages, is traceable to Brittonic influence. Others, however, find this unlikely since many of these forms are only attested in the later Middle English period; these scholars claim a native English development rather than Celtic influence. Ian G. Roberts postulates Northern Germanic influence, despite such constructions not existing in Norse. Literary Welsh has the simple present Caraf = I love and the present stative (al. continuous/progressive) Yr wyf yn caru = I am loving, where the Brittonic syntax is partly mirrored in English (Note that I am loving comes from older I am a-loving, from still older ich am on luvende "I am in the process of loving"). In the Germanic sister languages of English there is only one form, for example ich liebe in German, though in colloquial usage in some German dialects, a progressive aspect form has evolved which is formally similar to those found in Celtic languages, and somewhat less similar to the Modern English form, e.g. "I am working" is ich bin am Arbeiten, literally: "I am on the working". The same structure is also found in modern Dutch (ik ben aan het werk), alongside other structures (e.g. ik zit te werken, lit. "I sit to working"). These parallel developments suggest that the English progressive is not necessarily due to Celtic influence; moreover, the native English development of the structure can be traced over 1000 years and more of English literature.
Some researchers (Filppula et al., 2001) argue that other elements of English syntax reflect Brittonic influences. For instance, in English tag questions, the form of the tag depends on the verb form in the main statement (aren't I?, isn't he?, won't we? etc.). The German nicht wahr? and the French n'est-ce pas?, by contrast, are fixed forms which can be used with almost any main statement. It has been claimed that the English system has been borrowed from Brittonic, since Welsh tag questions vary in almost exactly the same way.
Far more notable, but less well known, are Brittonic influences on Scottish Gaelic, though Scottish and Irish Gaelic, with their wider range of preposition-based periphrastic constructions, suggest that such constructions descend from their common Celtic heritage. Scottish Gaelic contains several P-Celtic loanwords, but, as there is a far greater overlap in Celtic vocabulary than with English, it is not always possible to disentangle P- and Q-Celtic words. However, some common words such as monadh = Welsh mynydd, Cumbric *monidh are particularly evident.
The Brittonic influence on Scots Gaelic is often indicated by considering Irish language usage, which is not likely to have been influenced so much by Brittonic. In particular, the word srath (anglicised as "Strath") is a native Goidelic word, but its usage appears to have been modified by the Brittonic cognate ystrad whose meaning is slightly different. The effect on Irish has been the loan from British of many Latin-derived words. This has been associated with the Christianisation of Ireland from Britain. | [
{
"paragraph_id": 0,
"text": "The Brittonic languages (also Brythonic or British Celtic; Welsh: ieithoedd Brythonaidd/Prydeinig; Cornish: yethow brythonek/predennek; Breton: yezhoù predenek) form one of the two branches of the Insular Celtic language family; the other is Goidelic. It comprises the extant languages Breton, Cornish, and Welsh. The name Brythonic was derived by Welsh Celticist John Rhys from the Welsh word Brython, meaning Ancient Britons as opposed to an Anglo-Saxon or Gael.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Brittonic languages derive from the Common Brittonic language, spoken throughout Great Britain during the Iron Age and Roman period. In the 5th and 6th centuries emigrating Britons also took Brittonic speech to the continent, most significantly in Brittany and Britonia. During the next few centuries, in much of Britain the language was replaced by Old English and Scottish Gaelic, with the remaining Common Brittonic language splitting into regional dialects, eventually evolving into Welsh, Cornish, Breton, Cumbric, and probably Pictish. Welsh and Breton continue to be spoken as native languages, while a revival in Cornish has led to an increase in speakers of that language. Cumbric and Pictish are extinct, having been replaced by Goidelic and Anglic speech. The Isle of Man and Orkney may also have originally spoken a Brittonic language, but this was later supplanted by Goidelic on the Isle of Man and Norse on Orkney. There is also a community of Brittonic language speakers in Y Wladfa (the Welsh settlement in Patagonia).",
"title": ""
},
{
"paragraph_id": 2,
"text": "The names \"Brittonic\" and \"Brythonic\" are scholarly conventions referring to the Celtic languages of Britain and to the ancestral language they originated from, designated Common Brittonic, in contrast to the Goidelic languages originating in Ireland. Both were created in the 19th century to avoid the ambiguity of earlier terms such as \"British\" and \"Cymric\". \"Brythonic\" was coined in 1879 by the Celticist John Rhys from the Welsh word Brython. \"Brittonic\", derived from \"Briton\" and also earlier spelled \"Britonic\" and \"Britonnic\", emerged later in the 19th century. It became more prominent through the 20th century, and was used in Kenneth H. Jackson's highly influential 1953 work on the topic, Language and History in Early Britain. Jackson noted that by that time \"Brythonic\" had become a dated term, and that \"of late there has been an increasing tendency to use Brittonic instead.\" Today, \"Brittonic\" often replaces \"Brythonic\" in the literature. Rudolf Thurneysen used \"Britannic\" in his influential A Grammar of Old Irish, although this never became popular among subsequent scholars.",
"title": "Name"
},
{
"paragraph_id": 3,
"text": "Comparable historical terms include the Medieval Latin lingua Britannica and sermo Britannicus and the Welsh Brythoneg. Some writers use \"British\" for the language and its descendants, although, due to the risk of confusion, others avoid it or use it only in a restricted sense. Jackson, and later John T. Koch, use \"British\" only for the early phase of the Common Brittonic language.",
"title": "Name"
},
{
"paragraph_id": 4,
"text": "Before Jackson's work, \"Brittonic\" and \"Brythonic\" were often used for all the P-Celtic languages, including not just the varieties in Britain but those Continental Celtic languages that similarly experienced the evolution of the Proto-Celtic language element /kʷ/ to /p/. However, subsequent writers have tended to follow Jackson's scheme, rendering this use obsolete.",
"title": "Name"
},
{
"paragraph_id": 5,
"text": "The name \"Britain\" itself comes from Latin: Britannia~Brittania, via Old French Bretaigne and Middle English Breteyne, possibly influenced by Old English Bryten(lond), probably also from Latin Brittania, ultimately an adaptation of the native word for the island, *Pritanī.",
"title": "Name"
},
{
"paragraph_id": 6,
"text": "An early written reference to the British Isles may derive from the works of the Greek explorer Pytheas of Massalia; later Greek writers such as Diodorus of Sicily and Strabo who quote Pytheas' use of variants such as πρεττανική (Prettanikē), \"The Britannic [land, island]\", and νησοι βρεττανιαι (nēsoi brettaniai), \"Britannic islands\", with *Pretani being a Celtic word that might mean \"the painted ones\" or \"the tattooed folk\", referring to body decoration (see below).",
"title": "Name"
},
{
"paragraph_id": 7,
"text": "Knowledge of the Brittonic languages comes from a variety of sources. The early language's information is obtained from coins, inscriptions, and comments by classical writers as well as place names and personal names recorded by them. For later languages, there is information from medieval writers and modern native speakers, together with place names. The names recorded in the Roman period are given in Rivet and Smith.",
"title": "Evidence"
},
{
"paragraph_id": 8,
"text": "The Brittonic branch is also referred to as P-Celtic because linguistic reconstruction of the Brittonic reflex of the Proto-Indo-European phoneme *kʷ is p as opposed to Goidelic k. Such nomenclature usually implies acceptance of the P-Celtic and Q-Celtic hypothesis rather than the Insular Celtic hypothesis because the term includes certain Continental Celtic languages as well. (For a discussion, see Celtic languages.)",
"title": "Characteristics"
},
{
"paragraph_id": 9,
"text": "Other major characteristics include:",
"title": "Characteristics"
},
{
"paragraph_id": 10,
"text": "Initial s-:",
"title": "Characteristics"
},
{
"paragraph_id": 11,
"text": "Lenition:",
"title": "Characteristics"
},
{
"paragraph_id": 12,
"text": "Voiceless spirants:",
"title": "Characteristics"
},
{
"paragraph_id": 13,
"text": "Nasal assimilation:",
"title": "Characteristics"
},
{
"paragraph_id": 14,
"text": "The family tree of the Brittonic languages is as follows:",
"title": "Classification"
},
{
"paragraph_id": 15,
"text": "Brittonic languages in use today are Welsh, Cornish and Breton. Welsh and Breton have been spoken continuously since they formed. For all practical purposes Cornish died out during the 18th or 19th century, but a revival movement has more recently created small numbers of new speakers. Also notable are the extinct language Cumbric, and possibly the extinct Pictish. One view, advanced in the 1950s and based on apparently unintelligible ogham inscriptions, was that the Picts may have also used a non-Indo-European language. This view, while attracting broad popular appeal, has virtually no following in contemporary linguistic scholarship.",
"title": "Classification"
},
{
"paragraph_id": 16,
"text": "The modern Brittonic languages are generally considered to all derive from a common ancestral language termed Brittonic, British, Common Brittonic, Old Brittonic or Proto-Brittonic, which is thought to have developed from Proto-Celtic or early Insular Celtic by the 6th century BC.",
"title": "History and origins"
},
{
"paragraph_id": 17,
"text": "A major archaeogenetics study uncovered a migration into southern Britain in the middle to late Bronze Age, during the 500-year period 1,300–800 BC. The newcomers were genetically most similar to ancient individuals from Gaul. During 1,000–875 BC, their genetic markers swiftly spread through southern Britain, but not northern Britain. The authors describe this as a \"plausible vector for the spread of early Celtic languages into Britain\". There was much less inward migration during the Iron Age, so it is likely that Celtic reached Britain before then. Barry Cunliffe suggests that a Goidelic branch of Celtic may already have been spoken in Britain, but that this middle Bronze Age migration would have introduced the Brittonic branch.",
"title": "History and origins"
},
{
"paragraph_id": 18,
"text": "Brittonic languages were probably spoken before the Roman invasion throughout most of Great Britain, though the Isle of Man later had a Goidelic language, Manx. During the period of the Roman occupation of what is now England and Wales (AD 43 to c. 410), Common Brittonic borrowed a large stock of Latin words, both for concepts unfamiliar in the pre-urban society of Celtic Britain such as urbanization and new tactics of warfare as well as for rather more mundane words which displaced native terms (most notably, the word for \"fish\" in all the Brittonic languages derives from the Latin piscis rather than the native *ēskos – which may survive, however, in the Welsh name of the River Usk, Wysg). Approximately 800 of these Latin loan-words have survived in the three modern Brittonic languages. Pictish may have resisted Latin influence to a greater extent than the other Brittonic languages.",
"title": "History and origins"
},
{
"paragraph_id": 19,
"text": "It is probable that at the start of the Post-Roman period Common Brittonic was differentiated into at least two major dialect groups – Southwestern and Western (also we may posit additional dialects, such as Eastern Brittonic, spoken in what is now the East of England, which have left little or no evidence). Between the end of the Roman occupation and the mid 6th century the two dialects began to diverge into recognizably separate varieties, the Western into Cumbric and Welsh and the Southwestern into Cornish and its closely related sister language Breton, which was carried to continental Armorica. Jackson showed that a few of the dialect distinctions between West and Southwest Brittonic go back a long way. New divergencies began around AD 500 but other changes that were shared occurred in the 6th century. Other common changes occurred in the 7th century onward and are possibly due to inherent tendencies. Thus the concept of a Common Brittonic language ends by AD 600. Substantial numbers of Britons certainly remained in the expanding area controlled by Anglo-Saxons, but over the fifth and sixth centuries they mostly adopted the English language.",
"title": "History and origins"
},
{
"paragraph_id": 20,
"text": "The Brittonic languages spoken in what is now Scotland, the Isle of Man and what is now England began to be displaced in the 5th century through the settlement of Irish-speaking Gaels and Germanic peoples. Henry of Huntingdon wrote that Pictish was \"no longer spoken\" in c.1129.",
"title": "History and origins"
},
{
"paragraph_id": 21,
"text": "The displacement of the languages of Brittonic descent was probably complete in all of Britain except Cornwall and Wales and the English counties bordering these areas such as Devon by the 11th century. Western Herefordshire continued to speak Welsh until the late nineteenth century, and isolated pockets of Shropshire speak Welsh today.",
"title": "History and origins"
},
{
"paragraph_id": 22,
"text": "The regular consonantal sound changes from Proto-Celtic to Welsh, Cornish, and Breton are summarised in the following table. Where the graphemes have a different value from the corresponding IPA symbols, the IPA equivalent is indicated between slashes. V represents a vowel; C represents a consonant.",
"title": "History and origins"
},
{
"paragraph_id": 23,
"text": "The principal legacy left behind in those territories from which the Brittonic languages were displaced is that of toponyms (place names) and hydronyms (names of rivers and other bodies of water). There are many Brittonic place names in lowland Scotland and in the parts of England where it is agreed that substantial Brittonic speakers remained (Brittonic names, apart from those of the former Romano-British towns, are scarce over most of England). Names derived (sometimes indirectly) from Brittonic include London, Penicuik, Perth, Aberdeen, York, Dorchester, Dover and Colchester. Brittonic elements found in England include bre- and bal- for hills, while some such as combe or coomb(e) for a small deep valley and tor for a hill are examples of Brittonic words that were borrowed into English. Others reflect the presence of Britons such as Dumbarton – from the Scottish Gaelic Dùn Breatainn meaning \"Fort of the Britons\", or Walton meaning a tun or settlement where the Wealh \"Britons\" still lived.",
"title": "Remnants in England, Scotland and Ireland"
},
{
"paragraph_id": 24,
"text": "The number of Celtic river names in England generally increases from east to west, a map showing these being given by Jackson. These names include ones such as Avon, Chew, Frome, Axe, Brue and Exe, but also river names containing the elements \"der-/dar-/dur-\" and \"-went\" e.g. \"Derwent, Darwen, Deer, Adur, Dour, Darent, Went\". These names exhibit multiple different Celtic roots. One is *dubri- \"water\" [Bret. \"dour\", C. \"dowr\", W. \"dŵr\"], also found in the place-name \"Dover\" (attested in the Roman period as \"Dubrīs\"); this is the source of rivers named \"Dour\". Another is *deru̯o- \"oak\" or \"true\" [Bret. \"derv\", C. \"derow\", W. \"derw\"], coupled with 2 agent suffixes, *-ent- and *-iū; this is the origin of \"Derwent\", \" Darent\" and \"Darwen\" (attested in the Roman period as \"Deru̯entiō\"). The final root to be examined is \"went\". In Roman Britain, there were three tribal capitals named \"U̯entā\" (modern Winchester, Caerwent and Caistor St Edmunds), whose meaning was 'place, town'.",
"title": "Remnants in England, Scotland and Ireland"
},
{
"paragraph_id": 25,
"text": "Some, including J. R. R. Tolkien, have argued that Celtic has acted as a substrate to English for both the lexicon and syntax. It is generally accepted that Brittonic effects on English are lexically few, aside from toponyms, consisting of a small number of domestic and geographical words, which 'may' include bin, brock, carr, comb, crag and tor. Another legacy may be the sheep-counting system Yan Tan Tethera in the north, in the traditionally Celtic areas of England such as Cumbria. Several Cornish mining words are still in use in English language mining terminology, such as costean, gunnies, and vug.",
"title": "Remnants in England, Scotland and Ireland"
},
{
"paragraph_id": 26,
"text": "Those who argue against the theory of a more significant Brittonic influence than is widely accepted point out that many toponyms have no semantic continuation from the Brittonic language. A notable example is Avon which comes from the Celtic term for river abona or the Welsh term for river, afon, but was used by the English as a personal name. Likewise the River Ouse, Yorkshire contains the word usa which merely means 'water' and the name of the river Trent simply comes from the Welsh word for a trespasser (an over-flowing river).",
"title": "Remnants in England, Scotland and Ireland"
},
{
"paragraph_id": 27,
"text": "It has been argued that the use of periphrastic constructions (using auxiliary verbs such as do and be in the continuous/progressive) in the English verb, which is more widespread than in the other Germanic languages, is traceable to Brittonic influence. Others, however, find this unlikely since many of these forms are only attested in the later Middle English period; these scholars claim a native English development rather than Celtic influence. Ian G. Roberts postulates Northern Germanic influence, despite such constructions not existing in Norse. Literary Welsh has the simple present Caraf = I love and the present stative (al. continuous/progressive) Yr wyf yn caru = I am loving, where the Brittonic syntax is partly mirrored in English (Note that I am loving comes from older I am a-loving, from still older ich am on luvende \"I am in the process of loving\"). In the Germanic sister languages of English there is only one form, for example ich liebe in German, though in colloquial usage in some German dialects, a progressive aspect form has evolved which is formally similar to those found in Celtic languages, and somewhat less similar to the Modern English form, e.g. \"I am working\" is ich bin am Arbeiten, literally: \"I am on the working\". The same structure is also found in modern Dutch (ik ben aan het werk), alongside other structures (e.g. ik zit te werken, lit. \"I sit to working\"). These parallel developments suggest that the English progressive is not necessarily due to Celtic influence; moreover, the native English development of the structure can be traced over 1000 years and more of English literature.",
"title": "Remnants in England, Scotland and Ireland"
},
{
"paragraph_id": 28,
"text": "Some researchers (Filppula et al., 2001) argue that other elements of English syntax reflect Brittonic influences. For instance, in English tag questions, the form of the tag depends on the verb form in the main statement (aren't I?, isn't he?, won't we? etc.). The German nicht wahr? and the French n'est-ce pas?, by contrast, are fixed forms which can be used with almost any main statement. It has been claimed that the English system has been borrowed from Brittonic, since Welsh tag questions vary in almost exactly the same way.",
"title": "Remnants in England, Scotland and Ireland"
},
{
"paragraph_id": 29,
"text": "Far more notable, but less well known, are Brittonic influences on Scottish Gaelic, though Scottish and Irish Gaelic, with their wider range of preposition-based periphrastic constructions, suggest that such constructions descend from their common Celtic heritage. Scottish Gaelic contains several P-Celtic loanwords, but, as there is a far greater overlap in terms of Celtic vocabulary, than with English, it is not always possible to disentangle P- and Q-Celtic words. However, some common words such as monadh = Welsh mynydd, Cumbric *monidh are particularly evident.",
"title": "Remnants in England, Scotland and Ireland"
},
{
"paragraph_id": 30,
"text": "The Brittonic influence on Scots Gaelic is often indicated by considering Irish language usage, which is not likely to have been influenced so much by Brittonic. In particular, the word srath (anglicised as \"Strath\") is a native Goidelic word, but its usage appears to have been modified by the Brittonic cognate ystrad whose meaning is slightly different. The effect on Irish has been the loan from British of many Latin-derived words. This has been associated with the Christianisation of Ireland from Britain.",
"title": "Remnants in England, Scotland and Ireland"
}
] | The Brittonic languages form one of the two branches of the Insular Celtic language family; the other is Goidelic. It comprises the extant languages Breton, Cornish, and Welsh. The name Brythonic was derived by Welsh Celticist John Rhys from the Welsh word Brython, meaning Ancient Britons as opposed to an Anglo-Saxon or Gael. The Brittonic languages derive from the Common Brittonic language, spoken throughout Great Britain during the Iron Age and Roman period. In the 5th and 6th centuries emigrating Britons also took Brittonic speech to the continent, most significantly in Brittany and Britonia. During the next few centuries, in much of Britain the language was replaced by Old English and Scottish Gaelic, with the remaining Common Brittonic language splitting into regional dialects, eventually evolving into Welsh, Cornish, Breton, Cumbric, and probably Pictish. Welsh and Breton continue to be spoken as native languages, while a revival in Cornish has led to an increase in speakers of that language. Cumbric and Pictish are extinct, having been replaced by Goidelic and Anglic speech. The Isle of Man and Orkney may also have originally spoken a Brittonic language, but this was later supplanted by Goidelic on the Isle of Man and Norse on Orkney. There is also a community of Brittonic language speakers in Y Wladfa. | 2001-08-19T15:15:30Z | 2023-12-27T22:19:36Z | [
"Template:Authority control",
"Template:For",
"Template:Lang-kw",
"Template:Cite news",
"Template:Webarchive",
"Template:Lang-la",
"Template:Cite web",
"Template:IPA",
"Template:Legend",
"Template:Lang",
"Template:Cite journal",
"Template:Reflist",
"Template:Short description",
"Template:Use dmy dates",
"Template:Infobox language family",
"Template:Lang-br",
"Template:ISBN",
"Template:Celts",
"Template:Lang-cy",
"Template:PIE",
"Template:Circa",
"Template:By whom",
"Template:Main",
"Template:Cite book",
"Template:Wikiversity",
"Template:Celtic languages",
"Template:Further",
"Template:Cite encyclopedia",
"Template:Portal"
] | https://en.wikipedia.org/wiki/Brittonic_languages |
4,071 | Bronski Beat | Bronski Beat were a British synth-pop band formed in 1983 in London, England. The initial lineup, which recorded the majority of their hits, consisted of Jimmy Somerville (vocals), Steve Bronski (keyboards, percussion) and Larry Steinbachek (keyboards, percussion). Simon Davolls contributed backing vocals to many songs.
Bronski Beat achieved success in the mid-1980s, particularly with the 1984 single "Smalltown Boy", from their debut album, The Age of Consent. "Smalltown Boy" was their only US Billboard Hot 100 single. All members of the band were openly gay and their songs reflected this, often containing political commentary on gay issues.
Somerville left Bronski Beat in 1985, and went on to have success as lead singer of the Communards and as a solo artist. He was replaced by vocalist John Foster, with whom the band continued to have hits in the UK and Europe through 1986. Foster left Bronski Beat after their second album, and the band were joined by Jonathan Hellyer before dissolving in 1995.
Steve Bronski revived the band in 2016, recording new material with 1990s member Ian Donaldson. Steinbachek died later that year; Bronski died in 2021.
Bronski Beat formed in 1983 when Jimmy Somerville, Steve Bronski (both from Glasgow) and Larry Steinbachek (from Southend, Essex) shared a three-bedroom flat at Lancaster House in Brixton, London. Steinbachek had heard Somerville singing during the making of Framed Youth: The Revenge of the Teenage Perverts and suggested they make some music. They first performed publicly at an arts festival, September in the Pink. The trio were unhappy with the inoffensive nature of contemporary gay performers and sought to be more outspoken and political.
Bronski Beat signed a recording contract with London Records in 1984 after doing only nine live gigs. The band's debut single, "Smalltown Boy", about a gay teenager leaving his family and fleeing his home town, was a hit, peaking at No 3 in the UK Singles Chart, and topping charts in Belgium and the Netherlands. The single was accompanied by a promotional video directed by Bernard Rose, showing Somerville trying to befriend an attractive diver at a swimming pool, then being attacked by the diver's homophobic associates, being returned to his family by the police and having to leave home. (The police officer was played by Colin Bell, then the marketing manager of London Records.) "Smalltown Boy" reached 48 in the U.S. chart and peaked at 8 in Australia.
The follow-up single, "Why?", adopted a hi-NRG sound and was more lyrically focused on anti-gay prejudice. It also achieved Top 10 status in the UK, reaching 6, and was another Top 10 hit for the band in Australia, Switzerland, Germany, France and the Netherlands.
At the end of 1984, the trio released an album titled The Age of Consent. The inner sleeve listed the varying ages of consent for consensual gay sex in different nations around the world. At the time, the age of consent for sexual acts between men in the UK was 21 compared with 16 for heterosexual acts, with several other countries having more liberal laws on gay sex. The album peaked at 4 in the UK Albums Chart, 36 in the U.S., and 12 in Australia.
Around the same time, the band headlined "Pits and Perverts", a concert at the Electric Ballroom in London to raise funds for the Lesbians and Gays Support the Miners campaign. This event is featured in the film Pride.
The third single, released before Christmas 1984, was a revival of "It Ain't Necessarily So", the George and Ira Gershwin classic (from Porgy and Bess). The song questions the accuracy of biblical tales. It also reached the UK Top 20.
In 1985, the trio joined up with Marc Almond to record a version of Donna Summer's "I Feel Love". The full version was actually a medley that also incorporated snippets of Summer's "Love to Love You Baby" and John Leyton's "Johnny Remember Me". It was a big success, reaching 3 in the UK and equalling the chart achievement of "Smalltown Boy". Although the original had been one of Marc Almond's all-time favourite songs, he had never read the lyrics and thus incorrectly sang "What’ll it be, what’ll it be, you and me" instead of "Falling free, falling free, falling free" on the finished record.
The band and their producer Mike Thorne had gone back into the studio in early 1985 to record a new single, "Run from Love", and PolyGram (London Records' parent company at that time) had pressed a number of promo singles and 12" versions of the song and sent them to radio and record stores in the UK. However, the single was shelved as tensions in the band, both personal and political, resulted in Somerville leaving Bronski Beat in the summer of that year.
"Run from Love" was subsequently released in remix form on the Bronski Beat album Hundreds & Thousands, a collection of mostly remixes (LP) and B-sides (as bonus tracks on the CD version) as well as the hit "I Feel Love". Somerville went on to form the Communards with Richard Coles while the remaining members of Bronski Beat searched for a new vocalist.
Bronski Beat recruited John Foster as Somerville's replacement (Foster is credited as "Jon Jon"). A single, "Hit That Perfect Beat", was released in November 1985, reaching 3 in the UK. It repeated this success on the Australian chart and was also featured in the film Letter to Brezhnev. A second single, "C'mon C'mon", also charted in the UK Top 20 and an album, Truthdare Doubledare, released in May 1986, peaked at 18. The film Parting Glances (1986) included Bronski Beat songs "Love and Money", "Smalltown Boy" and "Why?". During this period, the band teamed up with producer Mark Cunningham on the first-ever BBC Children In Need single, a cover of David Bowie's "Heroes", released in 1986 under the name of The County Line.
Foster left the band in 1987. Following Foster's departure, Bronski Beat began work on their next album, Out and About. The tracks were recorded at Berry Street studios in London with engineer Brian Pugsley. Some of the song titles were "The Final Spin" and "Peace and Love". The latter track featured Strawberry Switchblade vocalist Rose McDowall and appeared on several internet sites in 2006. One of the other songs from the project, "European Boy", was recorded in 1987 by disco group Splash. The lead singer of Splash was former Tight Fit singer Steve Grant. Steinbachek and Bronski toured extensively with the new material to positive reviews; however, the project was abandoned when the group was dropped by London Records. Also in 1987, Bronski Beat and Somerville performed at a reunion concert for "International AIDS Day", supported by New Order, at the Brixton Academy, London.
In 1989, Jonathan Hellyer became lead singer, and the band extensively toured the U.S. and Europe with back-up vocalist Annie Conway. They achieved one minor hit with the song "Cha Cha Heels", a one-off collaboration sung by American actress and singer Eartha Kitt, which peaked at 32 in the UK. The song was originally written for movie and recording star Divine, who was unable to record the song before his death in 1988. 1990–91 saw Bronski Beat release three further singles on the Zomba record label, "I'm Gonna Run Away", "One More Chance" and "What More Can I Say". The singles were produced by Mike Thorne.
Foster and Bronski Beat teamed up again in 1994, and released a techno "Tell Me Why '94" and an acoustic "Smalltown Boy '94" on the German record label, ZYX Music. The album Rainbow Nation was released the following year with Hellyer returning as lead vocalist, as Foster had dropped out of the project and Ian Donaldson was brought on board to do keyboards and programming. After a few years of touring, Bronski Beat then dissolved, with Steve Bronski going on to become a producer for other artists and Ian Donaldson becoming a successful DJ (Sordid Soundz). Larry Steinbachek became the musical director for Michael Laub's theatre company, 'Remote Control Productions'.
In 2007, Steve Bronski remixed the song "Stranger to None" by the UK alternative rock band, All Living Fear. Four different mixes were made, with one appearing on their retrospective album, Fifteen Years After. Bronski also remixed the track "Flowers in the Morning" by Northern Irish electronic band Electrobronze in 2007, changing the style of the song from classical to Hi-NRG disco.
In 2015, Steve Bronski teamed up with Jessica James (aka Barbara Bush) for a one-off project covering the track he had made in 1989, saying that she reminded him of Divine because of her look and Eartha Kitt-like sound.
In 2016, Steve Bronski again teamed up with Ian Donaldson, with the aim of bringing Bronski Beat back, enlisting a new singer, Stephen Granville. In 2017, the new Bronski Beat released a reworked version of "Age of Consent" entitled "Age of Reason". Out & About, the unreleased Bronski Beat album from 1987, was released digitally via Steve Bronski's website. The album features the original tracks plus remixes by Bronski.
On 12 January 2017, it was revealed that Steinbachek had died the previous month after a short battle with cancer, with his family and friends at his bedside. He was 56. Bronski died on 7 December 2021, at the age of 61, in a Central London flat fire. | [
{
"paragraph_id": 0,
"text": "Bronski Beat were a British synth-pop band formed in 1983 in London, England. The initial lineup, which recorded the majority of their hits, consisted of Jimmy Somerville (vocals), Steve Bronski (keyboards, percussion) and Larry Steinbachek (keyboards, percussion). Simon Davolls contributed backing vocals to many songs.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Bronski Beat achieved success in the mid-1980s, particularly with the 1984 single \"Smalltown Boy\", from their debut album, The Age of Consent. \"Smalltown Boy\" was their only US Billboard Hot 100 single. All members of the band were openly gay and their songs reflected this, often containing political commentary on gay issues.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Somerville left Bronski Beat in 1985, and went on to have success as lead singer of the Communards and as a solo artist. He was replaced by vocalist John Foster, with whom the band continued to have hits in the UK and Europe through 1986. Foster left Bronski Beat after their second album, and the band were joined by Jonathan Hellyer before dissolving in 1995.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Steve Bronski revived the band in 2016, recording new material with 1990s member Ian Donaldson. Steinbachek died later that year; Bronski died in 2021.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Bronski Beat formed in 1983 when Jimmy Somerville, Steve Bronski (both from Glasgow) and Larry Steinbachek (from Southend, Essex) shared a three-bedroom flat at Lancaster House in Brixton, London. Steinbachek had heard Somerville singing during the making of Framed Youth: The Revenge of the Teenage Perverts and suggested they make some music. They first performed publicly at an arts festival, September in the Pink. The trio were unhappy with the inoffensive nature of contemporary gay performers and sought to be more outspoken and political.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Bronski Beat signed a recording contract with London Records in 1984 after doing only nine live gigs. The band's debut single, \"Smalltown Boy\", about a gay teenager leaving his family and fleeing his home town, was a hit, peaking at No 3 in the UK Singles Chart, and topping charts in Belgium and the Netherlands. The single was accompanied by a promotional video directed by Bernard Rose, showing Somerville trying to befriend an attractive diver at a swimming pool, then being attacked by the diver's homophobic associates, being returned to his family by the police and having to leave home. (The police officer was played by Colin Bell, then the marketing manager of London Records.) \"Smalltown Boy\" reached 48 in the U.S. chart and peaked at 8 in Australia.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The follow-up single, \"Why?\", adopted a hi-NRG sound and was more lyrically focused on anti-gay prejudice. It also achieved Top 10 status in the UK, reaching 6, and was another Top 10 hit for the band in Australia, Switzerland, Germany, France and the Netherlands.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "At the end of 1984, the trio released an album titled The Age of Consent. The inner sleeve listed the varying ages of consent for consensual gay sex in different nations around the world. At the time, the age of consent for sexual acts between men in the UK was 21 compared with 16 for heterosexual acts, with several other countries having more liberal laws on gay sex. The album peaked at 4 in the UK Albums Chart, 36 in the U.S., and 12 in Australia.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Around the same time, the band headlined \"Pits and Perverts\", a concert at the Electric Ballroom in London to raise funds for the Lesbians and Gays Support the Miners campaign. This event is featured in the film Pride.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The third single, released before Christmas 1984, was a revival of \"It Ain't Necessarily So\", the George and Ira Gershwin classic (from Porgy and Bess). The song questions the accuracy of biblical tales. It also reached the UK Top 20.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In 1985, the trio joined up with Marc Almond to record a version of Donna Summer's \"I Feel Love\". The full version was actually a medley that also incorporated snippets of Summer's \"Love to Love You Baby\" and John Leyton's \"Johnny Remember Me\". It was a big success, reaching 3 in the UK and equalling the chart achievement of \"Smalltown Boy\". Although the original had been one of Marc Almond's all-time favourite songs, he had never read the lyrics and thus incorrectly sang \"What’ll it be, what’ll it be, you and me\" instead of \"Falling free, falling free, falling free\" on the finished record.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The band and their producer Mike Thorne had gone back into the studio in early 1985 to record a new single, \"Run from Love\", and PolyGram (London Records' parent company at that time) had pressed a number of promo singles and 12\" versions of the song and sent them to radio and record stores in the UK. However, the single was shelved as tensions in the band, both personal and political, resulted in Somerville leaving Bronski Beat in the summer of that year.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "\"Run from Love\" was subsequently released in remix form on the Bronski Beat album Hundreds & Thousands, a collection of mostly remixes (LP) and B-sides (as bonus tracks on the CD version) as well as the hit \"I Feel Love\". Somerville went on to form the Communards with Richard Coles while the remaining members of Bronski Beat searched for a new vocalist.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Bronski Beat recruited John Foster as Somerville's replacement (Foster is credited as \"Jon Jon\"). A single, \"Hit That Perfect Beat\", was released in November 1985, reaching 3 in the UK. It repeated this success on the Australian chart and was also featured in the film Letter to Brezhnev. A second single, \"C'mon C'mon\", also charted in the UK Top 20 and an album, Truthdare Doubledare, released in May 1986, peaked at 18. The film Parting Glances (1986) included Bronski Beat songs \"Love and Money\", \"Smalltown Boy\" and \"Why?\". During this period, the band teamed up with producer Mark Cunningham on the first-ever BBC Children In Need single, a cover of David Bowie's \"Heroes\", released in 1986 under the name of The County Line.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Foster left the band in 1987. Following Foster's departure, Bronski Beat began work on their next album, Out and About. The tracks were recorded at Berry Street studios in London with engineer Brian Pugsley. Some of the song titles were \"The Final Spin\" and \"Peace and Love\". The latter track featured Strawberry Switchblade vocalist Rose McDowall and appeared on several internet sites in 2006. One of the other songs from the project called \"European Boy\" was recorded in 1987 by disco group Splash. The lead singer of Splash was former Tight Fit singer Steve Grant. Steinbachek and Bronski toured extensively with the new material with positive reviews, however the project was abandoned as the group was dropped by London Records. Also in 1987, Bronski Beat and Somerville performed at a reunion concert for \"International AIDS Day\", supported by New Order, at the Brixton Academy, London.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In 1989, Jonathan Hellyer became lead singer, and the band extensively toured the U.S. and Europe with back-up vocalist Annie Conway. They achieved one minor hit with the song \"Cha Cha Heels\", a one-off collaboration sung by American actress and singer Eartha Kitt, which peaked at 32 in the UK. The song was originally written for movie and recording star Divine, who was unable to record the song before his death in 1988. 1990–91 saw Bronski Beat release three further singles on the Zomba record label, \"I'm Gonna Run Away\", \"One More Chance\" and \"What More Can I Say\". The singles were produced by Mike Thorne.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Foster and Bronski Beat teamed up again in 1994, and released a techno \"Tell Me Why '94\" and an acoustic \"Smalltown Boy '94\" on the German record label, ZYX Music. The album Rainbow Nation was released the following year with Hellyer returning as lead vocalist, as Foster had dropped out of the project and Ian Donaldson was brought on board to do keyboards and programming. After a few years of touring, Bronski Beat then dissolved, with Steve Bronski going on to become a producer for other artists and Ian Donaldson becoming a successful DJ (Sordid Soundz). Larry Steinbachek became the musical director for Michael Laub's theatre company, 'Remote Control Productions'.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In 2007, Steve Bronski remixed the song \"Stranger to None\" by the UK alternative rock band, All Living Fear. Four different mixes were made, with one appearing on their retrospective album, Fifteen Years After. Bronski also remixed the track \"Flowers in the Morning\" by Northern Irish electronic band Electrobronze in 2007, changing the style of the song from classical to Hi-NRG disco.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 2015, Steve Bronski teamed up as a one-off with Jessica James (aka Barbara Bush) and said that she reminded him of Divine, because of her look and Eartha Kitt-like sound. The one-off project was to cover the track he made in 1989.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In 2016, Steve Bronski again teamed up with Ian Donaldson, with the aim of bringing Bronski Beat back, enlisting a new singer, Stephen Granville. In 2017, the new Bronski Beat released a reworked version of \"Age of Consent\" entitled \"Age of Reason\". Out & About, the unreleased Bronski Beat album from 1987, was released digitally via Steve Bronski's website. The album features the original tracks plus remixes by Bronski.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "On 12 January 2017, it was revealed that Steinbachek had died the previous month after a short battle with cancer, with his family and friends at his bedside. He was 56. Bronski died on 7 December 2021, at the age of 61, in a Central London flat fire.",
"title": "History"
}
] | Bronski Beat were a British synth-pop band formed in 1983 in London, England. The initial lineup, which recorded the majority of their hits, consisted of Jimmy Somerville (vocals), Steve Bronski and Larry Steinbachek. Simon Davolls contributed backing vocals to many songs. Bronski Beat achieved success in the mid-1980s, particularly with the 1984 single "Smalltown Boy", from their debut album, The Age of Consent. "Smalltown Boy" was their only US Billboard Hot 100 single. All members of the band were openly gay and their songs reflected this, often containing political commentary on gay issues. Somerville left Bronski Beat in 1985, and went on to have success as lead singer of the Communards and as a solo artist. He was replaced by vocalist John Foster, with whom the band continued to have hits in the UK and Europe through 1986. Foster left Bronski Beat after their second album, and the band were joined by Jonathan Hellyer before dissolving in 1995. Steve Bronski revived the band in 2016, recording new material with 1990s member Ian Donaldson. Steinbachek died later that year; Bronski died in 2021. | 2001-08-19T16:53:23Z | 2023-11-06T01:31:26Z | [
"Template:Use British English",
"Template:Use dmy dates",
"Template:Nom",
"Template:Cite news",
"Template:Discogs artist",
"Template:Authority control",
"Template:Infobox musical artist",
"Template:See also",
"Template:Citation needed",
"Template:Won",
"Template:Main",
"Template:Lang",
"Template:Bronski Beat",
"Template:End",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite book",
"Template:Jimmy Somerville",
"Template:Short description",
"Template:More citations needed",
"Template:Cite web",
"Template:IMDb name"
] | https://en.wikipedia.org/wiki/Bronski_Beat |
4,074 | Barrel (disambiguation) | A barrel is a cylindrical container, traditionally made of wood.
Barrel may also refer to: | [
{
"paragraph_id": 0,
"text": "A barrel is a cylindrical container, traditionally made with wooden material.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Barrel may also refer to:",
"title": ""
}
] | A barrel is a cylindrical container, traditionally made with wooden material. Barrel may also refer to: BARREL, a NASA mission
Barrel (album), a 1970 album by Lee Michaels
Barrel (horology), a watch component
Barrel (unit), several units of volume
Barrel (wine), for fermenting or ageing wine
Barrel (fastener), a simple hinge consisting of a barrel and a pivot
Gun barrel
the venturi of a carburetor
a component of a clarinet
a component of a snorkel
a tank in Harry Turtledove's books; see Victoria: An Empire Under the Sun
the outside of a low voltage DC connector
"The Barrel", a song by Aldous Harding from her 2019 album Designer | 2001-08-20T04:46:32Z | 2023-12-31T12:06:28Z | [
"Template:In title",
"Template:Disambiguation",
"Template:Wiktionary"
] | https://en.wikipedia.org/wiki/Barrel_(disambiguation) |
4,077 | Binary prefix | A binary prefix is a unit prefix that indicates a multiple of a unit of measurement by an integer power of two. The most commonly used binary prefixes are kibi (symbol Ki, meaning 2^10 = 1024), mebi (Mi, 2^20 = 1048576), and gibi (Gi, 2^30 = 1073741824). They are most often used in information technology as multipliers of bit and byte, when expressing the capacity of storage devices or the size of computer files.
The binary prefixes "kibi", "mebi", etc. were defined in 1999 by the International Electrotechnical Commission (IEC), in the IEC 60027-2 standard (Amendment 2). They were meant to replace the metric (SI) decimal power prefixes, such as "kilo" ("k", 10 = 1000), "mega" ("M", 10 = 1000000) and "giga" ("G", 10 = 1000000000), that were commonly used in the computer industry to indicate the nearest powers of two. For example, a memory module whose capacity was specified by the manufacturer as "2 megabytes" or "2 MB" would hold 2 × 2 = 2097152 bytes, instead of 2 × 10 = 2000000.
On the other hand, a hard disk whose capacity is specified by the manufacturer as "10 gigabytes" or "10 GB", holds 10 × 10^9 = 10000000000 bytes, or a little more than that, but less than 10 × 2^30 = 10737418240 and a file whose size is listed as "2.3 GB" may have a size closer to 2.3 × 2^30 ≈ 2470000000 or to 2.3 × 10^9 = 2300000000, depending on the program or operating system providing that measurement. This kind of ambiguity is often confusing to computer system users and has resulted in lawsuits. The IEC 60027-2 binary prefixes have been incorporated in the ISO/IEC 80000 standard and are supported by other standards bodies, including the BIPM, which defines the SI system, the US NIST, and the European Union.
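The scale of that ambiguity is easy to verify directly. The following minimal sketch (illustrative only; the render helper and the unit tables are invented for this example, not taken from any standard library) formats one byte count under both conventions:

```python
# Illustrative sketch: the same byte count expressed with SI (decimal)
# and IEC (binary) prefixes.

SI  = [("GB", 10**9), ("MB", 10**6), ("kB", 10**3)]
IEC = [("GiB", 2**30), ("MiB", 2**20), ("KiB", 2**10)]

def render(nbytes, units):
    # Pick the largest unit that fits and format to two decimals.
    for symbol, factor in units:
        if nbytes >= factor:
            return f"{nbytes / factor:.2f} {symbol}"
    return f"{nbytes} B"

size = 2_470_000_000          # a file advertised as "2.3 GB"
print(render(size, SI))       # -> 2.47 GB   (decimal reading)
print(render(size, IEC))      # -> 2.30 GiB  (binary reading)
```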
Prior to the 1999 IEC standard, some industry organizations, such as the Joint Electron Device Engineering Council (JEDEC), attempted to redefine the terms kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB in the binary sense, for use in storage capacity measurements. However, other computer industry sectors (such as magnetic storage) continued using those same terms and symbols with the decimal meaning. Since then, the major standards organizations have expressly disapproved the use of SI prefixes to denote binary multiples, and recommended or mandated the use of the IEC prefixes for that purpose, but the use of SI prefixes has persisted in some fields.
While the binary prefixes are almost always used with the units of information, bits and bytes, they may be used with any other unit of measure, when convenient. For example, in signal processing one may need binary multiples of the frequency unit hertz (Hz), for example the kibihertz (KiHz), equal to 1024 Hz.
In 2022, the International Bureau of Weights and Measures (BIPM) adopted the decimal prefixes ronna for 1000^9 and quetta for 1000^10. In analogy to the existing binary prefixes, a consultation paper of the International Committee for Weights and Measures' Consultative Committee for Units (CCU) suggested the prefixes robi (Ri, 1024^9) and quebi (Qi, 1024^10) for their binary counterparts, but as of 2022, no corresponding binary prefixes have been adopted.
The relative difference between the values in the binary and decimal interpretations increases, when using the SI prefixes as the base, from 2.4% for kilo to nearly 27% for the quetta prefix. Although the prefixes ronna and quetta have been defined, as of 2022 no names have been officially assigned to the corresponding binary prefixes.
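The compounding behind those percentages can be checked in a few lines; this is a quick illustrative calculation (the prefix list is written out here only for the example):

```python
# Each prefix step multiplies the binary/decimal ratio by 1024/1000,
# so the relative difference compounds from 2.4% (kilo) to ~26.8% (quetta).

prefixes = ["kilo", "mega", "giga", "tera", "peta",
            "exa", "zetta", "yotta", "ronna", "quetta"]

for n, name in enumerate(prefixes, start=1):
    diff = 1024**n / 1000**n - 1
    print(f"{name:>6}: {diff:6.1%}")
```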
The original metric system adopted by France in 1795 included two binary prefixes named double- (2×) and demi- (1/2×). However, these were not retained when the SI prefixes were internationally adopted by the 11th CGPM conference in 1960.
Early computers used one of two addressing methods to access the system memory: binary (base 2) or decimal (base 10). For example, the IBM 701 (1952) used binary addressing and could address 2048 words of 36 bits each, while the IBM 702 (1953) used a decimal system, and could address ten thousand 7-bit words.
By the mid-1960s, binary addressing had become the standard architecture in most computer designs, and main memory sizes were most commonly powers of two. This is the most natural configuration for memory, as all combinations of states of their address lines map to a valid address, allowing easy aggregation into a larger block of memory with contiguous addresses.
While early documentation specified those memory sizes as exact numbers such as 4096, 8192, or 16384 units (usually words, bytes, or bits), computer professionals also started using the long-established metric system prefixes "kilo", "mega", "giga", etc., defined to be powers of 10, to mean instead the nearest powers of two; namely, 2^10 = 1024, 2^20 = 1024^2, 2^30 = 1024^3, etc. The corresponding metric prefix symbols ("k", "M", "G", etc.) were used with the same binary meanings. The symbol for 2^10 = 1024 could be written either in lower case ("k") or in uppercase ("K"). The latter was often used intentionally to indicate the binary rather than decimal meaning. This convention, which could not be extended to higher powers, was widely used in the documentation of the IBM 360 (1964) and of the IBM System/370 (1972), of the CDC 7600, of the DEC PDP-11/70 (1975) and of the DEC VAX-11/780 (1977).
In other documents, however, the metric prefixes and their symbols were used to denote powers of 10, but usually with the understanding that the values given were approximate, often truncated down. Thus, for example, a 1967 document by Control Data Corporation (CDC) abbreviated "2^16 = 64 × 1024 = 65536 words" as "65K words" (rather than "64K" or "66K"), while the documentation of the HP 21MX real-time computer (1974) denoted 3 × 2^16 = 192 × 1024 = 196608 as "196K" and 2^20 = 1048576 as "1M".
These three possible meanings of "k" and "K" ("1024", "1000", or "approximately 1000") were used loosely around the same time, sometimes by the same company. The HP 3000 business computer (1973) could have "64K", "96K", or "128K" bytes of memory. The use of SI prefixes, and the use of "K" instead of "k", remained popular in computer-related publications well into the 21st century, although the ambiguity persisted. The correct meaning was often clear from the context; for instance, in a binary-addressed computer, the true memory size had to be either a power of 2, or a small integer multiple thereof. Thus a "512 megabyte" RAM module was generally understood to have 512 × 1024^2 = 536870912 bytes, rather than 512000000.
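That context-based disambiguation can even be mechanized: whether a quoted size is plausible as a binary memory capacity reduces to a power-of-two test. A hypothetical one-line check, written only to illustrate the point:

```python
# A binary-addressed machine's memory size is (a small multiple of)
# a power of two, so 536870912 is plausible where 512000000 is not.

def is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0

print(is_power_of_two(536_870_912))   # True  (512 * 1024**2)
print(is_power_of_two(512_000_000))   # False
```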
In specifying disk drive capacities, manufacturers have always used conventional decimal SI prefixes representing powers of 10. Storage in a rotating disk drive is organized in platters and tracks whose sizes and counts are determined by mechanical engineering constraints so that the capacity of a disk drive has hardly ever been a simple multiple of a power of 2. For example, the first commercially sold disk drive, the IBM 350 (1956), had 50 physical disk platters containing a total of 50000 sectors of 100 characters each, for a total quoted capacity of 5 million characters.
Moreover, since the 1960s, many disk drives used IBM's disk format, where each track was divided into blocks of user-specified size; and the block sizes were recorded on the disk, subtracting from the usable capacity. For example, the IBM 3336 disk pack was quoted to have a 200-megabyte capacity, achieved only with a single 13030-byte block in each of its 808 × 19 tracks.
Decimal megabytes were used for disk capacity by the CDC in 1974. The Seagate ST-412, one of several types installed in the IBM PC/XT, had a capacity of 10027008 bytes when formatted as 306 × 4 tracks and 32 256-byte sectors per track, which was quoted as "10 MB". Similarly, a "300 GB" hard drive can be expected to offer only slightly more than 300 × 10^9 = 300000000000 bytes, not 300 × 2^30 (which would be about 322 × 10^9 bytes or "322 GB"). The first terabyte (SI prefix, 1000000000000 bytes) hard disk drive was introduced in 2007. Decimal prefixes were generally used by information processing publications when comparing hard disk capacities.
Users must be aware that some programs and operating systems, such as Microsoft Windows and Classic Mac OS, may use "MB" and "GB" to denote binary prefixes even when displaying disk drive capacities. Thus, for example, the capacity of a "10 MB" (decimal "M") disk drive could be reported as "9.56 MB", and that of a "300 GB" drive as "279.4 GB". Good software and documentation should specify clearly whether "K", "M", "G" mean binary or decimal multipliers. Some operating systems, such as Mac OS X, Ubuntu, and Debian, may use "MB" and "GB" to denote decimal prefixes when displaying disk drive capacities.
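The quoted figures follow from dividing the raw byte count by a binary unit while keeping the SI symbol; a minimal sketch reproducing them (the os_reported name is invented, and the 10027008-byte figure is the ST-412 capacity quoted earlier):

```python
# Why a "10 MB" (decimal) drive shows up as "9.56 MB": some software
# divides by the binary unit (2**20 or 2**30) but keeps the SI symbol.

def os_reported(nbytes, unit_symbol):
    binary_unit = {"MB": 2**20, "GB": 2**30}[unit_symbol]
    return f"{nbytes / binary_unit:.2f} {unit_symbol}"

print(os_reported(10_027_008, "MB"))    # -> 9.56 MB   (the ST-412's "10 MB")
print(os_reported(300 * 10**9, "GB"))   # -> 279.40 GB (a "300 GB" drive)
```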
Floppy disks used a variety of formats, and their capacities were usually specified with SI-like prefixes "K" and "M" with either decimal or binary meaning. The capacity of the disks was often specified without accounting for the internal formatting overhead, leading to more irregularities.
The early 8-inch diskette formats could contain less than a megabyte with the capacities of those devices specified in kilobytes, kilobits or megabits.
The 5.25-inch diskette sold with the IBM PC AT could hold 1200 × 1024 = 1228800 bytes, and thus was marketed as "1200 KB" with the binary sense of "KB". However, the capacity was also quoted "1.2 MB", which was a hybrid decimal and binary notation, since the "M" meant 1000 × 1024. The precise value was 1.2288 MB (decimal) or 1.171875 MiB (binary).
The 5.25-inch Apple Disk II had 256 bytes per sector, 13 sectors per track, 35 tracks per side, or a total capacity of 116480 bytes. It was later upgraded to 16 sectors per track, giving a total of 140 × 2^10 = 143360 bytes, which was described as "140KB" using the binary sense of "K".
The most recent version of the physical hardware, the "3.5-inch diskette" cartridge, had 720 512-byte blocks (single-sided). Since two blocks comprised 1024 bytes, the capacity was quoted "360 KB", with the binary sense of "K". On the other hand, the quoted capacity of "1.44 MB" of the High Density ("HD") version was again a hybrid decimal and binary notation, since it meant 1440 pairs of 512-byte sectors, or 1440 × 2^10 = 1474560 bytes. Some operating systems displayed the capacity of those disks using the binary sense of "MB", as "1.4 MB" (which would be 1.4 × 2^20 ≈ 1468000 bytes). User complaints forced both Apple and Microsoft to issue support bulletins explaining the discrepancy.
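The three readings of the high-density floppy capacity can be reproduced with a short calculation (illustrative only):

```python
# The "1.44 MB" floppy holds 1440 KiB = 1,474,560 bytes, which is
# neither 1.44 decimal megabytes nor 1.44 binary mebibytes.

capacity = 1440 * 1024            # 1440 pairs of 512-byte sectors
print(capacity)                   # 1474560
print(capacity / 10**6)           # 1.47456  (decimal megabytes)
print(capacity / 2**20)           # 1.40625  (binary mebibytes)
print(capacity / (1000 * 1024))   # 1.44     (the hybrid "MB")
```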
When specifying the capacities of optical compact discs, "megabyte" and "MB" usually mean 1024^2 bytes. Thus a "700-MB" (or "80-minute") CD has a nominal capacity of about 700 MiB, which is approximately 730 MB (decimal).
On the other hand, capacities of other optical disc storage media like DVD, Blu-ray Disc, HD DVD and magneto-optical (MO) have been generally specified in decimal gigabytes ("GB"), that is, 1000^3 bytes. In particular, a typical "4.7 GB" DVD has a nominal capacity of about 4.7 × 10^9 bytes, which is about 4.38 GiB.
Tape drive and media manufacturers have generally used SI decimal prefixes to specify the maximum capacity, although the actual capacity would depend on the block size used when recording.
Computer clock frequencies are always quoted using SI prefixes in their decimal sense. For example, the internal clock frequency of the original IBM PC was 4.77 MHz, that is 4770000 Hz.
Similarly, digital information transfer rates are quoted using decimal prefixes. The Parallel ATA "100 MB/s" disk interface can transfer 100000000 bytes per second, and a "56 Kb/s" modem transmits 56000 bits per second. Seagate specified the sustained transfer rate of some hard disk drive models with both decimal and IEC binary prefixes. The standard sampling rate of music compact discs, quoted as 44.1 kHz, is indeed 44100 samples per second. A "1 Gb/s" Ethernet interface can receive or transmit up to 10^9 bits per second, or 125000000 bytes per second within each packet. A "56k" modem can encode or decode up to 56000 bits per second.
Decimal SI prefixes are also generally used for processor-memory data transfer speeds. A 64-bit PCI-X bus with a 66 MHz clock can transfer 66000000 64-bit words per second, or 4224000000 bit/s = 528000000 B/s, which is usually quoted as 528 MB/s. A PC3200 memory on a double data rate bus, transferring 8 bytes per cycle with a clock speed of 200 MHz, has a bandwidth of 200000000 × 8 × 2 = 3200000000 B/s, which would be quoted as 3.2 GB/s.
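Both bandwidth figures are simple products of clock rate, transfer width, and transfers per cycle; a quick check (variable names invented for this example):

```python
# Bus bandwidth arithmetic for the two examples in the text.

pci_x = 66_000_000 * (64 // 8)   # 66 MHz clock, 64-bit (8-byte) words
print(pci_x)                     # 528000000 B/s  -> "528 MB/s"

pc3200 = 200_000_000 * 8 * 2     # 200 MHz, 8 bytes/cycle, double data rate
print(pc3200)                    # 3200000000 B/s -> "3.2 GB/s"
```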
The ambiguous usage of the prefixes "kilo" ("K" or "k"), "mega" ("M"), and "giga" ("G"), as meaning either powers of 1000 or (in computer contexts) powers of 1024, has been recorded in popular dictionaries, and even in some obsolete standards, such as ANSI/IEEE 1084-1986 and ANSI/IEEE 1212-1991, IEEE 610.10-1994, and IEEE 100-2000. Some of these standards specifically limited the binary meaning to multiples of "byte" ("B") or "bit" ("b").
Before the IEC standard, several alternative proposals existed for unique binary prefixes, starting in the late 1960s. In 1996, Markus Kuhn proposed the extra prefix "di" and the symbol suffix or subscript "2" to mean "binary"; so that, for example, "one dikilobyte" would mean "1024 bytes", denoted "K2B" or, with a subscript, "K₂B".
In 1968, Donald Morrison proposed to use the Greek letter kappa (κ) to denote 1024, κ^2 to denote 1024^2, and so on. (At the time, memory size was small, and only K was in widespread use.) In the same year, Wallace Givens responded with a suggestion to use bK as an abbreviation for 1024 and bK2 or bK^2 for 1024^2, though he noted that neither the Greek letter nor the lowercase letter b would be easy to reproduce on computer printers of the day. Bruce Alan Martin of Brookhaven National Laboratory proposed that, instead of prefixes, binary powers of two be indicated by the letter B followed by the exponent, similar to E in decimal scientific notation. Thus one would write 3B20 for 3 × 2^20. This convention is still used on some calculators to present binary floating-point numbers today.
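Martin's notation is compact enough that a parser fits in two lines; the helper below is hypothetical, written only to illustrate the convention:

```python
# Hypothetical parser for the "B" exponent notation, where "3B20"
# means 3 * 2**20 (by analogy with "3E20" meaning 3 * 10**20).

def parse_b_notation(s):
    mantissa, exponent = s.split("B")
    return int(mantissa) * 2 ** int(exponent)

print(parse_b_notation("3B20"))   # 3145728
```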
In 1969, Donald Knuth, who uses decimal notation like 1 MB = 1000 kB, proposed that the powers of 1024 be designated as "large kilobytes" and "large megabytes", with abbreviations KKB and MMB. However, the use of double SI prefixes, although rejected by the BIPM, had already been given a multiplicative meaning; so that "1 MMB" could be understood as "(10^6)^2 bytes", that is, "1 TB".
The ambiguous meanings of "kilo", "mega", "giga", etc., have caused significant consumer confusion, especially in the personal computer era. A common source of confusion was the discrepancy between the capacities of hard drives specified by manufacturers, using those prefixes in the decimal sense, and the numbers reported by operating systems and other software that used them in the binary sense, such as the Apple Macintosh in 1984. For example, a hard drive marketed as "1 TB" could be reported as having only "931 GB". The confusion was compounded by the fact that RAM manufacturers used the binary sense too.
The different interpretations of disk size prefixes led to class action lawsuits against digital storage manufacturers. These cases involved both flash memory and hard disk drives.
Early cases (2004–2007) were settled prior to any court ruling with the manufacturers admitting no wrongdoing but agreeing to clarify the storage capacity of their products on the consumer packaging. Accordingly, many flash memory and hard disk manufacturers have disclosures on their packaging and web sites clarifying the formatted capacity of the devices or defining MB as 1 million bytes and 1 GB as 1 billion bytes.
On 20 February 2004, Willem Vroegh filed a lawsuit against Lexar Media, Dane–Elec Memory, Fuji Photo Film USA, Eastman Kodak Company, Kingston Technology Company, Inc., Memorex Products, Inc.; PNY Technologies Inc., SanDisk Corporation, Verbatim Corporation, and Viking Interworks alleging that their descriptions of the capacity of their flash memory cards were false and misleading.
Vroegh claimed that a 256 MB Flash Memory Device had only 244 MB of accessible memory. "Plaintiffs allege that Defendants marketed the memory capacity of their products by assuming that one megabyte equals one million bytes and one gigabyte equals one billion bytes." The plaintiffs wanted the defendants to use the customary values of 1024^2 for megabyte and 1024^3 for gigabyte. The plaintiffs acknowledged that the IEC and IEEE standards define a MB as one million bytes but stated that the industry has largely ignored the IEC standards.
The parties agreed that manufacturers could continue to use the decimal definition so long as the definition was added to the packaging and web sites. The consumers could apply for "a discount of ten percent off a future online purchase from Defendants' Online Stores Flash Memory Device".
On 7 July 2005, an action entitled Orin Safier v. Western Digital Corporation, et al. was filed in the Superior Court for the City and County of San Francisco, Case No. CGC-05-442812. The case was subsequently moved to the Northern District of California, Case No. 05-03353 BZ.
Although Western Digital maintained that their usage of units is consistent with "the indisputably correct industry standard for measuring and describing storage capacity", and that they "cannot be expected to reform the software industry", they agreed to settle in March 2006 with 14 June 2006 as the Final Approval hearing date.
Western Digital offered to compensate customers with a free download of backup and recovery software valued at US$30. They also paid $500000 in fees and expenses to San Francisco lawyers Adam Gutride and Seth Safier, who filed the suit. The settlement called for Western Digital to add a disclaimer to their later packaging and advertising. Western Digital had this footnote in their settlement. "Apparently, Plaintiff believes that he could sue an egg company for fraud for labeling a carton of 12 eggs a 'dozen', because some bakers would view a 'dozen' as including 13 items."
A lawsuit (Cho v. Seagate Technology (US) Holdings, Inc., San Francisco Superior Court, Case No. CGC-06-453195) was filed against Seagate Technology, alleging that Seagate overrepresented the amount of usable storage by 7% on hard drives sold between 22 March 2001 and 26 September 2007. The case was settled without Seagate admitting wrongdoing, but agreeing to supply those purchasers with free backup software or a 5% refund on the cost of the drives.
On 22 January 2020, the district court of the Northern District of California ruled in favor of the defendant, SanDisk, upholding its use of "GB" to mean 1000000000 bytes.
In 1995, the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols (IDCNS) proposed the prefixes "kibi" (short for "kilobinary"), "mebi" ("megabinary"), "gibi" ("gigabinary") and "tebi" ("terabinary"), with respective symbols "kb", "Mb", "Gb" and "Tb", for binary multipliers. The proposal suggested that the SI prefixes should be used only for powers of 10; so that a disk drive capacity of "500 gigabytes", "0.5 terabytes", "500 GB", or "0.5 TB" should all mean 500 × 10^9 bytes, exactly or approximately, rather than 500 × 2^30 (= 536870912000) or 0.5 × 2^40 (= 549755813888).
The proposal was not accepted by IUPAC at the time, but was taken up in 1996 by the Institute of Electrical and Electronics Engineers (IEEE) in collaboration with the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC). The prefixes "kibi", "mebi", "gibi" and "tebi" were retained, but with the symbols "Ki" (with capital "K"), "Mi", "Gi" and "Ti" respectively.
In January 1999, the IEC published this proposal, with additional prefixes "pebi" ("Pi") and "exbi" ("Ei"), as an international standard (IEC 60027-2 Amendment 2). The standard reaffirmed the BIPM's position that the SI prefixes should always denote powers of 10. The third edition of the standard, published in 2005, added prefixes "zebi" and "yobi", thus matching all then-defined SI prefixes with binary counterparts.
The harmonized ISO/IEC 80000-13:2008 standard cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005 (those defining prefixes for binary multiples). The only significant change is the addition of explicit definitions for some quantities. In 2009, the prefixes kibi-, mebi-, etc. were defined by ISO 80000-1 in their own right, independently of the kibibyte, mebibyte, and so on.
The BIPM standard JCGM 200:2012 "International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition" lists the IEC binary prefixes and states "SI prefixes refer strictly to powers of 10, and should not be used for powers of 2. For example, 1 kilobit should not be used to represent 1024 bits (2^10 bits), which is 1 kibibit."
The IEC 60027-2 standard recommended that operating systems and other software be updated to use binary or decimal prefixes consistently, but incorrect usage of SI prefixes for binary multiples is still common. At the time, the IEEE decided that their standards would use the prefixes "kilo", etc. with their metric definitions, but allowed the binary definitions to be used in an interim period as long as such usage was explicitly pointed out on a case-by-case basis.
The IEC standard binary prefixes are supported by other standardization bodies and technical organizations.
The United States National Institute of Standards and Technology (NIST) supports the ISO/IEC standards for "Prefixes for binary multiples" and has a web page documenting them, describing and justifying their use. NIST suggests that in English, the first syllable of the name of the binary-multiple prefix should be pronounced in the same way as the first syllable of the name of the corresponding SI prefix, and that the second syllable should be pronounced as bee. NIST has stated the SI prefixes "refer strictly to powers of 10" and that the binary definitions "should not be used" for them.
As of 2014, the microelectronics industry standards body JEDEC describes the IEC prefixes in its online dictionary, but acknowledges that the SI prefixes and the symbols "K", "M" and "G" are still commonly used with the binary sense for memory sizes.
On 19 March 2005, the IEEE standard IEEE 1541-2002 ("Prefixes for Binary Multiples") was elevated to a full-use standard by the IEEE Standards Association after a two-year trial period. As of April 2008, the IEEE Publications division does not require the use of IEC prefixes in its major magazines such as Spectrum or Computer.
The International Bureau of Weights and Measures (BIPM), which maintains the International System of Units (SI), expressly prohibits the use of SI prefixes to denote binary multiples, and recommends the use of the IEC prefixes as an alternative since units of information are not included in the SI.
The Society of Automotive Engineers (SAE) prohibits the use of SI prefixes with anything but a power-of-1000 meaning, but does not cite the IEC binary prefixes.
The European Committee for Electrotechnical Standardization (CENELEC) adopted the IEC-recommended binary prefixes via the harmonization document HD 60027-2:2003-03. The European Union (EU) has required the use of the IEC binary prefixes since 2007.
Some computer industry participants, such as Hewlett-Packard (HP) and IBM, have adopted or recommended IEC binary prefixes as part of their general documentation policies.
As of 2023, the use of SI prefixes with the binary meanings is still prevalent for specifying the capacity of the main memory of computers, of RAM, ROM, EPROM, and EEPROM chips and modules, and of the cache of computer processors. For example, a "512-megabyte" or "512 MB" memory module holds 512 MiB; that is, 512 × 2^20 bytes, not 512 × 10^6 bytes.
JEDEC continues to include the customary binary definitions of "kilo", "mega", and "giga" in the document Terms, Definitions, and Letter Symbols, and, as of 2010, still used those definitions in their memory standards.
On the other hand, the SI prefixes with powers of ten meanings are generally used for the capacity of external storage units, such as disk drives, solid state drives, and USB flash drives, except for some flash memory chips intended to be used as EEPROMs. However, some disk manufacturers have used the IEC prefixes to avoid confusion. The decimal meaning of SI prefixes is usually also intended in measurements of data transfer rates, and clock speeds.
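Software that reports capacities can support both conventions side by side, as the next paragraph notes some programs do (GNU ls, for instance, offers -h for powers of 1024 and --si for powers of 1000). A sketch of such a formatter, with invented names and no claim to match any particular tool:

```python
# Sketch of a size formatter offering both conventions, similar in
# spirit to GNU ls's -h (binary) versus --si (decimal) options.

def human_size(nbytes, binary=True):
    step = 1024 if binary else 1000
    units = ["KiB", "MiB", "GiB", "TiB"] if binary else ["kB", "MB", "GB", "TB"]
    value = float(nbytes)
    unit = "B"
    for next_unit in units:
        if value < step:
            break
        value /= step
        unit = next_unit
    return f"{value:.1f} {unit}"

print(human_size(1_474_560, binary=True))    # 1.4 MiB
print(human_size(1_474_560, binary=False))   # 1.5 MB
```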
Some operating systems and other software use either the IEC binary multiplier symbols ("Ki", "Mi", etc.) or the SI multiplier symbols ("k", "M", "G", etc.) with decimal meaning. Some programs, such as the Linux/GNU ls command, let the user choose between binary or decimal multipliers. However, some continue to use the SI symbols with the binary meanings, even when reporting disk or file sizes. Some programs may also use "K" instead of "k", with either meaning. | [
{
"paragraph_id": 0,
"text": "A binary prefix is a unit prefix that indicates a multiple of a unit of measurement by an integer power of two. The most commonly used binary prefixes are kibi (symbol Ki, meaning 2 = 1024), mebi (Mi, 2 = 1048576), and gibi (Gi, 2 = 1073741824). They are most often used in information technology as multipliers of bit and byte, when expressing the capacity of storage devices or the size of computer files.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The binary prefixes \"kibi\", \"mebi\", etc. were defined in 1999 by the International Electrotechnical Commission (IEC), in the IEC 60027-2 standard (Amendment 2). They were meant to replace the metric (SI) decimal power prefixes, such as \"kilo\" (\"k\", 10 = 1000), \"mega\" (\"M\", 10 = 1000000) and \"giga\" (\"G\", 10 = 1000000000), that were commonly used in the computer industry to indicate the nearest powers of two. For example, a memory module whose capacity was specified by the manufacturer as \"2 megabytes\" or \"2 MB\" would hold 2 × 2 = 2097152 bytes, instead of 2 × 10 = 2000000.",
"title": ""
},
{
"paragraph_id": 2,
"text": "On the other hand, a hard disk whose capacity is specified by the manufacturer as \"10 gigabytes\" or \"10 GB\", holds 10 × 10 = 10000000000 bytes, or a little more than that, but less than 10 × 2 = 10737418240 and a file whose size is listed as \"2.3 GB\" may have a size closer to 2.3 × 2 ≈ 2470000000 or to 2.3 × 10 = 2300000000, depending on the program or operating system providing that measurement. This kind of ambiguity is often confusing to computer system users and has resulted in lawsuits. The IEC 60027-2 binary prefixes have been incorporated in the ISO/IEC 80000 standard and are supported by other standards bodies, including the BIPM, which defines the SI system, the US NIST, and the European Union.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Prior to the 1999 IEC standard, some industry organizations, such as the Joint Electron Device Engineering Council (JEDEC), attempted to redefine the terms kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB in the binary sense, for use in storage capacity measurements. However, other computer industry sectors (such as magnetic storage) continued using those same terms and symbols with the decimal meaning. Since then, the major standards organizations have expressly disapproved the use of SI prefixes to denote binary multiples, and recommended or mandated the use of the IEC prefixes for that purpose, but the use of SI prefixes has persisted in some fields.",
"title": ""
},
{
"paragraph_id": 4,
"text": "While the binary prefixes are almost always used with the units of information, bits and bytes, they may be used with any other unit of measure, when convenient. For example, in signal processing one may need binary multiples of the frequency unit hertz (Hz), for example the kibihertz (KiHz), equal to 1024 Hz.",
"title": ""
},
{
"paragraph_id": 5,
"text": "",
"title": "Definitions"
},
{
"paragraph_id": 6,
"text": "In 2022, the International Bureau of Weights and Measures (BIPM) adopted the decimal prefixes ronna for 1000 and quetta for 1000. In analogy to the existing binary prefixes, a consultation paper of the International Committee for Weights and Measures' Consultative Committee for Units (CCU) suggested the prefixes robi (Ri, 1024) and quebi (Qi, 1024) for their binary counterparts, but as of 2022, no corresponding binary prefixes have been adopted.",
"title": "Definitions"
},
{
"paragraph_id": 7,
"text": "The relative difference between the values in the binary and decimal interpretations increases, when using the SI prefixes as the base, from 2.4% for kilo to nearly 27% for the quetta prefix. Although the prefixes ronna and quetta have been defined, as of 2022 no names have been officially assigned to the corresponding binary prefixes.",
"title": "Comparison of binary and decimal prefixes"
},
{
"paragraph_id": 8,
"text": "The original metric system adopted by France in 1795 included two binary prefixes named double- (2×) and demi- (1/2×). However, these were not retained when the SI prefixes were internationally adopted by the 11th CGPM conference in 1960.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Early computers used one of two addressing methods to access the system memory; binary (base 2) or decimal (base 10). For example, the IBM 701 (1952) used a binary methods and could address 2048 words of 36 bits each, while the IBM 702 (1953) used a decimal system, and could address ten thousand 7-bit words.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "By the mid-1960s, binary addressing had become the standard architecture in most computer designs, and main memory sizes were most commonly powers of two. This is the most natural configuration for memory, as all combinations of states of their address lines map to a valid address, allowing easy aggregation into a larger block of memory with contiguous addresses.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "While early documentation specified those memory sizes as exact numbers such as 4096, 8192, or 16384 units (usually words, bytes, or bits), computer professionals also started using the long-established metric system prefixes \"kilo\", \"mega\", \"giga\", etc., defined to be powers of 10, to mean instead the nearest powers of two; namely, 2 = 1024, 2 = 1024, 2 = 1024, etc. The corresponding metric prefix symbols (\"k\", \"M\", \"G\", etc.) where used with the same binary meanings. The symbol for 2 = 1024 could be written either in lower case (\"k\") or in uppercase (\"K\"). The latter was often used intentionally to indicate the binary rather than decimal meaning. This convention, which could not be extended to higher powers, was widely used in the documentation of the IBM 360 (1964) and of the IBM System/370 (1972), of the CDC 7600, of the DEC PDP-11/70 (1975) and of the DEC VAX-11/780 (1977).",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In other documents, however, the metric prefixes and their symbols were used to denote powers of 10, but usually with the understanding that the values given were approximate, often truncated down. Thus, for example, a 1967 document by Control Data Corporation (CDC) abbreviated \"2 = 64 × 1024 = 65536 words\" as \"65K words\" (rather than \"64K\" or \"66K\"),, while the documentation of the HP 21MX real-time computer (1974) denoted 3 × 2 = 192 × 1024 = 196608 as \"196K\" and 2 = 1048576 as \"1M\".",
"title": "History"
},
{
"paragraph_id": 13,
"text": "These three possible meanings of \"k\" and \"K\" (\"1024\", \"1000\", or \"approximately 1000\") were used loosely around the same time, sometimes by the same company. The HP 3000 business computer (1973) could have \"64K\", \"96K\", or \"128K\" bytes of memory. The use of SI prefixes, and the use of \"K\" instead of \"k\" remained popular in computer-related publications well into the 21st century, although the ambiguity persisted. The correct meaning was often clear from the context; for instance, in a binary-addressed computer, the true memory size had to be either a power of 2, or a small integer multiple thereof. Thus a \"512 megabyte\" RAM module was generally understood to have 512 × 1024 = 536870912 bytes, rather than 512000000.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In specifying disk drive capacities, manufacturers have always used conventional decimal SI prefixes representing powers of 10. Storage in a rotating disk drive is organized in platters and tracks whose sizes and counts are determined by mechanical engineering constraints so that the capacity of a disk drive has hardly ever been a simple multiple of a power of 2. For example, the first commercially sold disk drive, the IBM 350 (1956), had 50 physical disk platters containing a total of 50000 sectors of 100 characters each, for a total quoted capacity of 5 million characters.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Moreover, since the 1960s, many disk drives used IBM's disk format, where each track was divided into blocks of user-specified size; and the block sizes were recorded on the disk, subtracting from the usable capacity. For example, the|IBM 3336]] disk pack was quoted to have a 200-megabyte capacity, achieved only with a single 13030-byte block in each of its 808 x 19 tracks.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Decimal megabytes were used for disk capacity by the CDC in 1974. The Seagate ST-412, one of several types installed in the IBM PC/XT, had a capacity of 10027008 bytes when formatted as 306 × 4 tracks and 32 256-byte sectors per track, which was quoted as \"10 MB\". Similarly, a \"300 GB\" hard drive can be expected to offer only slightly more than 300×10 = 300000000000, bytes, not 300 × 2 (which would be about 322×10 bytes or \"322 GB\"). The first terabyte (SI prefix, 1000000000000 bytes) hard disk drive was introduced in 2007. Decimal prefixes were generally used by information processing publications when comparing hard disk capacities.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Users must be aware that some programs and operating systems, such as Microsoft Windows and Classic Mac OS, may use \"MB\" and \"GB\" to denote binary prefixes even when displaying disk drive capacities. Thus, for example, the capacity of a \"10 MB\" (decimal \"M\") disk drive could be reported as \"9.56 MB\", and that of a \"300 GB\" drive as \"279.4 GB\". Good software and documentation should specify clearly whether \"K\", \"M\", \"G\" mean binary or decimal multipliers. Some operating systems, such as Mac OS X, Ubuntu, and Debian, may use \"MB\" and \"GB\" to denote decimal prefixes when displaying disk drive capacities.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Floppy disks used a variety of formats, and their capacities was usually specified with SI-like prefixes \"K\" and \"M\" with either decimal or binary meaning. The capacity of the disks was often specified without accounting for the internal formatting overhead, leading to more irregularities.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The early 8-inch diskette formats could contain less than a megabyte with the capacities of those devices specified in kilobytes, kilobits or megabits.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The 5.25-inch diskette sold with the IBM PC AT could hold 1200 × 1024 = 1228800 bytes, and thus was marketed as \"1200 KB\" with the binary sense of \"KB\". However, the capacity was also quoted \"1.2 MB\", which was a hybrid decimal and binary notation, since the \"M\" meant 1000 × 1024. The precise value was 1.2288 MB (decimal) or 1.171875 MiB (binary).",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The 5.25-inch Apple Disk II had 256 bytes per sector, 13 sectors per track, 35 tracks per side, or a total capacity of 116480 bytes. It was later upgraded to 16 sectors per track, giving a total of 140 × 2 = 143360 bytes, which was described as \"140KB\" using the binary sense of \"K\".",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The most recent version of the physical hardware, the \"3.5-inch diskette\" cartridge, had 720 512-byte blocks (single-sided). Since two blocks comprised 1024 bytes, the capacity was quoted \"360 KB\", with the binary sense of \"K\". On the other hand, the quoted capacity of \"1.44 MB\" of the High Density (\"HD\") version was again a hybrid decimal and binary notation, since it meant 1440 pairs of 512-byte sectors, or 1440 × 2 = 1474560 bytes. Some operating systems displayed the capacity of those disks using the binary sense of \"MB\", as \"1.4 MB\" (which would be 1.4 × 2 ≈ 1468000 bytes). User complaints forced both Apple and Microsoft to issue support bulletins explaining the discrepancy.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "When specifying the capacities of optical compact discs, \"megabyte\" and \"MB\" usually mean 1024 bytes. Thus a \"700-MB\" (or \"80-minute\") CD has a nominal capacity of about 700 MiB, which is approximately 730 MB (decimal).",
"title": "History"
},
{
"paragraph_id": 24,
"text": "On the other hand, capacities of other optical disc storage media like DVD, Blu-ray Disc, HD DVD and magneto-optical (MO) have been generally specified in decimal gigabytes (\"GB\"), that is, 1000 bytes. In particular, a typical \"4.7 GB\" DVD has a nominal capacity of about 4.7 × 10 bytes, which is about 4.38 GiB.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Tape drive and media manufacturers have generally used SI decimal prefixes to specify the maximum capacity, although the actual capacity would depend on the block size used when recording.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Computer clock frequencies are always quoted using SI prefixes in their decimal sense. For example, the internal clock frequency of the original IBM PC was 4.77 MHz, that is 4770000 Hz.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Similarly, digital information transfer rates are quoted using decimal prefixe. The Parallel ATA \"100 MB/s\" disk interface can transfer 100000000 bytes per second, and a \"56 Kb/s\" modem transmits 56000 bits per second. Seagate specified the sustained transfer rate of some hard disk drive models with both decimal and IEC binary prefixes. The standard sampling rate of music compact disks, quoted as 44.1 kHz, is indeed 44100 samples per second. A \"1 Gb/s\" Ethernet interface can receive or transmit up to 10 bits per second, or 125000000 bytes per second within each packet. A \"56k\" modem can encode or decode up to 56000 bits per second.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Decimal SI prefixes are also generally used for processor-memory data transfer speeds. A PCI-X bus with 66 MHz clock and 64 bits wide can transfer 66000000 64-bit words per second, or 4224000000 bit/s = 528000000 B/s, which is usually quoted as 528 MB/s. A PC3200 memory on a double data rate bus, transferring 8 bytes per cycle with a clock speed of 200 MHz has a bandwidth of 200000000 × 8 × 2 = 3200000000 B/s, which would be quoted as 3.2 GB/s.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "The ambiguous usage of the prefixes \"kilo (\"K\" or \"k\"), \"mega\" (\"M\"), and \"giga\" (\"G\"), as meaning both powers of 1000 or (in computer contexts) of 1024, has been recorded in popular dictionaries, and even in some obsolete standards, such as ANSI/IEEE 1084-1986 and ANSI/IEEE 1212-1991, IEEE 610.10-1994, and IEEE 100-2000. Some of these standards specifically limited the binary meaning to multiples of \"byte\" (\"B\") or \"bit\" (\"b\").",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Before the IEC standard, several alternative proposals existed for unique binary prefixes, starting in the late 1960s. In 1996, Markus Kuhn proposed the extra prefix \"di\" and the symbol suffix or subscript \"2\" to mean \"binary\"; so that, for example, \"one dikilobyte\" would mean \"1024 bytes\", denoted \"K2B\" or \"K2B\".",
"title": "History"
},
{
"paragraph_id": 31,
"text": "In 1968, Donald Morrison proposed to use the Greek letter kappa (κ) to denote 1024, κ to denote 1024, and so on. (At the time, memory size was small, and only K was in widespread use.) In the same year, Wallace Givens responded with a suggestion to use bK as an abbreviation for 1024 and bK2 or bK for 1024, though he noted that neither the Greek letter nor lowercase letter b would be easy to reproduce on computer printers of the day. Bruce Alan Martin of Brookhaven National Laboratory proposed that, instead of prefixes, binary powers of two were indicated by the letter B followed by the exponent, similar to E in decimal scientific notation. Thus one would write 3B20 for 3 × 2. This convention is still used on some calculators to present binary floating point-numbers today.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "In 1969, Donald Knuth, who uses decimal notation like 1 MB = 1000 kB, proposed that the powers of 1024 be designated as \"large kilobytes\" and \"large megabytes\", with abbreviations KKB and MMB. However, the use of double SI prefixes, although rejected by the BIPM, had already been given a multiplicative meaning; so that \"1 MMB\" could be understood as \"(10) bytes, that is, \"1 TB\".",
"title": "History"
},
{
"paragraph_id": 33,
"text": "The ambiguous meanings of \"kilo\", \"mega\", \"giga\", etc., has caused significant consumer confusion, especially in the personal computer era. A common source of confusion was the discrepancy between the capacities of hard drives specified by manufacturers, using those prefixes in the decimal sense, and the numbers reported by operating systems and other software, that used them in the binary sense, such as the Apple in 1984. For example, a hard drive marketed as \"1 TB\" could be reported as having only \"931 GB\". The confusion was compounded by fact that RAM manufacturers used the binary sense too.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "The different interpretations of disk size prefixes led to class action lawsuits against digital storage manufacturers. These cases involved both flash memory and hard disk drives.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "Early cases (2004–2007) were settled prior to any court ruling with the manufacturers admitting no wrongdoing but agreeing to clarify the storage capacity of their products on the consumer packaging. Accordingly, many flash memory and hard disk manufacturers have disclosures on their packaging and web sites clarifying the formatted capacity of the devices or defining MB as 1 million bytes and 1 GB as 1 billion bytes.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "On 20 February 2004, Willem Vroegh filed a lawsuit against Lexar Media, Dane–Elec Memory, Fuji Photo Film USA, Eastman Kodak Company, Kingston Technology Company, Inc., Memorex Products, Inc.; PNY Technologies Inc., SanDisk Corporation, Verbatim Corporation, and Viking Interworks alleging that their descriptions of the capacity of their flash memory cards were false and misleading.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "Vroegh claimed that a 256 MB Flash Memory Device had only 244 MB of accessible memory. \"Plaintiffs allege that Defendants marketed the memory capacity of their products by assuming that one megabyte equals one million bytes and one gigabyte equals one billion bytes.\" The plaintiffs wanted the defendants to use the customary values of 1024 for megabyte and 1024 for gigabyte. The plaintiffs acknowledged that the IEC and IEEE standards define a MB as one million bytes but stated that the industry has largely ignored the IEC standards.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "The parties agreed that manufacturers could continue to use the decimal definition so long as the definition was added to the packaging and web sites. The consumers could apply for \"a discount of ten percent off a future online purchase from Defendants' Online Stores Flash Memory Device\".",
"title": "History"
},
{
"paragraph_id": 39,
"text": "On 7 July 2005, an action entitled Orin Safier v. Western Digital Corporation, et al. was filed in the Superior Court for the City and County of San Francisco, Case No. CGC-05-442812. The case was subsequently moved to the Northern District of California, Case No. 05-03353 BZ.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "Although Western Digital maintained that their usage of units is consistent with \"the indisputably correct industry standard for measuring and describing storage capacity\", and that they \"cannot be expected to reform the software industry\", they agreed to settle in March 2006 with 14 June 2006 as the Final Approval hearing date.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "Western Digital offered to compensate customers with a free download of backup and recovery software valued at US$30. They also paid $500000 in fees and expenses to San Francisco lawyers Adam Gutride and Seth Safier, who filed the suit. The settlement called for Western Digital to add a disclaimer to their later packaging and advertising. Western Digital had this footnote in their settlement. \"Apparently, Plaintiff believes that he could sue an egg company for fraud for labeling a carton of 12 eggs a 'dozen', because some bakers would view a 'dozen' as including 13 items.\"",
"title": "History"
},
{
"paragraph_id": 42,
"text": "A lawsuit (Cho v. Seagate Technology (US) Holdings, Inc., San Francisco Superior Court, Case No. CGC-06-453195) was filed against Seagate Technology, alleging that Seagate overrepresented the amount of usable storage by 7% on hard drives sold between 22 March 2001 and 26 September 2007. The case was settled without Seagate admitting wrongdoing, but agreeing to supply those purchasers with free backup software or a 5% refund on the cost of the drives.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "On 22 January 2020, the district court of the Northern District of California ruled in favor of the defendant, SanDisk, upholding its use of \"GB\" to mean 1000000000 bytes.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "In 1995, the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols (IDCNS) proposed the prefixes \"kibi\" (short for \"kilobinary\"), \"mebi\" (\"megabinary\"), \"gibi\" (\"gigabinary\") and \"tebi\" (\"terabinary\"), with respective symbols \"kb\", \"Mb\", \"Gb\" and \"Tb\", for binary multipliers. The proposal suggested that the SI prefixes should be used only for powers of 10; so that a disk drive capacity of \"500 gigabytes\", \"0.5 terabytes\", \"500 GB\", or \"0.5 TB\" should all mean 500 × 10 bytes, exactly or approximately, rather than 500 × 2 (= 536870912000) or 0.5 × 2 (= 549755813888).",
"title": "History"
},
{
"paragraph_id": 45,
"text": "The proposal was not accepted by IUPAC at the time, but was taken up in 1996 by the Institute of Electrical and Electronics Engineers (IEEE) in collaboration with the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC). The prefixes \"kibi\", \"mebi\", \"gibi\" and \"tebi\" were retained, but with the symbols \"Ki\" (with capital \"K\"), \"Mi\", \"Gi\" and \"Ti\" respectively.",
"title": "History"
},
{
"paragraph_id": 46,
"text": "In January 1999, the IEC published this proposal, with additional prefixes \"pebi\" (\"Pi\") and \"exbi\" (\"Ei\"), as an international standard (IEC 60027-2 Amendment 2) The standard reaffirmed the BIPM's position that the SI prefixes should always denote powers of 10. The third edition of the standard, published in 2005, added prefixes \"zebi\" and \"yobi\", thus matching all then-defined SI prefixes with binary counterparts.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "The harmonized ISO/IEC IEC 80000-13:2008 standard cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005 (those defining prefixes for binary multiples). The only significant change is the addition of explicit definitions for some quantities. In 2009, the prefixes kibi-, mebi-, etc. were defined by ISO 80000-1 in their own right, independently of the kibibyte, mebibyte, and so on.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "The BIPM standard JCGM 200:2012 \"International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition\" lists the IEC binary prefixes and states \"SI prefixes refer strictly to powers of 10, and should not be used for powers of 2. For example, 1 kilobit should not be used to represent 1024 bits (2 bits), which is 1 kibibit.\"",
"title": "History"
},
{
"paragraph_id": 49,
"text": "The IEC 60027-2 standard recommended operating systems and other software were updated to use binary or decimal prefixes consistently, but incorrect usage of SI prefixes for binary multiples is still common. At the time, the IEEE decided that their standards would use the prefixes \"kilo\", etc. with their metric definitions, but allowed the binary definitions to be used in an interim period as long as such usage was explicitly pointed out on a case-by-case basis.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "The IEC standard binary prefixes are supported by other standardization bodies and technical organizations.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "The United States National Institute of Standards and Technology (NIST) supports the ISO/IEC standards for \"Prefixes for binary multiples\" and has a web page documenting them, describing and justifying their use. NIST suggests that in English, the first syllable of the name of the binary-multiple prefix should be pronounced in the same way as the first syllable of the name of the corresponding SI prefix, and that the second syllable should be pronounced as bee. NIST has stated the SI prefixes \"refer strictly to powers of 10\" and that the binary definitions \"should not be used\" for them.",
"title": "History"
},
{
"paragraph_id": 52,
"text": "As of 2014, the microelectronics industry standards body JEDEC describes the IEC prefixes in its online dictionary, but acknowledges that the SI prefixes and the symbols \"K\", \"M\" and \"G\" are still commonly used with the binary sense for memory sizes.",
"title": "History"
},
{
"paragraph_id": 53,
"text": "On 19 March 2005, the IEEE standard IEEE 1541-2002 (\"Prefixes for Binary Multiples\") was elevated to a full-use standard by the IEEE Standards Association after a two-year trial period. as of April 2008, the IEEE Publications division does not require the use of IEC prefixes in its major magazines such as Spectrum or Computer.",
"title": "History"
},
{
"paragraph_id": 54,
"text": "The International Bureau of Weights and Measures (BIPM), which maintains the International System of Units (SI), expressly prohibits the use of SI prefixes to denote binary multiples, and recommends the use of the IEC prefixes as an alternative since units of information are not included in the SI.",
"title": "History"
},
{
"paragraph_id": 55,
"text": "The Society of Automotive Engineers (SAE) prohibits the use of SI prefixes with anything but a power-of-1000 meaning, but does not cite the IEC binary prefixes.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "The European Committee for Electrotechnical Standardization (CENELEC) adopted the IEC-recommended binary prefixes via the harmonization document HD 60027-2:2003-03. The European Union (EU) has required the use of the IEC binary prefixes since 2007.",
"title": "History"
},
{
"paragraph_id": 57,
"text": "Some computer industry participants, such as Hewlett-Packard (HP), and IBM have adopted or recommended IEC binary prefixes as part of their general documentation policies.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "As of 2023, the use of SI prefixes with the binary meanings is still prevalent for specifying the capacity of the main memory of computers, of RAM, ROM, EPROM, and EEPROM chips and modules, and of the cache of computer processors. For example, a \"512-megabyte\" or \"512 MB\" memory module holds 512 MiB; that is, 512 × 2 bytes, not 512 × 10 bytes.",
"title": "History"
},
{
"paragraph_id": 59,
"text": "JEDEC continues to include the customary binary definitions of \"kilo\", \"mega\", and \"giga\" in the document Terms, Definitions, and Letter Symbols, and, as of 2010, still used those definitions in their memory standards.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "On the other hand, the SI prefixes with powers of ten meanings are generally used for the capacity of external storage units, such as disk drives, solid state drives, and USB flash drives, except for some flash memory chips intended to be used as EEPROMs. However, some disk manufacturers have used the IEC prefixes to avoid confusion. The decimal meaning of SI prefixes is usually also intended in measurements of data transfer rates, and clock speeds.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "Some operating systems and other software use either the IEC binary multiplier symbols (\"Ki\", \"Mi\", etc.) or the SI multiplier symbols (\"k\", \"M\", \"G\", etc.) with decimal meaning. Some programs, such as the Linux/GNU ls command, let the user choose between binary or decimal multipliers. However, some continue to use the SI symbols with the binary meanings, even when reporting disk or file sizes. Some programs may also use \"K\" instead of \"k\", with either meaning.",
"title": "History"
}
] | A binary prefix is a unit prefix that indicates a multiple of a unit of measurement by an integer power of two. The most commonly used binary prefixes are kibi (symbol Ki, meaning 2^10 = 1024), mebi (Mi, 2^20 = 1048576), and gibi (Gi, 2^30 = 1073741824). They are most often used in information technology as multipliers of bit and byte, when expressing the capacity of storage devices or the size of computer files. The binary prefixes "kibi", "mebi", etc. were defined in 1999 by the International Electrotechnical Commission (IEC), in the IEC 60027-2 standard (Amendment 2). They were meant to replace the metric (SI) decimal power prefixes, such as "kilo" ("k", 10^3 = 1000), "mega" ("M", 10^6 = 1000000) and "giga" ("G", 10^9 = 1000000000), that were commonly used in the computer industry to indicate the nearest powers of two. For example, a memory module whose capacity was specified by the manufacturer as "2 megabytes" or "2 MB" would hold 2 × 2^20 = 2097152 bytes, instead of 2 × 10^6 = 2000000. On the other hand, a hard disk whose capacity is specified by the manufacturer as "10 gigabytes" or "10 GB", holds 10 × 10^9 = 10000000000 bytes, or a little more than that, but less than 10 × 2^30 = 10737418240 and a file whose size is listed as "2.3 GB" may have a size closer to 2.3 × 2^30 ≈ 2470000000 or to 2.3 × 10^9 = 2300000000, depending on the program or operating system providing that measurement. This kind of ambiguity is often confusing to computer system users and has resulted in lawsuits. The IEC 60027-2 binary prefixes have been incorporated in the ISO/IEC 80000 standard and are supported by other standards bodies, including the BIPM, which defines the SI system, the US NIST, and the European Union. Prior to the 1999 IEC standard, some industry organizations, such as the Joint Electron Device Engineering Council (JEDEC), attempted to redefine the terms kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB in the binary sense, for use in storage capacity measurements. However, other computer industry sectors (such as magnetic storage) continued using those same terms and symbols with the decimal meaning. Since then, the major standards organizations have expressly disapproved the use of SI prefixes to denote binary multiples, and recommended or mandated the use of the IEC prefixes for that purpose, but the use of SI prefixes has persisted in some fields. While the binary prefixes are almost always used with the units of information, bits and bytes, they may be used with any other unit of measure, when convenient. For example, in signal processing one may need binary multiples of the frequency unit hertz (Hz), for example the kibihertz (KiHz), equal to 1024 Hz. | 2001-08-23T13:50:59Z | 2023-12-28T03:34:28Z | [
"Template:As of",
"Template:Citation needed",
"Template:Reflist",
"Template:About",
"Template:Bit and byte prefixes",
"Template:Nowrap",
"Template:Asof",
"Template:Cite web",
"Template:Val",
"Template:See also",
"Template:Sfrac",
"Template:Cite journal",
"Template:Cite press release",
"Template:Computer Storage Volumes",
"Template:Short description",
"Template:Use dmy dates",
"Template:Rp",
"Template:Webarchive",
"Template:Anchor",
"Template:Cn",
"Template:Cite magazine"
] | https://en.wikipedia.org/wiki/Binary_prefix |
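Illustrative aside, not part of the dataset: the decimal-versus-binary multiplier distinction discussed in the "Binary prefix" entry above can be made concrete with a short Python sketch. The helper human_readable is hypothetical; the comment about GNU ls reflects its documented -h (powers of 1024) and --si (powers of 1000) options.

# Sketch: format a byte count with binary (IEC) or decimal (SI) multipliers,
# mirroring the GNU ls -lh (binary) versus ls -lh --si (decimal) conventions.
def human_readable(n_bytes: int, binary: bool = True) -> str:
    base = 1024 if binary else 1000
    units = ["B", "KiB", "MiB", "GiB", "TiB"] if binary else ["B", "kB", "MB", "GB", "TB"]
    value = float(n_bytes)
    for unit in units:
        if value < base or unit == units[-1]:
            return f"{value:.1f} {unit}"
        value /= base
    return f"{value:.1f} {units[-1]}"  # unreachable; kept for completeness

size = 2_300_000_000                       # a "2.3 GB" file in the decimal sense
print(human_readable(size, binary=False))  # 2.3 GB  (SI, powers of ten)
print(human_readable(size, binary=True))   # 2.1 GiB (IEC, powers of two)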
4,078 | National Baseball Hall of Fame and Museum | The National Baseball Hall of Fame and Museum is a history museum and hall of fame in Cooperstown, New York, operated by private interests. It serves as the central point of the history of baseball in the United States and displays baseball-related artifacts and exhibits, honoring those who have excelled in playing, managing, and serving the sport. The Hall's motto is "Preserving History, Honoring Excellence, Connecting Generations". Cooperstown is often used as shorthand (or a metonym) for the National Baseball Hall of Fame and Museum.
The Hall of Fame was established in 1939 by Stephen Carlton Clark, an heir to the Singer Sewing Machine fortune. Clark sought to bring tourists to a city hurt by the Great Depression, which reduced the local tourist trade, and Prohibition, which devastated the local hops industry. Clark constructed the Hall of Fame's building, which was dedicated on June 12, 1939. (His granddaughter, Jane Forbes Clark, is the current chairman of the board of directors.) The erroneous claim that Civil War hero Abner Doubleday invented baseball in Cooperstown was instrumental in the early marketing of the Hall.
An expanded library and research facility opened in 1994. Dale Petroskey became the organization's president in 1999. In 2002, the Hall launched Baseball as America, a traveling exhibit that toured ten American museums over six years. The Hall of Fame has since also sponsored educational programming on the Internet to bring the Hall of Fame to schoolchildren who might not visit. The Hall and Museum completed a series of renovations in spring 2005. The Hall of Fame also presents an annual exhibit at FanFest at the Major League Baseball All-Star Game.
Among baseball fans, "Hall of Fame" means not only the museum and facility in Cooperstown, New York, but the pantheon of players, managers, umpires, executives, and pioneers who have been inducted into the Hall. The first five men elected were Ty Cobb, Babe Ruth, Honus Wagner, Christy Mathewson and Walter Johnson, chosen in 1936; roughly 20 more were selected before the entire group was inducted at the Hall's 1939 opening. As of January 2023, 342 people had been elected to the Hall of Fame, including 241 former Major League Baseball players, 39 Negro league baseball players and executives, 23 managers, 10 umpires, and 36 pioneers, executives, and organizers. One hundred eighteen members of the Hall of Fame have been inducted posthumously, including four who died after their selection was announced. Of the 39 Negro league members, 31 were inducted posthumously, including all 26 selected since the 1990s. The Hall of Fame includes one female member, Effa Manley.
The newest member, elected on December 3, 2023, is manager Jim Leyland.
In 2019, former Yankees closer Mariano Rivera became the first player to be elected unanimously. Derek Jeter, Marvin Miller, Ted Simmons, and Larry Walker were to be inducted in 2020, but their induction ceremony was delayed by the COVID-19 pandemic until September 8, 2021. The ceremony was open to the public, as COVID restrictions had been lifted.
Players are currently inducted into the Hall of Fame through election by either the Baseball Writers' Association of America (or BBWAA), or the Veterans Committee, which now consists of four subcommittees, each of which considers and votes for candidates from a separate era of baseball. Five years after retirement, any player with 10 years of major league experience who passes a screening committee (which removes from consideration players of clearly lesser qualification) is eligible to be elected by BBWAA members with 10 years' membership or more who also have been actively covering MLB at any time in the 10 years preceding the election (the latter requirement was added for the 2016 election). From a final ballot typically including 25–40 candidates, each writer may vote for up to 10 players; until the late 1950s, voters were advised to cast votes for the maximum 10 candidates. Any player named on 75% or more of all ballots cast is elected. A player who is named on fewer than 5% of ballots is dropped from future elections. In some instances, the screening committee had restored their names to later ballots, but in the mid-1990s, dropped players were made permanently ineligible for Hall of Fame consideration, even by the Veterans Committee. A 2001 change in the election procedures restored the eligibility of these dropped players; while their names will not appear on future BBWAA ballots, they may be considered by the Veterans Committee. Players receiving 5% or more of the votes but fewer than 75% are reconsidered annually until a maximum of ten years of eligibility (lowered from fifteen years for the 2015 election).
Under special circumstances, certain players may be deemed eligible for induction even though they have not met all requirements. Addie Joss was elected in 1978, despite only playing nine seasons before he died of meningitis. Additionally, if an otherwise eligible player dies before his fifth year of retirement, then that player may be placed on the ballot at the first election at least six months after his death. Roberto Clemente set the precedent: the writers put him up for consideration after his death on New Year's Eve, 1972, and he was inducted in 1973.
The five-year waiting period was established in 1954 after an evolutionary process. In 1936 all players were eligible, including active ones. From the 1937 election until the 1945 election, there was no waiting period, so any retired player was eligible, but writers were discouraged from voting for current major leaguers. Since there was no formal rule preventing a writer from casting a ballot for an active player, the scribes did not always comply with the informal guideline; Joe DiMaggio received a vote in 1945, for example. From the 1946 election until the 1954 election, an official one-year waiting period was in effect. (DiMaggio, for example, retired after the 1951 season and was first eligible in the 1953 election.) The modern rule establishing a wait of five years was passed in 1954, although those who had already been eligible under the old rule were grandfathered into the ballot, thus permitting Joe DiMaggio to be elected within four years of his retirement.
Z is for ZenithThe summit of fame.These men are up there.These men are the game.
— Ogden Nash, Sport magazine (January 1949)
Contrary to popular belief, no formal exception was made for Lou Gehrig (other than to hold a special one-man election for him): there was no waiting period at that time, and Gehrig met all other qualifications, so he would have been eligible for the next regular election after he retired during the 1939 season. However, the BBWAA decided to hold a special election at the 1939 Winter Meetings in Cincinnati, specifically to elect Gehrig (most likely because it was known that he was terminally ill, making it uncertain that he would live long enough to see another election). Nobody else was on that ballot, and the numerical results have never been made public. Since no elections were held in 1940 or 1941, the special election permitted Gehrig to enter the Hall while still alive.
If a player fails to be elected by the BBWAA within 10 years of his eligibility for election, he may be selected by the Veterans Committee. Following changes to the election process for that body made in 2010 and 2016, the Veterans Committee is now responsible for electing all otherwise eligible candidates who are not eligible for the BBWAA ballot — both long-retired players and non-playing personnel (managers, umpires, and executives). From 2011 to 2016, each candidate could be considered once every three years; now, the frequency depends on the era in which an individual made his greatest contributions. A more complete discussion of the new process is available below.
From 2008 to 2010, following changes made by the Hall in July 2007, the main Veterans Committee, then made up of living Hall of Famers, voted only on players whose careers began in 1943 or later. These changes also established three separate committees to select other figures:
Players of the Negro leagues have also been considered at various times, beginning in 1971. In 2005, the Hall completed a study on African American players between the late 19th century and the integration of the major leagues in 1947, and conducted a special election for such players in February 2006; seventeen figures from the Negro leagues were chosen in that election, in addition to the eighteen previously selected. Following the 2010 changes, Negro leagues figures were primarily considered for induction alongside other figures from the 1871–1946 era, called the "Pre-Integration Era" by the Hall; since 2016, Negro leagues figures are primarily considered alongside other figures from what the Hall calls the "Early Baseball" era (1871–1949).
Predictably, the selection process catalyzes endless debate among baseball fans over the merits of various candidates. Even players elected years ago remain the subjects of discussions as to whether they deserved election. For example, Bill James' 1994 book Whatever Happened to the Hall of Fame? goes into detail about who he believes does and does not belong in the Hall of Fame.
The selection rules for the Baseball Hall of Fame were modified to prevent the induction of anyone on Baseball's "permanently ineligible" list, such as Pete Rose or "Shoeless Joe" Jackson. Many others have been barred from participation in MLB, but none have Hall of Fame qualifications on the level of Jackson or Rose.
Jackson and Rose were both banned from MLB for life for actions related to gambling on their own teams—Jackson was determined to have cooperated with those who conspired to intentionally lose the 1919 World Series, and for accepting payment for losing, and Rose voluntarily accepted a permanent spot on the ineligible list in return for MLB's promise to make no official finding in relation to alleged betting on the Cincinnati Reds when he was their manager in the 1980s. (Baseball's Rule 21, prominently posted in every clubhouse locker room, mandates permanent banishment from MLB for having a gambling interest of any sort on a game in which a player or manager is directly involved.) Rose later admitted that he bet on the Reds in his 2004 autobiography. Baseball fans are deeply split on the issue of whether these two should remain banned or have their punishment revoked. Writer Bill James, though he advocates Rose eventually making it into the Hall of Fame, compared the people who want to put Jackson in the Hall of Fame to "those women who show up at murder trials wanting to marry the cute murderer".
The actions and composition of the Veterans Committee have been at times controversial, with occasional selections of contemporaries and teammates of the committee members over seemingly more worthy candidates.
In 2001, the Veterans Committee was reformed to comprise the living Hall of Fame members and other honorees. The revamped Committee held three elections, in 2003 and 2007, for both players and non-players, and in 2005 for players only. No individual was elected in that time, sparking criticism among some observers who expressed doubt whether the new Veterans Committee would ever elect a player. The Committee members, most of whom were Hall members, were accused of being reluctant to elect new candidates in the hope of heightening the value of their own selection. After no one was selected for the third consecutive election in 2007, Hall of Famer Mike Schmidt noted, "The same thing happens every year. The current members want to preserve the prestige as much as possible, and are unwilling to open the doors." In 2007, the committee and its selection processes were again reorganized; the main committee then included all living members of the Hall, and voted on a reduced number of candidates from among players whose careers began in 1943 or later. Separate committees, including sportswriters and broadcasters, would select umpires, managers and executives, as well as players from earlier eras.
In the first election to be held under the 2007 revisions, two managers and three executives were elected in December 2007 as part of the 2008 election process. The next Veterans Committee elections for players were held in December 2008 as part of the 2009 election process; the main committee did not select a player, while the panel for pre–World War II players elected Joe Gordon in its first and ultimately only vote. The main committee voted as part of the election process for inductions in odd-numbered years, while the pre-World War II panel would vote every five years, and the panel for umpires, managers, and executives voted as part of the election process for inductions in even-numbered years.
Further changes to the Veterans Committee process were announced by the Hall in July 2010, July 2016, and April 2022.
Per the latest changes, announced on April 22, 2022, the multiple eras previously utilized were collapsed to three, to be voted on in an annual rotation (one per year):
A one-year waiting period beyond potential BBWAA eligibility (which had been abolished in 2016) was reintroduced, thus restricting the committee to considering players retired for at least 16 seasons.
The eligibility criteria for Era Committee consideration differ between players, managers, and executives.
While the text on a player's or manager's plaque lists all teams for which the inductee was a member in that specific role, inductees are usually depicted wearing the cap of a specific team, though in a few cases, like umpires, they wear caps without logos. (Executives are not depicted wearing caps.) Additionally, as of 2015, inductee biographies on the Hall's website for all players and managers, and executives who were associated with specific teams, list a "primary team", which does not necessarily match the cap logo. The Hall selects the logo "based on where that player makes his most indelible mark."
Although the Hall always made the final decision on which logo was shown, until 2001 the Hall deferred to the wishes of players or managers whose careers were linked with multiple teams. Some examples of inductees associated with multiple teams are the following:
In all of the above cases, the "primary team" is the team for which the inductee spent the largest portion of his career except for Ryan, whose primary team is listed as the Angels despite playing one fewer season for that team than for the Astros.
In 2001, the Hall of Fame decided to change the policy on cap logo selection, as a result of rumors that some teams were offering compensation, such as number retirement, money, or organizational jobs, in exchange for the cap designation. (For example, though Wade Boggs denied the claims, some media reports had said that his contract with the Tampa Bay Devil Rays required him to request depiction in the Hall of Fame as a Devil Ray.) The Hall decided that it would no longer defer to the inductee, though the player's wishes would be considered, when deciding on the logo to appear on the plaque. Newly elected members affected by the change include the following:
Sam Crane (who had played a decade in 19th-century baseball before becoming a manager and sportswriter) first proposed the idea of a memorial to the great players of the past in what was believed to have been the birthplace of baseball: Cooperstown, New York, but the idea did not gain much momentum until after his death in 1925. In 1934, the idea of establishing a Baseball Hall of Fame and Museum was devised by several individuals, including Ford Frick (president of the National League) and Alexander Cleland, a Scottish immigrant who served as the Museum's first executive secretary for the next seven years, working with the interests of the Village and Major League Baseball. Stephen Carlton Clark (a Cooperstown native) paid for the construction of the museum, which was planned to open in 1939 to mark the "Centennial of Baseball", a celebration that included renovations to Doubleday Field. William Beattie served as the first curator of the museum.
According to the Hall of Fame, approximately 260,000 visitors enter the museum each year, and the running total has surpassed 17 million. These visitors see only a fraction of its 40,000 artifacts, 3 million library items (such as newspaper clippings and photos) and 140,000 baseball cards.
The Hall has seen a noticeable decrease in attendance in recent years. A 2013 story on ESPN.com about the village of Cooperstown and its relation to the game partially linked the reduced attendance with Cooperstown Dreams Park, a youth baseball complex about 5 miles (8.0 km) away in the town of Hartwick. The 22 fields at Dreams Park currently draw 17,000 players each summer for a week of intensive play; while the complex includes housing for the players, their parents and grandparents must stay elsewhere. According to the story,
Prior to Dreams Park, a room might be filled for a week by several sets of tourists. Now, that room will be taken by just one family for the week, and that family may only go into Cooperstown and the Hall of Fame once. While there are other contributing factors (the recession and high gas prices among them), the Hall's attendance has tumbled since Dreams Park opened. The Hall drew 383,000 visitors in 1999. It drew 262,000 last year.
A controversy erupted in 1982, when it emerged that some historic items given to the Hall had been sold on the collectibles market. The items had been lent to the Baseball Commissioner's office, gotten mixed up with other property owned by the Commissioner's office and employees of the office, and moved to the garage of Joe Reichler, an assistant to Commissioner Bowie Kuhn, who sold the items to resolve his personal financial difficulties. Under pressure from the New York Attorney General, the Commissioner's Office made reparations, but the negative publicity damaged the Hall of Fame's reputation, and made it more difficult for it to solicit donations.
In 2012, Congress passed and President Barack Obama signed a law ordering the United States Mint to produce and sell commemorative, non-circulating coins to benefit the private, non-profit Hall. The bill, H.R. 2527, was introduced in the United States House of Representatives by Rep. Richard Hanna, a Republican from New York, and passed the House on October 26, 2011. The coins, which depict baseball gloves and balls, are the first concave designs produced by the Mint. The mintage included 50,000 gold coins, 400,000 silver coins, and 750,000 clad (nickel-copper) coins. The Mint released them on March 27, 2014, and the gold and silver editions quickly sold out. The Hall receives money from surcharges included in the sale price: a total of $9.5 million if all the coins are sold. | [
{
"paragraph_id": 0,
"text": "The National Baseball Hall of Fame and Museum is a history museum and hall of fame in Cooperstown, New York, operated by private interests. It serves as the central point of the history of baseball in the United States and displays baseball-related artifacts and exhibits, honoring those who have excelled in playing, managing, and serving the sport. The Hall's motto is \"Preserving History, Honoring Excellence, Connecting Generations\". Cooperstown is often used as shorthand (or a metonym) for the National Baseball Hall of Fame and Museum.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Hall of Fame was established in 1939 by Stephen Carlton Clark, an heir to the Singer Sewing Machine fortune. Clark sought to bring tourists to a city hurt by the Great Depression, which reduced the local tourist trade, and Prohibition, which devastated the local hops industry. Clark constructed the Hall of Fame's building, which was dedicated on June 12, 1939. (His granddaughter, Jane Forbes Clark, is the current chairman of the board of directors.) The erroneous claim that Civil War hero Abner Doubleday invented baseball in Cooperstown was instrumental in the early marketing of the Hall.",
"title": ""
},
{
"paragraph_id": 2,
"text": "An expanded library and research facility opened in 1994. Dale Petroskey became the organization's president in 1999. In 2002, the Hall launched Baseball as America, a traveling exhibit that toured ten American museums over six years. The Hall of Fame has since also sponsored educational programming on the Internet to bring the Hall of Fame to schoolchildren who might not visit. The Hall and Museum completed a series of renovations in spring 2005. The Hall of Fame also presents an annual exhibit at FanFest at the Major League Baseball All-Star Game.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Among baseball fans, \"Hall of Fame\" means not only the museum and facility in Cooperstown, New York, but the pantheon of players, managers, umpires, executives, and pioneers who have been inducted into the Hall. The first five men elected were Ty Cobb, Babe Ruth, Honus Wagner, Christy Mathewson and Walter Johnson, chosen in 1936; roughly 20 more were selected before the entire group was inducted at the Hall's 1939 opening. As of January 2023, 342 people had been elected to the Hall of Fame, including 241 former Major League Baseball players, 39 Negro league baseball players and executives, 23 managers, 10 umpires, and 36 pioneers, executives, and organizers. One hundred eighteen members of the Hall of Fame have been inducted posthumously, including four who died after their selection was announced. Of the 39 Negro league members, 31 were inducted posthumously, including all 26 selected since the 1990s. The Hall of Fame includes one female member, Effa Manley.",
"title": "Inductees"
},
{
"paragraph_id": 4,
"text": "The newest member, elected on December 3, 2023, is manager Jim Leyland.",
"title": "Inductees"
},
{
"paragraph_id": 5,
"text": "In 2019, former Yankees closer Mariano Rivera became the first player to be elected unanimously. Derek Jeter, Marvin Miller, Ted Simmons, and Larry Walker were to be inducted in 2020, but their induction ceremony was delayed by the COVID-19 pandemic until September 8, 2021. The ceremony was open to the public, as COVID restrictions had been lifted.",
"title": "Inductees"
},
{
"paragraph_id": 6,
"text": "Players are currently inducted into the Hall of Fame through election by either the Baseball Writers' Association of America (or BBWAA), or the Veterans Committee, which now consists of four subcommittees, each of which considers and votes for candidates from a separate era of baseball. Five years after retirement, any player with 10 years of major league experience who passes a screening committee (which removes from consideration players of clearly lesser qualification) is eligible to be elected by BBWAA members with 10 years' membership or more who also have been actively covering MLB at any time in the 10 years preceding the election (the latter requirement was added for the 2016 election). From a final ballot typically including 25–40 candidates, each writer may vote for up to 10 players; until the late 1950s, voters were advised to cast votes for the maximum 10 candidates. Any player named on 75% or more of all ballots cast is elected. A player who is named on fewer than 5% of ballots is dropped from future elections. In some instances, the screening committee had restored their names to later ballots, but in the mid-1990s, dropped players were made permanently ineligible for Hall of Fame consideration, even by the Veterans Committee. A 2001 change in the election procedures restored the eligibility of these dropped players; while their names will not appear on future BBWAA ballots, they may be considered by the Veterans Committee. Players receiving 5% or more of the votes but fewer than 75% are reconsidered annually until a maximum of ten years of eligibility (lowered from fifteen years for the 2015 election).",
"title": "Inductees"
},
{
"paragraph_id": 7,
"text": "Under special circumstances, certain players may be deemed eligible for induction even though they have not met all requirements. Addie Joss was elected in 1978, despite only playing nine seasons before he died of meningitis. Additionally, if an otherwise eligible player dies before his fifth year of retirement, then that player may be placed on the ballot at the first election at least six months after his death. Roberto Clemente set the precedent: the writers put him up for consideration after his death on New Year's Eve, 1972, and he was inducted in 1973.",
"title": "Inductees"
},
{
"paragraph_id": 8,
"text": "The five-year waiting period was established in 1954 after an evolutionary process. In 1936 all players were eligible, including active ones. From the 1937 election until the 1945 election, there was no waiting period, so any retired player was eligible, but writers were discouraged from voting for current major leaguers. Since there was no formal rule preventing a writer from casting a ballot for an active player, the scribes did not always comply with the informal guideline; Joe DiMaggio received a vote in 1945, for example. From the 1946 election until the 1954 election, an official one-year waiting period was in effect. (DiMaggio, for example, retired after the 1951 season and was first eligible in the 1953 election.) The modern rule establishing a wait of five years was passed in 1954, although those who had already been eligible under the old rule were grandfathered into the ballot, thus permitting Joe DiMaggio to be elected within four years of his retirement.",
"title": "Inductees"
},
{
"paragraph_id": 9,
"text": "Z is for ZenithThe summit of fame.These men are up there.These men are the game.",
"title": "Inductees"
},
{
"paragraph_id": 10,
"text": "— Ogden Nash, Sport magazine (January 1949)",
"title": "Inductees"
},
{
"paragraph_id": 11,
"text": "Contrary to popular belief, no formal exception was made for Lou Gehrig (other than to hold a special one-man election for him): there was no waiting period at that time, and Gehrig met all other qualifications, so he would have been eligible for the next regular election after he retired during the 1939 season. However, the BBWAA decided to hold a special election at the 1939 Winter Meetings in Cincinnati, specifically to elect Gehrig (most likely because it was known that he was terminally ill, making it uncertain that he would live long enough to see another election). Nobody else was on that ballot, and the numerical results have never been made public. Since no elections were held in 1940 or 1941, the special election permitted Gehrig to enter the Hall while still alive.",
"title": "Inductees"
},
{
"paragraph_id": 12,
"text": "If a player fails to be elected by the BBWAA within 10 years of his eligibility for election, he may be selected by the Veterans Committee. Following changes to the election process for that body made in 2010 and 2016, the Veterans Committee is now responsible for electing all otherwise eligible candidates who are not eligible for the BBWAA ballot — both long-retired players and non-playing personnel (managers, umpires, and executives). From 2011 to 2016, each candidate could be considered once every three years; now, the frequency depends on the era in which an individual made his greatest contributions. A more complete discussion of the new process is available below.",
"title": "Inductees"
},
{
"paragraph_id": 13,
"text": "From 2008 to 2010, following changes made by the Hall in July 2007, the main Veterans Committee, then made up of living Hall of Famers, voted only on players whose careers began in 1943 or later. These changes also established three separate committees to select other figures:",
"title": "Inductees"
},
{
"paragraph_id": 14,
"text": "Players of the Negro leagues have also been considered at various times, beginning in 1971. In 2005, the Hall completed a study on African American players between the late 19th century and the integration of the major leagues in 1947, and conducted a special election for such players in February 2006; seventeen figures from the Negro leagues were chosen in that election, in addition to the eighteen previously selected. Following the 2010 changes, Negro leagues figures were primarily considered for induction alongside other figures from the 1871–1946 era, called the \"Pre-Integration Era\" by the Hall; since 2016, Negro leagues figures are primarily considered alongside other figures from what the Hall calls the \"Early Baseball\" era (1871–1949).",
"title": "Inductees"
},
{
"paragraph_id": 15,
"text": "Predictably, the selection process catalyzes endless debate among baseball fans over the merits of various candidates. Even players elected years ago remain the subjects of discussions as to whether they deserved election. For example, Bill James' 1994 book Whatever Happened to the Hall of Fame? goes into detail about who he believes does and does not belong in the Hall of Fame.",
"title": "Inductees"
},
{
"paragraph_id": 16,
"text": "The selection rules for the Baseball Hall of Fame were modified to prevent the induction of anyone on Baseball's \"permanently ineligible\" list, such as Pete Rose or \"Shoeless Joe\" Jackson. Many others have been barred from participation in MLB, but none have Hall of Fame qualifications on the level of Jackson or Rose.",
"title": "Inductees"
},
{
"paragraph_id": 17,
"text": "Jackson and Rose were both banned from MLB for life for actions related to gambling on their own teams—Jackson was determined to have cooperated with those who conspired to intentionally lose the 1919 World Series, and for accepting payment for losing, and Rose voluntarily accepted a permanent spot on the ineligible list in return for MLB's promise to make no official finding in relation to alleged betting on the Cincinnati Reds when he was their manager in the 1980s. (Baseball's Rule 21, prominently posted in every clubhouse locker room, mandates permanent banishment from MLB for having a gambling interest of any sort on a game in which a player or manager is directly involved.) Rose later admitted that he bet on the Reds in his 2004 autobiography. Baseball fans are deeply split on the issue of whether these two should remain banned or have their punishment revoked. Writer Bill James, though he advocates Rose eventually making it into the Hall of Fame, compared the people who want to put Jackson in the Hall of Fame to \"those women who show up at murder trials wanting to marry the cute murderer\".",
"title": "Inductees"
},
{
"paragraph_id": 18,
"text": "The actions and composition of the Veterans Committee have been at times controversial, with occasional selections of contemporaries and teammates of the committee members over seemingly more worthy candidates.",
"title": "Inductees"
},
{
"paragraph_id": 19,
"text": "In 2001, the Veterans Committee was reformed to comprise the living Hall of Fame members and other honorees. The revamped Committee held three elections, in 2003 and 2007, for both players and non-players, and in 2005 for players only. No individual was elected in that time, sparking criticism among some observers who expressed doubt whether the new Veterans Committee would ever elect a player. The Committee members, most of whom were Hall members, were accused of being reluctant to elect new candidates in the hope of heightening the value of their own selection. After no one was selected for the third consecutive election in 2007, Hall of Famer Mike Schmidt noted, \"The same thing happens every year. The current members want to preserve the prestige as much as possible, and are unwilling to open the doors.\" In 2007, the committee and its selection processes were again reorganized; the main committee then included all living members of the Hall, and voted on a reduced number of candidates from among players whose careers began in 1943 or later. Separate committees, including sportswriters and broadcasters, would select umpires, managers and executives, as well as players from earlier eras.",
"title": "Inductees"
},
{
"paragraph_id": 20,
"text": "In the first election to be held under the 2007 revisions, two managers and three executives were elected in December 2007 as part of the 2008 election process. The next Veterans Committee elections for players were held in December 2008 as part of the 2009 election process; the main committee did not select a player, while the panel for pre–World War II players elected Joe Gordon in its first and ultimately only vote. The main committee voted as part of the election process for inductions in odd-numbered years, while the pre-World War II panel would vote every five years, and the panel for umpires, managers, and executives voted as part of the election process for inductions in even-numbered years.",
"title": "Inductees"
},
{
"paragraph_id": 21,
"text": "Further changes to the Veterans Committee process were announced by the Hall in July 2010, July 2016, and April 2022.",
"title": "Inductees"
},
{
"paragraph_id": 22,
"text": "Per the latest changes, announced on April 22, 2022, the multiple eras previously utilized were collapsed to three, to be voted on in an annual rotation (one per year):",
"title": "Inductees"
},
{
"paragraph_id": 23,
"text": "A one-year waiting period beyond potential BBWAA eligibility (which had been abolished in 2016) was reintroduced, thus restricting the committee to considering players retired for at least 16 seasons.",
"title": "Inductees"
},
{
"paragraph_id": 24,
"text": "The eligibility criteria for Era Committee consideration differ between players, managers, and executives.",
"title": "Inductees"
},
{
"paragraph_id": 25,
"text": "While the text on a player's or manager's plaque lists all teams for which the inductee was a member in that specific role, inductees are usually depicted wearing the cap of a specific team, though in a few cases, like umpires, they wear caps without logos. (Executives are not depicted wearing caps.) Additionally, as of 2015, inductee biographies on the Hall's website for all players and managers, and executives who were associated with specific teams, list a \"primary team\", which does not necessarily match the cap logo. The Hall selects the logo \"based on where that player makes his most indelible mark.\"",
"title": "Inductees"
},
{
"paragraph_id": 26,
"text": "Although the Hall always made the final decision on which logo was shown, until 2001 the Hall deferred to the wishes of players or managers whose careers were linked with multiple teams. Some examples of inductees associated with multiple teams are the following:",
"title": "Inductees"
},
{
"paragraph_id": 27,
"text": "In all of the above cases, the \"primary team\" is the team for which the inductee spent the largest portion of his career except for Ryan, whose primary team is listed as the Angels despite playing one fewer season for that team than for the Astros.",
"title": "Inductees"
},
{
"paragraph_id": 28,
"text": "In 2001, the Hall of Fame decided to change the policy on cap logo selection, as a result of rumors that some teams were offering compensation, such as number retirement, money, or organizational jobs, in exchange for the cap designation. (For example, though Wade Boggs denied the claims, some media reports had said that his contract with the Tampa Bay Devil Rays required him to request depiction in the Hall of Fame as a Devil Ray.) The Hall decided that it would no longer defer to the inductee, though the player's wishes would be considered, when deciding on the logo to appear on the plaque. Newly elected members affected by the change include the following:",
"title": "Inductees"
},
{
"paragraph_id": 29,
"text": "Sam Crane (who had played a decade in 19th century baseball before becoming a manager and sportswriter) had first approached the idea of making a memorial to the great players of the past in what was believed to have been the birthplace of baseball: Cooperstown, New York, but the idea did not muster much momentum until after his death in 1925. In 1934, the idea for establishing a Baseball Hall of Fame and Museum was devised by several individuals, such as Ford Frick (president of the National League) and Alexander Cleland, a Scottish immigrant who decided to serve as the first executive secretary for the Museum for the next seven years that worked with the interests of the Village and Major League Baseball. Stephen Carlton Clark (a Cooperstown native) paid for the construction of the museum, which was planned to open in 1939 to mark the \"Centennial of Baseball\", which included renovations to Doubleday Field. William Beattie served as the first curator of the museum.",
"title": "The museum"
},
{
"paragraph_id": 30,
"text": "According to the Hall of Fame, approximately 260,000 visitors enter the museum each year, and the running total has surpassed 17 million. These visitors see only a fraction of its 40,000 artifacts, 3 million library items (such as newspaper clippings and photos) and 140,000 baseball cards.",
"title": "The museum"
},
{
"paragraph_id": 31,
"text": "The Hall has seen a noticeable decrease in attendance in recent years. A 2013 story on ESPN.com about the village of Cooperstown and its relation to the game partially linked the reduced attendance with Cooperstown Dreams Park, a youth baseball complex about 5 miles (8.0 km) away in the town of Hartwick. The 22 fields at Dreams Park currently draw 17,000 players each summer for a week of intensive play; while the complex includes housing for the players, their parents and grandparents must stay elsewhere. According to the story,",
"title": "The museum"
},
{
"paragraph_id": 32,
"text": "Prior to Dreams Park, a room might be filled for a week by several sets of tourists. Now, that room will be taken by just one family for the week, and that family may only go into Cooperstown and the Hall of Fame once. While there are other contributing factors (the recession and high gas prices among them), the Hall's attendance has tumbled since Dreams Park opened. The Hall drew 383,000 visitors in 1999. It drew 262,000 last year.",
"title": "The museum"
},
{
"paragraph_id": 33,
"text": "A controversy erupted in 1982, when it emerged that some historic items given to the Hall had been sold on the collectibles market. The items had been lent to the Baseball Commissioner's office, gotten mixed up with other property owned by the Commissioner's office and employees of the office, and moved to the garage of Joe Reichler, an assistant to Commissioner Bowie Kuhn, who sold the items to resolve his personal financial difficulties. Under pressure from the New York Attorney General, the Commissioner's Office made reparations, but the negative publicity damaged the Hall of Fame's reputation, and made it more difficult for it to solicit donations.",
"title": "Notable events"
},
{
"paragraph_id": 34,
"text": "In 2012, Congress passed and President Barack Obama signed a law ordering the United States Mint to produce and sell commemorative, non-circulating coins to benefit the private, non-profit Hall. The bill, H.R. 2527, was introduced in the United States House of Representatives by Rep. Richard Hanna, a Republican from New York, and passed the House on October 26, 2011. The coins, which depict baseball gloves and balls, are the first concave designs produced by the Mint. The mintage included 50,000 gold coins, 400,000 silver coins, and 750,000 clad (nickel-copper) coins. The Mint released them on March 27, 2014, and the gold and silver editions quickly sold out. The Hall receives money from surcharges included in the sale price: a total of $9.5 million if all the coins are sold.",
"title": "Notable events"
}
] | The National Baseball Hall of Fame and Museum is a history museum and hall of fame in Cooperstown, New York, operated by private interests. It serves as the central point of the history of baseball in the United States and displays baseball-related artifacts and exhibits, honoring those who have excelled in playing, managing, and serving the sport. The Hall's motto is "Preserving History, Honoring Excellence, Connecting Generations". Cooperstown is often used as shorthand for the National Baseball Hall of Fame and Museum. The Hall of Fame was established in 1939 by Stephen Carlton Clark, an heir to the Singer Sewing Machine fortune. Clark sought to bring tourists to a city hurt by the Great Depression, which reduced the local tourist trade, and Prohibition, which devastated the local hops industry. Clark constructed the Hall of Fame's building, which was dedicated on June 12, 1939. The erroneous claim that Civil War hero Abner Doubleday invented baseball in Cooperstown was instrumental in the early marketing of the Hall. An expanded library and research facility opened in 1994. Dale Petroskey became the organization's president in 1999. In 2002, the Hall launched Baseball as America, a traveling exhibit that toured ten American museums over six years. The Hall of Fame has since also sponsored educational programming on the Internet to bring the Hall of Fame to schoolchildren who might not visit. The Hall and Museum completed a series of renovations in spring 2005. The Hall of Fame also presents an annual exhibit at FanFest at the Major League Baseball All-Star Game. | 2001-10-09T16:50:16Z | 2023-12-15T01:21:06Z | [
"Template:Cite press release",
"Template:Redirect",
"Template:Use American English",
"Template:Wide image",
"Template:Section link",
"Template:Reflist",
"Template:Infobox museum",
"Template:Quote box",
"Template:Cite magazine",
"Template:J. G. Taylor Spink Award",
"Template:Honor Rolls of Baseball",
"Template:Cite news",
"Template:Cite book",
"Template:Baseball Hall of Fame members",
"Template:Short description",
"Template:Use mdy dates",
"Template:As of",
"Template:Circa",
"Template:Webarchive",
"Template:Panorama",
"Template:USBill",
"Template:USPL",
"Template:Convert",
"Template:Clear",
"Template:Cite web",
"Template:Ford C. Frick Award",
"Template:Portal",
"Template:Authority control",
"Template:See also",
"Template:Main",
"Template:ISBN",
"Template:Dead link",
"Template:Commons category",
"Template:Baseball Hall of Fame"
] | https://en.wikipedia.org/wiki/National_Baseball_Hall_of_Fame_and_Museum |
4,079 | BPP (complexity) | In computational complexity theory, a branch of computer science, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded by 1/3 for all instances. BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine.
Informally, a problem is in BPP if there is an algorithm for it that has the following properties: the algorithm is allowed to flip coins and make random decisions; it is guaranteed to run in polynomial time; and on any given run it has a probability of at most 1/3 of giving the wrong answer, whether the answer is YES or NO.
A language L is in BPP if and only if there exists a probabilistic Turing machine M, such that: M runs for polynomial time on all inputs; for all x in L, M outputs 1 with probability greater than or equal to 2/3; and for all x not in L, M outputs 1 with probability less than or equal to 1/3.
Unlike the complexity class ZPP, the machine M is required to run for polynomial time on all inputs, regardless of the outcome of the random coin flips.
Alternatively, BPP can be defined using only deterministic Turing machines. A language L is in BPP if and only if there exists a polynomial p and a deterministic Turing machine M, such that: M runs for time p(|x|) on all inputs (x, y); for all x in L, the fraction of strings y of length p(|x|) satisfying M(x, y) = 1 is greater than or equal to 2/3; and for all x not in L, that fraction is less than or equal to 1/3.
In this definition, the string y corresponds to the output of the random coin flips that the probabilistic Turing machine would have made. For some applications this definition is preferable since it does not mention probabilistic Turing machines.
In practice, an error probability of 1/3 might not be acceptable; however, the choice of 1/3 in the definition is arbitrary. Modifying the definition to use any constant between 0 and 1/2 (exclusive) in place of 1/3 would not change the resulting set BPP. For example, if one defined the class with the restriction that the algorithm can be wrong with probability at most 1/2^100, this would result in the same class of problems. The error probability does not even have to be constant: the same class of problems is defined by allowing error as high as 1/2 − n^(−c) on the one hand, or requiring error as small as 2^(−n^c) on the other hand, where c is any positive constant and n is the length of the input. This flexibility in the choice of error probability is based on the idea of running an error-prone algorithm many times, and using the majority result of the runs to obtain a more accurate algorithm. The chance that the majority of the runs are wrong drops off exponentially as a consequence of the Chernoff bound.
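To make the amplification argument concrete, here is a minimal Python sketch (not from the article); base_algorithm is a hypothetical stand-in for any BPP algorithm whose error probability is at most 1/3 on every input.

import random
from collections import Counter

def base_algorithm(x, error=1/3):
    # Toy model of a BPP algorithm: returns the correct answer (here fixed
    # to True) independently with probability 1 - error on each call.
    correct_answer = True
    return correct_answer if random.random() >= error else not correct_answer

def amplified(x, runs=101):
    # Majority vote over an odd number of independent runs; by the Chernoff
    # bound the majority is wrong with probability exponentially small in runs.
    votes = Counter(base_algorithm(x) for _ in range(runs))
    return votes[True] > votes[False]

print(amplified("some input"))  # True with overwhelming probability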
P =? BPP
All problems in P are obviously also in BPP. However, many problems have been known to be in BPP but not known to be in P. The number of such problems is decreasing, and it is conjectured that P = BPP.
For a long time, one of the most famous problems known to be in BPP but not known to be in P was the problem of determining whether a given number is prime. However, in the 2002 paper PRIMES is in P, Manindra Agrawal and his students Neeraj Kayal and Nitin Saxena found a deterministic polynomial-time algorithm for this problem, thus showing that it is in P.
An important example of a problem in BPP (in fact in co-RP) still not known to be in P is polynomial identity testing, the problem of determining whether a polynomial is identically equal to the zero polynomial, when you have access to the value of the polynomial for any given input, but not to the coefficients. In other words, is there an assignment of values to the variables such that when a nonzero polynomial is evaluated on these values, the result is nonzero? By the Schwartz–Zippel lemma, it suffices to choose each variable's value uniformly at random from a finite subset S of at least 3d values, where d is the total degree of the polynomial: a nonzero polynomial then evaluates to zero at the random point with probability at most d/|S| ≤ 1/3, which gives the required bounded error probability.
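A minimal sketch of such a randomized identity test, assuming the polynomial is supplied as a black-box Python callable; the function name is_probably_zero and the 3d sample-set size are illustrative, following the Schwartz–Zippel bound above.

import random

def is_probably_zero(poly, num_vars, total_degree, trials=20):
    # Evaluate at uniformly random points from a set S with |S| >= 3d, so a
    # nonzero polynomial vanishes at a random point with probability <= 1/3.
    S = range(3 * total_degree + 1)
    for _ in range(trials):
        point = [random.choice(S) for _ in range(num_vars)]
        if poly(*point) != 0:
            return False  # a nonzero value certifies the polynomial is nonzero
    return True  # zero, up to error probability at most (1/3)**trials

# (x + y)**2 - (x**2 + 2*x*y + y**2) is identically zero, so this prints True.
print(is_probably_zero(lambda x, y: (x + y)**2 - (x**2 + 2*x*y + y**2), 2, 2))

Note the one-sided error, matching the co-RP classification mentioned above: whenever the test answers "nonzero", that answer is certainly correct.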
If the access to randomness is removed from the definition of BPP, we get the complexity class P. In the definition of the class, if we replace the ordinary Turing machine with a quantum computer, we get the class BQP.
Adding postselection to BPP, or allowing computation paths to have different lengths, gives the class BPPpath. BPPpath is known to contain NP, and it is contained in its quantum counterpart PostBQP.
A Monte Carlo algorithm is a randomized algorithm which is likely to be correct. Problems in the class BPP have Monte Carlo algorithms with polynomial bounded running time. This is compared to a Las Vegas algorithm which is a randomized algorithm which either outputs the correct answer, or outputs "fail" with low probability. Las Vegas algorithms with polynomial bound running times are used to define the class ZPP. Alternatively, ZPP contains probabilistic algorithms that are always correct and have expected polynomial running time. This is weaker than saying it is a polynomial time algorithm, since it may run for super-polynomial time, but with very low probability.
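A toy Python sketch contrasting the two styles on a trivial search problem; las_vegas_find and monte_carlo_find are illustrative names, not standard functions.

import random

def las_vegas_find(arr):
    # Las Vegas: always returns a correct index; the running time is random,
    # with a constant expected number of probes when half the entries match.
    while True:
        i = random.randrange(len(arr))
        if arr[i] == "a":
            return i

def monte_carlo_find(arr, max_tries=30):
    # Monte Carlo: running time is bounded, but with probability
    # (1/2)**max_tries (on this 50% input) it gives up and returns None.
    for _ in range(max_tries):
        i = random.randrange(len(arr))
        if arr[i] == "a":
            return i
    return None

arr = ["a", "b"] * 8
print(las_vegas_find(arr), monte_carlo_find(arr))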
It is known that BPP is closed under complement; that is, BPP = co-BPP. BPP is low for itself, meaning that a BPP machine with the power to solve BPP problems instantly (a BPP oracle machine) is not any more powerful than the machine without this extra power. In symbols, BPP^BPP = BPP.
The relationship between BPP and NP is unknown: it is not known whether BPP is a subset of NP, NP is a subset of BPP or neither. If NP is contained in BPP, which is considered unlikely since it would imply practical solutions for NP-complete problems, then NP = RP and PH ⊆ BPP.
It is known that RP is a subset of BPP, and BPP is a subset of PP. It is not known whether those two are strict subsets, since we don't even know if P is a strict subset of PSPACE. BPP is contained in the second level of the polynomial hierarchy and therefore it is contained in PH. More precisely, the Sipser–Lautemann theorem states that BPP ⊆ Σ₂ ∩ Π₂. As a result, P = NP leads to P = BPP since PH collapses to P in this case. Thus either P = BPP or P ≠ NP or both.
Adleman's theorem states that membership in any language in BPP can be determined by a family of polynomial-size Boolean circuits, which means BPP is contained in P/poly. Indeed, as a consequence of the proof of this fact, every BPP algorithm operating on inputs of bounded length can be derandomized into a deterministic algorithm using a fixed string of random bits. Finding this string may be expensive, however. Some weak separation results for Monte Carlo time classes were proven by Karpinski & Verbeek (1987a), see also Karpinski & Verbeek (1987b).
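A toy-scale Python sketch of the union-bound step behind Adleman's theorem, under the assumption that the error on each length-n input has already been amplified below 2^(−n); the function algo and its error set are fabricated purely for illustration.

from itertools import product

def parity_is_even(x):
    return x.count("1") % 2 == 0

def algo(x, r):
    # Hypothetical amplified BPP algorithm deciding "x has even parity",
    # deliberately wrong on two (input, randomness) pairs to model rare error.
    wrong = {("00", "10"), ("11", "01")}
    answer = parity_is_even(x)
    return (not answer) if (x, r) in wrong else answer

n = 2
inputs = ["".join(b) for b in product("01", repeat=n)]
rand_strings = ["".join(b) for b in product("01", repeat=n)]
# Since each input is misdecided on fewer than a 2**-n fraction of strings,
# a union bound shows some fixed r is correct on ALL inputs of this length;
# here we find one by exhaustive search.
good = [r for r in rand_strings
        if all(algo(x, r) == parity_is_even(x) for x in inputs)]
print(good[0])  # hardwiring this string yields a deterministic circuit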
The class BPP is closed under complementation, union and intersection.
Relative to oracles, we know that there exist oracles A and B, such that P^A = BPP^A and P^B ≠ BPP^B. Moreover, relative to a random oracle with probability 1, P = BPP and BPP is strictly contained in NP and co-NP.
There is even an oracle in which BPP=EXP (and hence P<NP<BPP=EXP=NEXP), which can be iteratively constructed as follows. For a fixed complete problem for relativized E, the oracle will give correct answers with high probability if queried with the problem instance followed by a random string of length kn (where n is the instance length and k is an appropriate small constant). Start with n=1. For every instance of the problem of length n, fix the oracle answers (see the lemma below) so as to fix the instance output. Next, provide the instance outputs for queries consisting of the instance followed by a kn-length string, then treat outputs for queries of length ≤(k+1)n as fixed, and proceed with instances of length n+1.
Lemma: Given a problem (specifically, an oracle machine code and time constraint) in relativized E, for every partially constructed oracle and input of length n, the output can be fixed by specifying 2^O(n) oracle answers. Proof: The machine is simulated, and the oracle answers (that are not already fixed) are fixed step-by-step. There is at most one oracle query per deterministic computation step. For the relativized NP oracle, if possible fix the output to be yes by choosing a computation path and fixing the answers of the base oracle; otherwise no fixing is necessary, and either way there is at most 1 answer of the base oracle per step. Since there are 2^O(n) steps, the lemma follows.
The lemma ensures that (for a large enough k), it is possible to do the construction while leaving enough strings for the relativized E answers. Also, we can ensure that for the relativized E, linear time suffices, even for function problems (if given a function oracle and linear output size) and with exponentially small (with linear exponent) error probability. Also, this construction is effective in that given an arbitrary oracle A we can arrange the oracle B to have P^A ⊆ P^B and EXP^A = EXP^B = BPP^B. Also, for a ZPP=EXP oracle (and hence ZPP=BPP=EXP<NEXP), one would fix the answers in the relativized E computation to a special nonanswer, thus ensuring that no fake answers are given.
The existence of certain strong pseudorandom number generators is conjectured by most experts of the field. This conjecture implies that randomness does not give additional computational power to polynomial time computation, that is, P = RP = BPP. Note that ordinary generators are not sufficient to show this result; any probabilistic algorithm implemented using a typical random number generator will always produce incorrect results on certain inputs irrespective of the seed (though these inputs might be rare).
László Babai, Lance Fortnow, Noam Nisan, and Avi Wigderson showed that unless EXPTIME collapses to MA, BPP is contained in i.o.-SUBEXP = ∩_{ε>0} i.o.-DTIME(2^(n^ε)).
The class i.o.-SUBEXP, which stands for infinitely often SUBEXP, contains problems which have sub-exponential time algorithms for infinitely many input sizes. They also showed that P = BPP if the exponential-time hierarchy, which is defined in terms of the polynomial hierarchy and E as E^PH, collapses to E; however, note that the exponential-time hierarchy is usually conjectured not to collapse.
Russell Impagliazzo and Avi Wigderson showed that if any problem in E, where E = DTIME(2^O(n)), has circuit complexity 2^Ω(n), then P = BPP. | [
{
"paragraph_id": 0,
"text": "In computational complexity theory, a branch of computer science, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded by 1/3 for all instances. BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Informally, a problem is in BPP if there is an algorithm for it that has the following properties:",
"title": ""
},
{
"paragraph_id": 2,
"text": "A language L is in BPP if and only if there exists a probabilistic Turing machine M, such that",
"title": "Definition"
},
{
"paragraph_id": 3,
"text": "Unlike the complexity class ZPP, the machine M is required to run for polynomial time on all inputs, regardless of the outcome of the random coin flips.",
"title": "Definition"
},
{
"paragraph_id": 4,
"text": "Alternatively, BPP can be defined using only deterministic Turing machines. A language L is in BPP if and only if there exists a polynomial p and deterministic Turing machine M, such that",
"title": "Definition"
},
{
"paragraph_id": 5,
"text": "In this definition, the string y corresponds to the output of the random coin flips that the probabilistic Turing machine would have made. For some applications this definition is preferable since it does not mention probabilistic Turing machines.",
"title": "Definition"
},
{
"paragraph_id": 6,
"text": "In practice, an error probability of 1/3 might not be acceptable, however, the choice of 1/3 in the definition is arbitrary. Modifying the definition to use any constant between 0 and 1/2 (exclusive) in place of 1/3 would not change the resulting set BPP. For example, if one defined the class with the restriction that the algorithm can be wrong with probability at most 1/2, this would result in the same class of problems. The error probability does not even have to be constant: the same class of problems is defined by allowing error as high as 1/2 − n on the one hand, or requiring error as small as 2 on the other hand, where c is any positive constant, and n is the length of input. This flexibility in the choice of error probability is based on the idea of running an error-prone algorithm many times, and using the majority result of the runs to obtain a more accurate algorithm. The chance that the majority of the runs are wrong drops off exponentially as a consequence of the Chernoff bound.",
"title": "Definition"
},
{
"paragraph_id": 7,
"text": "P = ? B P P {\\displaystyle {\\mathsf {P}}{\\overset {?}{=}}{\\mathsf {BPP}}}",
"title": "Problems"
},
{
"paragraph_id": 8,
"text": "All problems in P are obviously also in BPP. However, many problems have been known to be in BPP but not known to be in P. The number of such problems is decreasing, and it is conjectured that P = BPP.",
"title": "Problems"
},
{
"paragraph_id": 9,
"text": "For a long time, one of the most famous problems known to be in BPP but not known to be in P was the problem of determining whether a given number is prime. However, in the 2002 paper PRIMES is in P, Manindra Agrawal and his students Neeraj Kayal and Nitin Saxena found a deterministic polynomial-time algorithm for this problem, thus showing that it is in P.",
"title": "Problems"
},
{
"paragraph_id": 10,
"text": "An important example of a problem in BPP (in fact in co-RP) still not known to be in P is polynomial identity testing, the problem of determining whether a polynomial is identically equal to the zero polynomial, when you have access to the value of the polynomial for any given input, but not to the coefficients. In other words, is there an assignment of values to the variables such that when a nonzero polynomial is evaluated on these values, the result is nonzero? It suffices to choose each variable's value uniformly at random from a finite subset of at least d values to achieve bounded error probability, where d is the total degree of the polynomial.",
"title": "Problems"
},
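The randomized test sketched above can be made concrete. Below is a minimal black-box polynomial identity tester in Python; the function name is_identically_zero, the domain of size 2*degree (which gives per-trial error at most 1/2), and the trial count are illustrative assumptions:

```python
import random

def is_identically_zero(poly, num_vars, degree, trials=20):
    """Black-box polynomial identity test (Schwartz–Zippel style).
    `poly` is a function of num_vars integer arguments; only its values,
    not its coefficients, are available.  If the polynomial is nonzero of
    total degree <= degree, a random point drawn from a set of 2*degree
    values is a nonzero witness with probability >= 1/2 per trial."""
    domain = range(2 * degree)
    for _ in range(trials):
        point = [random.choice(domain) for _ in range(num_vars)]
        if poly(*point) != 0:
            return False    # certainly not the zero polynomial
    return True             # identically zero, up to error 2**(-trials)

# Example: (x + y)**2 - (x**2 + 2*x*y + y**2) is identically zero.
print(is_identically_zero(lambda x, y: (x + y)**2 - (x*x + 2*x*y + y*y), 2, 2))
```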
{
"paragraph_id": 11,
"text": "If the access to randomness is removed from the definition of BPP, we get the complexity class P. In the definition of the class, if we replace the ordinary Turing machine with a quantum computer, we get the class BQP.",
"title": "Related classes"
},
{
"paragraph_id": 12,
"text": "Adding postselection to BPP, or allowing computation paths to have different lengths, gives the class BPPpath. BPPpath is known to contain NP, and it is contained in its quantum counterpart PostBQP.",
"title": "Related classes"
},
{
"paragraph_id": 13,
"text": "A Monte Carlo algorithm is a randomized algorithm which is likely to be correct. Problems in the class BPP have Monte Carlo algorithms with polynomial bounded running time. This is compared to a Las Vegas algorithm which is a randomized algorithm which either outputs the correct answer, or outputs \"fail\" with low probability. Las Vegas algorithms with polynomial bound running times are used to define the class ZPP. Alternatively, ZPP contains probabilistic algorithms that are always correct and have expected polynomial running time. This is weaker than saying it is a polynomial time algorithm, since it may run for super-polynomial time, but with very low probability.",
"title": "Related classes"
},
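To make the Monte Carlo/Las Vegas contrast concrete, here is a toy sketch in Python (the task — finding a 1 in a bit array that is half ones — and both function names are illustrative assumptions):

```python
import random

def las_vegas_find_one(bits):
    """Las Vegas: the answer is always correct; only the running time is
    random (expected O(1) probes when half of the entries are 1)."""
    while True:
        i = random.randrange(len(bits))
        if bits[i] == 1:
            return i            # verified before returning, so never wrong

def monte_carlo_has_one(bits, trials=30):
    """Monte Carlo: the running time is bounded, but the answer carries a
    small one-sided error (it may miss a 1 that is actually present)."""
    return any(bits[random.randrange(len(bits))] == 1 for _ in range(trials))
```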
{
"paragraph_id": 14,
"text": "It is known that BPP is closed under complement; that is, BPP = co-BPP. BPP is low for itself, meaning that a BPP machine with the power to solve BPP problems instantly (a BPP oracle machine) is not any more powerful than the machine without this extra power. In symbols, BPP = BPP.",
"title": "Complexity-theoretic properties"
},
{
"paragraph_id": 15,
"text": "The relationship between BPP and NP is unknown: it is not known whether BPP is a subset of NP, NP is a subset of BPP or neither. If NP is contained in BPP, which is considered unlikely since it would imply practical solutions for NP-complete problems, then NP = RP and PH ⊆ BPP.",
"title": "Complexity-theoretic properties"
},
{
"paragraph_id": 16,
"text": "It is known that RP is a subset of BPP, and BPP is a subset of PP. It is not known whether those two are strict subsets, since we don't even know if P is a strict subset of PSPACE. BPP is contained in the second level of the polynomial hierarchy and therefore it is contained in PH. More precisely, the Sipser–Lautemann theorem states that B P P ⊆ Σ 2 ∩ Π 2 {\\displaystyle {\\mathsf {BPP}}\\subseteq \\Sigma _{2}\\cap \\Pi _{2}} . As a result, P = NP leads to P = BPP since PH collapses to P in this case. Thus either P = BPP or P ≠ NP or both.",
"title": "Complexity-theoretic properties"
},
{
"paragraph_id": 17,
"text": "Adleman's theorem states that membership in any language in BPP can be determined by a family of polynomial-size Boolean circuits, which means BPP is contained in P/poly. Indeed, as a consequence of the proof of this fact, every BPP algorithm operating on inputs of bounded length can be derandomized into a deterministic algorithm using a fixed string of random bits. Finding this string may be expensive, however. Some weak separation results for Monte Carlo time classes were proven by Karpinski & Verbeek (1987a), see also Karpinski & Verbeek (1987b).",
"title": "Complexity-theoretic properties"
},
{
"paragraph_id": 18,
"text": "The class BPP is closed under complementation, union and intersection.",
"title": "Complexity-theoretic properties"
},
{
"paragraph_id": 19,
"text": "Relative to oracles, we know that there exist oracles A and B, such that P = BPP and P ≠ BPP. Moreover, relative to a random oracle with probability 1, P = BPP and BPP is strictly contained in NP and co-NP.",
"title": "Complexity-theoretic properties"
},
{
"paragraph_id": 20,
"text": "There is even an oracle in which BPP=EXP (and hence P<NP<BPP=EXP=NEXP), which can be iteratively constructed as follows. For a fixed E (relativized) complete problem, the oracle will give correct answers with high probability if queried with the problem instance followed by a random string of length kn (n is instance length; k is an appropriate small constant). Start with n=1. For every instance of the problem of length n fix oracle answers (see lemma below) to fix the instance output. Next, provide the instance outputs for queries consisting of the instance followed by kn-length string, and then treat output for queries of length ≤(k+1)n as fixed, and proceed with instances of length n+1.",
"title": "Complexity-theoretic properties"
},
{
"paragraph_id": 21,
"text": "Lemma: Given a problem (specifically, an oracle machine code and time constraint) in relativized E, for every partially constructed oracle and input of length n, the output can be fixed by specifying 2 oracle answers. Proof: The machine is simulated, and the oracle answers (that are not already fixed) are fixed step-by-step. There is at most one oracle query per deterministic computation step. For the relativized NP oracle, if possible fix the output to be yes by choosing a computation path and fixing the answers of the base oracle; otherwise no fixing is necessary, and either way there is at most 1 answer of the base oracle per step. Since there are 2 steps, the lemma follows.",
"title": "Complexity-theoretic properties"
},
{
"paragraph_id": 22,
"text": "The lemma ensures that (for a large enough k), it is possible to do the construction while leaving enough strings for the relativized E answers. Also, we can ensure that for the relativized E, linear time suffices, even for function problems (if given a function oracle and linear output size) and with exponentially small (with linear exponent) error probability. Also, this construction is effective in that given an arbitrary oracle A we can arrange the oracle B to have P≤P and EXP=EXP=BPP. Also, for a ZPP=EXP oracle (and hence ZPP=BPP=EXP<NEXP), one would fix the answers in the relativized E computation to a special nonanswer, thus ensuring that no fake answers are given.",
"title": "Complexity-theoretic properties"
},
{
"paragraph_id": 23,
"text": "The existence of certain strong pseudorandom number generators is conjectured by most experts of the field. This conjecture implies that randomness does not give additional computational power to polynomial time computation, that is, P = RP = BPP. Note that ordinary generators are not sufficient to show this result; any probabilistic algorithm implemented using a typical random number generator will always produce incorrect results on certain inputs irrespective of the seed (though these inputs might be rare).",
"title": "Derandomization"
},
{
"paragraph_id": 24,
"text": "László Babai, Lance Fortnow, Noam Nisan, and Avi Wigderson showed that unless EXPTIME collapses to MA, BPP is contained in",
"title": "Derandomization"
},
{
"paragraph_id": 25,
"text": "The class i.o.-SUBEXP, which stands for infinitely often SUBEXP, contains problems which have sub-exponential time algorithms for infinitely many input sizes. They also showed that P = BPP if the exponential-time hierarchy, which is defined in terms of the polynomial hierarchy and E as E, collapses to E; however, note that the exponential-time hierarchy is usually conjectured not to collapse.",
"title": "Derandomization"
},
{
"paragraph_id": 26,
"text": "Russell Impagliazzo and Avi Wigderson showed that if any problem in E, where",
"title": "Derandomization"
},
{
"paragraph_id": 27,
"text": "has circuit complexity 2 then P = BPP.",
"title": "Derandomization"
}
] | In computational complexity theory, a branch of computer science, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded by 1/3 for all instances.
BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine. Informally, a problem is in BPP if there is an algorithm for it that has the following properties: It is allowed to flip coins and make random decisions
It is guaranteed to run in polynomial time
On any given run of the algorithm, it has a probability of at most 1/3 of giving the wrong answer, whether the answer is YES or NO. | 2001-08-22T05:57:14Z | 2023-09-29T15:30:24Z | [
"Template:Cite book",
"Template:Webarchive",
"Template:No",
"Template:Tmath",
"Template:Harvtxt",
"Template:Citation",
"Template:Doi",
"Template:Citation needed",
"Template:Cite journal",
"Template:ComplexityClasses",
"Template:Short description",
"Template:Diagonal split header",
"Template:Reflist",
"Template:Cite conference",
"Template:Yes",
"Template:Unsolved",
"Template:Cite web"
] | https://en.wikipedia.org/wiki/BPP_(complexity) |
4,080 | BQP | In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP.
A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3.
BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits. A language L is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits {Q_n : n ∈ ℕ}, such that
Alternatively, one can define BQP in terms of quantum Turing machines. A language L is in BQP if and only if there exists a polynomial quantum Turing machine that accepts L with an error probability of at most 1/3 for all instances.
Similarly to other "bounded error" probabilistic classes, the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error as high as 1/2 − n^(−c) on the one hand, or requiring error as small as 2^(−n^c) on the other hand, where c is any positive constant, and n is the length of input.
Similar to the notion of NP-completeness and other complete problems, we can define a complete problem as a problem that is in Promise-BQP and to which every problem in Promise-BQP reduces in polynomial time.
Here is an intuitive problem that is complete for efficient quantum computation, which stems directly from the definition of Promise-BQP. Note that for technical reasons, completeness proofs focus on the promise problem version of BQP. We show that the problem below is complete for the Promise-BQP complexity class (and not for the total BQP complexity class having a trivial promise, for which no complete problems are known).
Given a description of a quantum circuit C acting on n qubits with m gates, where m is a polynomial in n and each gate acts on one or two qubits, and two numbers α, β ∈ [0, 1] with α > β, distinguish between the following two cases:
Here, there is a promise on the inputs as the problem does not specify the behavior if an instance is not covered by these two cases.
Claim. Any BQP problem reduces to APPROX-QCIRCUIT-PROB.
Proof. Suppose we have an algorithm A that solves APPROX-QCIRCUIT-PROB, i.e., given a quantum circuit C acting on n qubits, and two numbers α, β ∈ [0, 1] with α > β, A distinguishes between the above two cases. We can solve any problem in BQP with this oracle, by setting α = 2/3, β = 1/3.
For any L ∈ BQP, there exists a family of quantum circuits {Q_n : n ∈ ℕ} such that for all n ∈ ℕ and every state |x⟩ of n qubits, if x ∈ L then Pr(Q_n(|x⟩) = 1) ≥ 2/3; else if x ∉ L then Pr(Q_n(|x⟩) = 0) ≥ 2/3. Fix an input |x⟩ of n qubits and the corresponding quantum circuit Q_n. We can first construct a circuit C_x such that C_x|0⟩^⊗n = |x⟩. This can be done easily by hardwiring |x⟩ and applying a sequence of CNOT gates to flip the qubits. Then we can combine the two circuits to get C′ = Q_n C_x, and now C′|0⟩^⊗n = Q_n|x⟩. Finally, the result of Q_n is obtained by measuring several qubits and applying some (classical) logic gates to them. We can always defer the measurement and reroute the circuit so that by measuring the first qubit of C′|0⟩^⊗n = Q_n|x⟩, we get the output. This will be our circuit C, and we decide the membership of x in L by running A(C) with α = 2/3, β = 1/3. By definition of BQP, we will either fall into the first case (acceptance) or the second case (rejection), so L ∈ BQP reduces to APPROX-QCIRCUIT-PROB.
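In code, the input-preparation step C_x is just a layer of bit-flip gates prepended to Q_n. A toy illustration in Python (using NOT gates for the hardwired flipping stage and a (gate, targets) list representation of circuits — both illustrative assumptions, not the article's exact construction):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # NOT (bit-flip) gate

def prepend_input(circuit_Q, x_bits):
    """Build C' = Q_n . C_x: flip each qubit where x has a 1-bit so that
    C'|0...0> = Q_n|x>, then run the original circuit Q_n."""
    C_x = [(X, [q]) for q, bit in enumerate(x_bits) if bit == 1]
    return C_x + list(circuit_Q)
```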
APPROX-QCIRCUIT-PROB comes in handy when we try to prove the relationships between some well-known complexity classes and BQP.
What is the relationship between BQP and NP?
BQP is defined for quantum computers; the corresponding complexity class for classical computers (or more formally for probabilistic Turing machines) is BPP. Just like P and BPP, BQP is low for itself, which means BQP^BQP = BQP. Informally, this is true because polynomial time algorithms are closed under composition. If a polynomial time algorithm calls polynomial time algorithms as subroutines, the resulting algorithm is still polynomial time.
BQP contains P and BPP and is contained in AWPP, PP and PSPACE. In fact, BQP is low for PP, meaning that a PP machine achieves no benefit from being able to solve BQP problems instantly, an indication of the possible difference in power between these similar classes. The known relationships with classic complexity classes are:
As the problem of P ≟ PSPACE has not yet been solved, the proof of inequality between BQP and the classes mentioned above is supposed to be difficult. The relation between BQP and NP is not known. In May 2018, computer scientists Ran Raz of Princeton University and Avishay Tal of Stanford University published a paper which showed that, relative to an oracle, BQP was not contained in PH. It can be proven that there exists an oracle A such that BQP^A ⊈ PH^A. In an extremely informal sense, this can be thought of as giving PH and BQP an identical, but additional, capability and verifying that BQP with the oracle (BQP^A) can do things PH^A cannot. While an oracle separation has been proven, the fact that BQP is not contained in PH has not been proven. An oracle separation does not prove whether or not complexity classes are the same. The oracle separation gives intuition that BQP may not be contained in PH.
It has been suspected for many years that Fourier Sampling is a problem that exists within BQP, but not within the polynomial hierarchy. Recent conjectures have provided evidence that a similar problem, Fourier Checking, also exists in the class BQP without being contained in the polynomial hierarchy. This conjecture is especially notable because it suggests that problems existing in BQP could be classified as harder than NP-Complete problems. Paired with the fact that many practical BQP problems are suspected to exist outside of P (it is suspected and not verified because there is no proof that P ≠ NP), this illustrates the potential power of quantum computing in relation to classical computing.
Adding postselection to BQP results in the complexity class PostBQP which is equal to PP.
We will prove or discuss some of these results below.
We begin with an easier containment. To show that BQP ⊆ EXP, it suffices to show that APPROX-QCIRCUIT-PROB is in EXP since APPROX-QCIRCUIT-PROB is BQP-complete.
Claim — APPROX-QCIRCUIT-PROB ∈ EXP
The idea is simple. Since we have exponential power, given a quantum circuit C, we can use a classical computer to simulate each gate in C to get the final state.
More formally, let C be a polynomial-sized quantum circuit on n qubits and m gates, where m is polynomial in n. Let |ψ_0⟩ = |0⟩^⊗n and let |ψ_i⟩ be the state after the i-th gate in the circuit is applied to |ψ_{i−1}⟩. Each state |ψ_i⟩ can be represented in a classical computer as a unit vector in ℂ^(2^n). Furthermore, each gate can be represented by a matrix in ℂ^(2^n × 2^n). Hence, the final state |ψ_m⟩ can be computed in O(m 2^(2n)) time, and therefore all together we have a 2^(O(n))-time algorithm for calculating the final state, and thus the probability that the first qubit is measured to be one. This implies that APPROX-QCIRCUIT-PROB ∈ EXP.
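A minimal state-vector simulator along these lines can be sketched in Python/NumPy. The (gate, targets) circuit representation, the helper names, and the qubit-indexing convention (qubit 0 as the leftmost tensor factor) are all illustrative assumptions:

```python
import numpy as np

def apply_gate(state, gate, targets, n):
    """Apply a 1- or 2-qubit unitary `gate` to the `targets` qubits of an
    n-qubit state vector of length 2**n."""
    psi = state.reshape([2] * n)
    k = len(targets)
    # Contract the gate's input indices with the target axes of the state.
    psi = np.tensordot(gate.reshape([2] * (2 * k)), psi,
                       axes=(list(range(k, 2 * k)), list(targets)))
    # tensordot puts the gate's output axes first; move them back in place.
    psi = np.moveaxis(psi, list(range(k)), list(targets))
    return psi.reshape(2 ** n)

def first_qubit_prob(circuit, n):
    """Simulate `circuit` (a list of (gate, targets) pairs) on |0...0> and
    return Pr[first qubit measures 1], the APPROX-QCIRCUIT-PROB quantity."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                # the all-zeros basis state
    for gate, targets in circuit:
        state = apply_gate(state, gate, targets, n)
    probs = np.abs(state) ** 2
    return float(probs.reshape(2, -1)[1].sum())   # entries with qubit 0 == 1
```

Both the state vectors and the gate applications here have size 2^(O(n)), matching the time and space bounds in the argument above.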
Note that this algorithm also requires 2 O ( n ) {\displaystyle 2^{O(n)}} space to store the vectors and the matrices. We will show in the following section that we can improve upon the space complexity.
To prove BQP ⊆ PSPACE, we first introduce a technique called the sum of histories.
Sum of histories is a technique introduced by physicist Richard Feynman for path integral formulation. We apply this technique to quantum computing to solve APPROX-QCIRCUIT-PROB.
Consider a quantum circuit C, which consists of m gates, g_1, g_2, ⋯, g_m, where each g_j comes from a universal gate set and acts on at most two qubits. To understand what the sum of histories is, we visualize the evolution of a quantum state given a quantum circuit as a tree. The root is the input |0⟩^⊗n, and each node in the tree has 2^n children, each representing a state in ℂ^(2^n). The weight on a tree edge from a node in the j-th level representing a state |x⟩ to a node in the (j+1)-th level representing a state |y⟩ is ⟨y|g_{j+1}|x⟩, the amplitude of |y⟩ after applying g_{j+1} on |x⟩. The transition amplitude of a root-to-leaf path is the product of all the weights on the edges along the path. To get the probability of the final state being |ψ⟩, we sum up the amplitudes of all root-to-leaf paths that end at a node representing |ψ⟩.
More formally, for the quantum circuit C, its sum over histories tree is a tree of depth m, with one level for each gate g_i in addition to the root, and with branching factor 2^n.
Define — A history is a path in the sum of histories tree. We will denote a history by a sequence (u_0 = |0⟩^⊗n → u_1 → ⋯ → u_{m−1} → u_m = x) for some final state x.
Define — Let u, v ∈ {0,1}^n. Let the amplitude of the edge (|u⟩, |v⟩) in the j-th level of the sum over histories tree be α_j(u → v) = ⟨v|g_j|u⟩. For any history h = (u_0 → u_1 → ⋯ → u_{m−1} → u_m), the transition amplitude of the history is the product α_h = α_1(|0⟩^⊗n → u_1) α_2(u_1 → u_2) ⋯ α_m(u_{m−1} → x).
Claim — For a history (u_0 → ⋯ → u_m), the transition amplitude of the history is computable in polynomial time.
Each gate g_j can be decomposed into g_j = I ⊗ g̃_j for some unitary operator g̃_j acting on two qubits, which without loss of generality can be taken to be the first two. Hence, ⟨v|g_j|u⟩ = ⟨v_1, v_2|g̃_j|u_1, u_2⟩⟨v_3, ⋯, v_n|u_3, ⋯, u_n⟩, which can be computed in time polynomial in n. Since m is polynomial in n, the transition amplitude of the history can be computed in polynomial time.
Claim — Let C|0⟩^⊗n = Σ_{x ∈ {0,1}^n} α_x |x⟩ be the final state of the quantum circuit. For any x ∈ {0,1}^n, the amplitude α_x can be computed by α_x = Σ_{h = (|0⟩^⊗n → u_1 → ⋯ → u_{m−1} → |x⟩)} α_h.
We have α_x = ⟨x|C|0⟩^⊗n = ⟨x|g_m g_{m−1} ⋯ g_1|0⟩^⊗n. The result comes directly from inserting the identity I = Σ_{y ∈ {0,1}^n} |y⟩⟨y| between g_1 and g_2, between g_2 and g_3, and so on, and then expanding out the equation. Each term then corresponds to an α_h, where h = (|0⟩^⊗n → u_1 → ⋯ → u_{m−1} → |x⟩).
Claim — APPROX-QCIRCUIT-PROB ∈ PSPACE
Notice that in the sum over histories algorithm to compute some amplitude α_x, only one history is stored at any point in the computation. Hence, the sum over histories algorithm uses O(nm) space to compute α_x for any x, since O(nm) bits are needed to store the current history in addition to some workspace variables.
Therefore, in polynomial space, we may compute Σ_x |α_x|² over all x with the first qubit being 1, which is the probability that the first qubit is measured to be 1 by the end of the circuit.
Notice that compared with the simulation given for the proof that BQP ⊆ EXP, our algorithm here takes far less space but far more time instead. In fact it takes O(m 2^(mn)) time to calculate a single amplitude!
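The space/time trade-off is visible in a direct implementation. A sketch in plain Python, reusing the (gate, targets) convention from the simulator above (the helper names and the integer encoding of basis states, with qubit q as bit q, are assumptions):

```python
from itertools import product

def edge_amplitude(gate, targets, u, v, n):
    """<v|g|u> for a gate on the `targets` qubits of basis states u, v
    (encoded as n-bit integers); zero unless u and v agree off-target."""
    rest = [q for q in range(n) if q not in targets]
    if any((u >> q) & 1 != (v >> q) & 1 for q in rest):
        return 0.0
    row = sum(((v >> q) & 1) << i for i, q in enumerate(reversed(targets)))
    col = sum(((u >> q) & 1) << i for i, q in enumerate(reversed(targets)))
    return gate[row, col]

def amplitude(circuit, n, x):
    """alpha_x as the sum over all histories (u_0 = 0 -> ... -> u_m = x) of
    the product of edge amplitudes: O(nm) space, but time on the order of
    2^(nm), matching the bound in the text."""
    m = len(circuit)
    total = 0.0
    for inner in product(range(2 ** n), repeat=m - 1):
        path = (0,) + inner + (x,)    # only the current history is stored
        amp = 1.0
        for j, (gate, targets) in enumerate(circuit):
            amp *= edge_amplitude(gate, targets, path[j], path[j + 1], n)
            if amp == 0.0:
                break                 # dead branch of the tree
        total += amp
    return total
```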
A similar sum-over-histories argument can be used to show that BQP ⊆ PP.
We know P ⊆ BQP, since every classical circuit can be simulated by a quantum circuit.
It is conjectured that BQP solves hard problems outside of P, specifically, problems in NP. The claim remains conjectural because we don't know whether P = NP, so we don't know whether those problems are actually in P. Below is some evidence for the conjecture:
{
"paragraph_id": 0,
"text": "In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3.",
"title": ""
},
{
"paragraph_id": 2,
"text": "BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits. A language L is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits { Q n : n ∈ N } {\\displaystyle \\{Q_{n}\\colon n\\in \\mathbb {N} \\}} , such that",
"title": "Definition"
},
{
"paragraph_id": 3,
"text": "Alternatively, one can define BQP in terms of quantum Turing machines. A language L is in BQP if and only if there exists a polynomial quantum Turing machine that accepts L with an error probability of at most 1/3 for all instances.",
"title": "Definition"
},
{
"paragraph_id": 4,
"text": "Similarly to other \"bounded error\" probabilistic classes the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error as high as 1/2 − n on the one hand, or requiring error as small as 2 on the other hand, where c is any positive constant, and n is the length of input.",
"title": "Definition"
},
{
"paragraph_id": 5,
"text": "Similar to the notion of NP-completeness and other complete problems, we can define a complete problem as a problem that is in Promise-BQP and that every problem in Promise-BQP reduces to it in polynomial time.",
"title": "A complete problem for Promise-BQP"
},
{
"paragraph_id": 6,
"text": "Here is an intuitive problem that is complete for efficient quantum computation, which stems directly from the definition of Promise-BQP. Note that for technical reasons, completeness proofs focus on the promise problem version of BQP. We show that the problem below is complete for the Promise-BQP complexity class (and not for the total BQP complexity class having a trivial promise, for which no complete problems are known).",
"title": "A complete problem for Promise-BQP"
},
{
"paragraph_id": 7,
"text": "Given a description of a quantum circuit C {\\displaystyle C} acting on n {\\displaystyle n} qubits with m {\\displaystyle m} gates, where m {\\displaystyle m} is a polynomial in n {\\displaystyle n} and each gate acts on one or two qubits, and two numbers α , β ∈ [ 0 , 1 ] , α > β {\\displaystyle \\alpha ,\\beta \\in [0,1],\\alpha >\\beta } , distinguish between the following two cases:",
"title": "A complete problem for Promise-BQP"
},
{
"paragraph_id": 8,
"text": "Here, there is a promise on the inputs as the problem does not specify the behavior if an instance is not covered by these two cases.",
"title": "A complete problem for Promise-BQP"
},
{
"paragraph_id": 9,
"text": "Claim. Any BQP problem reduces to APPROX-QCIRCUIT-PROB.",
"title": "A complete problem for Promise-BQP"
},
{
"paragraph_id": 10,
"text": "Proof. Suppose we have an algorithm A {\\displaystyle A} that solves APPROX-QCIRCUIT-PROB, i.e., given a quantum circuit C {\\displaystyle C} acting on n {\\displaystyle n} qubits, and two numbers α , β ∈ [ 0 , 1 ] , α > β {\\displaystyle \\alpha ,\\beta \\in [0,1],\\alpha >\\beta } , A {\\displaystyle A} distinguishes between the above two cases. We can solve any problem in BQP with this oracle, by setting α = 2 / 3 , β = 1 / 3 {\\displaystyle \\alpha =2/3,\\beta =1/3} .",
"title": "A complete problem for Promise-BQP"
},
{
"paragraph_id": 11,
"text": "For any L ∈ B Q P {\\displaystyle L\\in \\mathrm {BQP} } , there exists family of quantum circuits { Q n : n ∈ N } {\\displaystyle \\{Q_{n}\\colon n\\in \\mathbb {N} \\}} such that for all n ∈ N {\\displaystyle n\\in \\mathbb {N} } , a state | x ⟩ {\\displaystyle |x\\rangle } of n {\\displaystyle n} qubits, if x ∈ L , P r ( Q n ( | x ⟩ ) = 1 ) ≥ 2 / 3 {\\displaystyle x\\in L,Pr(Q_{n}(|x\\rangle )=1)\\geq 2/3} ; else if x ∉ L , P r ( Q n ( | x ⟩ ) = 0 ) ≥ 2 / 3 {\\displaystyle x\\notin L,Pr(Q_{n}(|x\\rangle )=0)\\geq 2/3} . Fix an input | x ⟩ {\\displaystyle |x\\rangle } of n {\\displaystyle n} qubits, and the corresponding quantum circuit Q n {\\displaystyle Q_{n}} . We can first construct a circuit C x {\\displaystyle C_{x}} such that C x | 0 ⟩ ⊗ n = | x ⟩ {\\displaystyle C_{x}|0\\rangle ^{\\otimes n}=|x\\rangle } . This can be done easily by hardwiring | x ⟩ {\\displaystyle |x\\rangle } and apply a sequence of CNOT gates to flip the qubits. Then we can combine two circuits to get C ′ = Q n C x {\\displaystyle C'=Q_{n}C_{x}} , and now C ′ | 0 ⟩ ⊗ n = Q n | x ⟩ {\\displaystyle C'|0\\rangle ^{\\otimes n}=Q_{n}|x\\rangle } . And finally, necessarily the results of Q n {\\displaystyle Q_{n}} is obtained by measuring several qubits and apply some (classical) logic gates to them. We can always defer the measurement and reroute the circuits so that by measuring the first qubit of C ′ | 0 ⟩ ⊗ n = Q n | x ⟩ {\\displaystyle C'|0\\rangle ^{\\otimes n}=Q_{n}|x\\rangle } , we get the output. This will be our circuit C {\\displaystyle C} , and we decide the membership of x {\\displaystyle x} in L {\\displaystyle L} by running A ( C ) {\\displaystyle A(C)} with α = 2 / 3 , β = 1 / 3 {\\displaystyle \\alpha =2/3,\\beta =1/3} . By definition of BQP, we will either fall into the first case (acceptance), or the second case (rejection), so L ∈ B Q P {\\displaystyle L\\in \\mathrm {BQP} } reduces to APPROX-QCIRCUIT-PROB.",
"title": "A complete problem for Promise-BQP"
},
{
"paragraph_id": 12,
"text": "APPROX-QCIRCUIT-PROB comes handy when we try to prove the relationships between some well-known complexity classes and BQP.",
"title": "A complete problem for Promise-BQP"
},
{
"paragraph_id": 13,
"text": "What is the relationship between B Q P {\\displaystyle {\\mathsf {BQP}}} and N P {\\displaystyle {\\mathsf {NP}}} ?",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 14,
"text": "BQP is defined for quantum computers; the corresponding complexity class for classical computers (or more formally for probabilistic Turing machines) is BPP. Just like P and BPP, BQP is low for itself, which means BQP = BQP. Informally, this is true because polynomial time algorithms are closed under composition. If a polynomial time algorithm calls polynomial time algorithms as subroutines, the resulting algorithm is still polynomial time.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 15,
"text": "BQP contains P and BPP and is contained in AWPP, PP and PSPACE. In fact, BQP is low for PP, meaning that a PP machine achieves no benefit from being able to solve BQP problems instantly, an indication of the possible difference in power between these similar classes. The known relationships with classic complexity classes are:",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 16,
"text": "As the problem of P ≟ PSPACE has not yet been solved, the proof of inequality between BQP and classes mentioned above is supposed to be difficult. The relation between BQP and NP is not known. In May 2018, computer scientists Ran Raz of Princeton University and Avishay Tal of Stanford University published a paper which showed that, relative to an oracle, BQP was not contained in PH. It can be proven that there exists an oracle A such that BQP ⊈ {\\displaystyle \\nsubseteq } PH. In an extremely informal sense, this can be thought of as giving PH and BQP an identical, but additional, capability and verifying that BQP with the oracle (BQP) can do things PH cannot. While an oracle separation has been proven, the fact that BQP is not contained in PH has not been proven. An oracle separation does not prove whether or not complexity classes are the same. The oracle separation gives intuition that BQP may not be contained in PH.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 17,
"text": "It has been suspected for many years that Fourier Sampling is a problem that exists within BQP, but not within the polynomial hierarchy. Recent conjectures have provided evidence that a similar problem, Fourier Checking, also exists in the class BQP without being contained in the polynomial hierarchy. This conjecture is especially notable because it suggests that problems existing in BQP could be classified as harder than NP-Complete problems. Paired with the fact that many practical BQP problems are suspected to exist outside of P (it is suspected and not verified because there is no proof that P ≠ NP), this illustrates the potential power of quantum computing in relation to classical computing.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 18,
"text": "Adding postselection to BQP results in the complexity class PostBQP which is equal to PP.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 19,
"text": "We will prove or discuss some of these results below.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 20,
"text": "We begin with an easier containment. To show that B Q P ⊆ E X P {\\displaystyle {\\mathsf {BQP}}\\subseteq {\\mathsf {EXP}}} , it suffices to show that APPROX-QCIRCUIT-PROB is in EXP since APPROX-QCIRCUIT-PROB is BQP-complete.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 21,
"text": "Claim — APPROX-QCIRCUIT-PROB ∈ E X P {\\displaystyle {\\text{APPROX-QCIRCUIT-PROB}}\\in {\\mathsf {EXP}}}",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 22,
"text": "The idea is simple. Since we have exponential power, given a quantum circuit C, we can use classical computer to stimulate each gate in C to get the final state.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 23,
"text": "More formally, let C be a polynomial sized quantum circuit on n qubits and m gates, where m is polynomial in n. Let | ψ 0 ⟩ = | 0 ⟩ ⊗ n {\\displaystyle |\\psi _{0}\\rangle =|0\\rangle ^{\\otimes n}} and | ψ i ⟩ {\\displaystyle |\\psi _{i}\\rangle } be the state after the i-th gate in the circuit is applied to | ψ i − 1 ⟩ {\\displaystyle |\\psi _{i-1}\\rangle } . Each state | ψ i ⟩ {\\displaystyle |\\psi _{i}\\rangle } can be represented in a classical computer as a unit vector in C 2 n {\\displaystyle \\mathbb {C} ^{2^{n}}} . Furthermore, each gate can be represented by a matrix in C 2 n × 2 n {\\displaystyle \\mathbb {C} ^{2^{n}\\times 2^{n}}} . Hence, the final state | ψ m ⟩ {\\displaystyle |\\psi _{m}\\rangle } can be computed in O ( m 2 2 n ) {\\displaystyle O(m2^{2n})} time, and therefore all together, we have an 2 O ( n ) {\\displaystyle 2^{O(n)}} time algorithm for calculating the final state, and thus the probability that the first qubit is measured to be one. This implies that APPROX-QCIRCUIT-PROB ∈ E X P {\\displaystyle {\\text{APPROX-QCIRCUIT-PROB}}\\in {\\mathsf {EXP}}} .",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 24,
"text": "Note that this algorithm also requires 2 O ( n ) {\\displaystyle 2^{O(n)}} space to store the vectors and the matrices. We will show in the following section that we can improve upon the space complexity.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 25,
"text": "To prove B Q P ⊆ P S P A C E {\\displaystyle {\\mathsf {BQP}}\\subseteq {\\mathsf {PSPACE}}} , we first introduce a technique called the sum of histories.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 26,
"text": "Source:",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 27,
"text": "Sum of histories is a technique introduced by physicist Richard Feynman for path integral formulation. We apply this technique to quantum computing to solve APPROX-QCIRCUIT-PROB.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 28,
"text": "Consider a quantum circuit C, which consists of t gates, g 1 , g 2 , ⋯ , g m {\\displaystyle g_{1},g_{2},\\cdots ,g_{m}} , where each g j {\\displaystyle g_{j}} comes from a universal gate set and acts on at most two qubits. To understand what the sum of histories is, we visualize the evolution of a quantum state given a quantum circuit as a tree. The root is the input | 0 ⟩ ⊗ n {\\displaystyle |0\\rangle ^{\\otimes n}} , and each node in the tree has 2 n {\\displaystyle 2^{n}} children, each representing a state in C n {\\displaystyle \\mathbb {C} ^{n}} . The weight on a tree edge from a node in j-th level representing a state | x ⟩ {\\displaystyle |x\\rangle } to a node in j + 1 {\\displaystyle j+1} -th level representing a state | y ⟩ {\\displaystyle |y\\rangle } is ⟨ y | g j + 1 | x ⟩ {\\displaystyle \\langle y|g_{j+1}|x\\rangle } , the amplitude of | y ⟩ {\\displaystyle |y\\rangle } after applying g j + 1 {\\displaystyle g_{j+1}} on | x ⟩ {\\displaystyle |x\\rangle } . The transition amplitude of a root-to-leaf path is the product of all the weights on the edges along the path. To get the probability of the final state being | ψ ⟩ {\\displaystyle |\\psi \\rangle } , we sum up the amplitudes of all root-to-leave paths that ends at a node representing | ψ ⟩ {\\displaystyle |\\psi \\rangle } .",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 29,
"text": "More formally, for the quantum circuit C, its sum over histories tree is a tree of depth m, with one level for each gate g i {\\displaystyle g_{i}} in addition to the root, and with branching factor 2 n {\\displaystyle 2^{n}} .",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 30,
"text": "Define — A history is a path in the sum of histories tree. We will denote a history by a sequence ( u 0 = | 0 ⟩ ⊗ n → u 1 → ⋯ → u m − 1 → u m = x ) {\\displaystyle (u_{0}=|0\\rangle ^{\\otimes n}\\rightarrow u_{1}\\rightarrow \\cdots \\rightarrow u_{m-1}\\rightarrow u_{m}=x)} for some final state x.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 31,
"text": "Define — Let u , v ∈ { 0 , 1 } n {\\displaystyle u,v\\in \\{0,1\\}^{n}} . Let amplitude of the edge ( | u ⟩ , | v ⟩ ) {\\displaystyle (|u\\rangle ,|v\\rangle )} in the j-th level of the sum over histories tree be α j ( u → v ) = ⟨ v | g j | u ⟩ {\\displaystyle \\alpha _{j}(u\\rightarrow v)=\\langle v|g_{j}|u\\rangle } . For any history h = ( u 0 → u 1 → ⋯ → u m − 1 → u m ) {\\displaystyle h=(u_{0}\\rightarrow u_{1}\\rightarrow \\cdots \\rightarrow u_{m-1}\\rightarrow u_{m})} , the transition amplitude of the history is the product α h = α 1 ( | 0 ⟩ ⊗ n → u 1 ) α 2 ( u 1 → u 2 ) ⋯ α m ( u m − 1 → x ) {\\displaystyle \\alpha _{h}=\\alpha _{1}(|0\\rangle ^{\\otimes n}\\rightarrow u_{1})\\alpha _{2}(u_{1}\\rightarrow u_{2})\\cdots \\alpha _{m}(u_{m-1}\\rightarrow x)} .",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 32,
"text": "Claim — For a history ( u 0 → ⋯ → u m ) {\\displaystyle (u_{0}\\rightarrow \\cdots \\rightarrow u_{m})} . The transition amplitude of the history is computable in polynomial time.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 33,
"text": "Each gate g j {\\displaystyle g_{j}} can be decomposed into g j = I ⊗ g ~ j {\\displaystyle g_{j}=I\\otimes {\\tilde {g}}_{j}} for some unitary operator g ~ j {\\displaystyle {\\tilde {g}}_{j}} acting on two qubits, which without loss of generality can taken to be the first two. Hence, ⟨ v | g j | u ⟩ = ⟨ v 1 , v 2 | g ~ j | u 1 , u 2 ⟩ ⟨ v 3 , ⋯ , v n | u 3 , ⋯ , u n ⟩ {\\displaystyle \\langle v|g_{j}|u\\rangle =\\langle v_{1},v_{2}|{\\tilde {g}}_{j}|u_{1},u_{2}\\rangle \\langle v_{3},\\cdots ,v_{n}|u_{3},\\cdots ,u_{n}\\rangle } which can be computed in polynomial time in n. Since m is polynomial in n, the transition amplitude of the history can be computed in polynomial time.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 34,
"text": "Claim — Let C | 0 ⟩ ⊗ n = ∑ x ∈ { 0 , 1 } n α x | x ⟩ {\\displaystyle C|0\\rangle ^{\\otimes n}=\\sum _{x\\in \\{0,1\\}^{n}}\\alpha _{x}|x\\rangle } be the final state of the quantum circuit. For some x ∈ { 0 , 1 } n {\\displaystyle x\\in \\{0,1\\}^{n}} , the amplitude α x {\\displaystyle \\alpha _{x}} can be computed by α x = ∑ h = ( | 0 ⟩ ⊗ n → u 1 → ⋯ → u t − 1 → | x ⟩ ) α h {\\displaystyle \\alpha _{x}=\\sum _{h=(|0\\rangle ^{\\otimes n}\\rightarrow u_{1}\\rightarrow \\cdots \\rightarrow u_{t-1}\\rightarrow |x\\rangle )}\\alpha _{h}} .",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 35,
"text": "We have α x = ⟨ x | C | 0 ⟩ ⊗ n = ⟨ x | g t g t − 1 ⋯ g 1 | C | 0 ⟩ ⊗ n {\\displaystyle \\alpha _{x}=\\langle x|C|0\\rangle ^{\\otimes n}=\\langle x|g_{t}g_{t-1}\\cdots g_{1}|C|0\\rangle ^{\\otimes n}} . The result comes directly by inserting I = ∑ x ∈ { 0 , 1 } n | x ⟩ ⟨ x | {\\displaystyle I=\\sum _{x\\in \\{0,1\\}^{n}}|x\\rangle \\langle x|} between g 1 , g 2 {\\displaystyle g_{1},g_{2}} , and g 2 , g 3 {\\displaystyle g_{2},g_{3}} , and so on, and then expand out the equation. Then each term corresponds to a α h {\\displaystyle \\alpha _{h}} , where h = ( | 0 ⟩ ⊗ n → u 1 → ⋯ → u t − 1 → | x ⟩ ) {\\displaystyle h=(|0\\rangle ^{\\otimes n}\\rightarrow u_{1}\\rightarrow \\cdots \\rightarrow u_{t-1}\\rightarrow |x\\rangle )}",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 36,
"text": "Claim — APPROX-QCIRCUIT-PROB ∈ P S P A C E {\\displaystyle {\\text{APPROX-QCIRCUIT-PROB}}\\in {\\mathsf {PSPACE}}}",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 37,
"text": "Notice in the sum over histories algorithm to compute some amplitude α x {\\displaystyle \\alpha _{x}} , only one history is stored at any point in the computation. Hence, the sum over histories algorithm uses O ( n m ) {\\displaystyle O(nm)} space to compute α x {\\displaystyle \\alpha _{x}} for any x since O ( n m ) {\\displaystyle O(nm)} bits are needed to store the histories in addition to some workspace variables.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 38,
"text": "Therefore, in polynomial space, we may compute ∑ x | α x | 2 {\\displaystyle \\sum _{x}|\\alpha _{x}|^{2}} over all x with the first qubit being 1, which is the probability that the first qubit is measured to be 1 by the end of the circuit.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 39,
"text": "Notice that compared with the simulation given for the proof that B Q P ⊆ E X P {\\displaystyle {\\mathsf {BQP}}\\subseteq {\\mathsf {EXP}}} , our algorithm here takes far less space but far more time instead. In fact it takes O ( m 2 m n ) {\\displaystyle O(m2^{mn})} time to calculate a single amplitude!",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 40,
"text": "A similar sum-over-histories argument can be used to show that B Q P ⊆ P P {\\displaystyle {\\mathsf {BQP}}\\subseteq {\\mathsf {PP}}} .",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 41,
"text": "We know P ⊆ B Q P {\\displaystyle {\\mathsf {P}}\\subseteq {\\mathsf {BQP}}} , since every classical circuit can be simulated by a quantum circuit.",
"title": "Relationship to other complexity classes"
},
{
"paragraph_id": 42,
"text": "It is conjectured that BQP solves hard problems outside of P, specifically, problems in NP. The claim is indefinite because we don't know if P=NP, so we don't know if those problems are actually in P. Below are some evidence of the conjecture:",
"title": "Relationship to other complexity classes"
}
] | In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP. A decision problem is a member of BQP if there exists a quantum algorithm that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3. | 2001-08-27T03:26:43Z | 2023-12-05T22:22:26Z | [
"Template:Diagonal split header",
"Template:Math proof",
"Template:Isbn",
"Template:Webarchive",
"Template:ComplexityClasses",
"Template:Yes",
"Template:Reflist",
"Template:Quantum computing",
"Template:Unsolved",
"Template:Val",
"Template:Cite web",
"Template:Short description",
"Template:No",
"Template:Math theorem",
"Template:Mvar",
"Template:Anchor",
"Template:Cite journal",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/BQP |
4,081 | Blade Runner 3: Replicant Night | Blade Runner 3: Replicant Night is a science fiction novel by American writer K. W. Jeter, first published in 1996. It is a continuation of Jeter's novel Blade Runner 2: The Edge of Human, which was itself a sequel to both the film Blade Runner and the novel upon which the film was based, Philip K. Dick's Do Androids Dream of Electric Sheep?
Living on Mars, Deckard is acting as a consultant to a movie crew filming the story of his days as a blade runner. He finds himself drawn into a mission on behalf of the replicants he was once assigned to kill. Meanwhile, the mystery surrounding the beginnings of the Tyrell Corporation is being exposed.
The plot element of a replicant giving birth served as the basis for the 2017 film Blade Runner 2049. | [
{
"paragraph_id": 0,
"text": "Blade Runner 3: Replicant Night is a science fiction novel by an American writer K. W. Jeter, first published in 1996. It is a continuation of Jeter's novel Blade Runner 2: The Edge of Human, which was itself a sequel to both the film Blade Runner and the novel upon which the film was based, Philip K. Dick's Do Androids Dream of Electric Sheep?",
"title": ""
},
{
"paragraph_id": 1,
"text": "Living on Mars, Deckard is acting as a consultant to a movie crew filming the story of his days as a blade runner. He finds himself drawn into a mission on behalf of the replicants he was once assigned to kill. Meanwhile, the mystery surrounding the beginnings of the Tyrell Corporation is being exposed.",
"title": "Plot introduction"
},
{
"paragraph_id": 2,
"text": "The plot element of a replicant giving birth served as the basis for the 2017 film Blade Runner 2049.",
"title": "Film adaptation"
}
] | Blade Runner 3: Replicant Night is a science fiction novel by American writer K. W. Jeter, first published in 1996. It is a continuation of Jeter's novel Blade Runner 2: The Edge of Human, which was itself a sequel to both the film Blade Runner and the novel upon which the film was based, Philip K. Dick's Do Androids Dream of Electric Sheep? | 2001-08-22T11:15:10Z | 2023-11-03T23:55:00Z | [
"Template:Infobox book",
"Template:Citogenesis",
"Template:Reflist",
"Template:Cite web",
"Template:Blade Runner",
"Template:1990s-sf-novel-stub",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Blade_Runner_3:_Replicant_Night |
4,082 | Blade Runner 2: The Edge of Human | Blade Runner 2: The Edge of Human (1995) is a science fiction novel by American writer K. W. Jeter. It is a continuation of both the film Blade Runner and the novel upon which the film was based, Philip K. Dick's Do Androids Dream of Electric Sheep?
Several months after the events depicted in Blade Runner, Deckard has retired to an isolated shack outside the city, taking the replicant Rachael with him in a Tyrell transport container, which slows down the replicant aging process. He is approached by a woman who explains she is Sarah Tyrell, niece of Eldon Tyrell, heiress to the Tyrell Corporation and the human template ("templant") for the Rachael replicant. She asks Deckard to hunt down the "missing" sixth replicant. At the same time, the templant for Roy Batty hires Dave Holden, the blade runner attacked by Leon, to help him hunt down the man he believes is the sixth replicant—Deckard.
Deckard and Holden's investigations lead them to re-visit Sebastian, Bryant, and John Isidore (from the book Do Androids Dream Of Electric Sheep?), learning more about the nature of the blade runners and the replicants.
When Deckard, Batty, and Holden finally clash, Batty's super-human fighting prowess leads Holden to believe he has been duped all along and that Batty is the sixth replicant. He shoots him. Deckard returns to Sarah with his suspicion: there is no sixth replicant. Sarah, speaking via a remote camera, confesses that she invented and maintained the rumor herself in order to deliberately discredit and eventually destroy the Tyrell Corporation because her uncle Eldon had based Rachael on her and then abandoned the real Sarah. Sarah brings Rachael back to the Corporation to meet with Deckard, and they escape.
However, Holden, recovering from his injuries during the fight, later uncovers the truth: Rachael has been killed by Tyrell agents, and the "Rachael" who escaped with Deckard was actually Sarah. She has completed her revenge by both destroying Tyrell and taking back Rachael's place.
The book's plot draws from other material related to Blade Runner in a number of ways:
However, it also contradicts material in some ways:
Michael Giltz of Entertainment Weekly gave the book a "C−", feeling that "only hardcore fans will be satisfied by this tale" and saying Jeter's "habit of echoing dialogue and scenes from the film is annoying and begs comparisons he would do well to avoid." Tal Cohen of Tal Cohen's Bookshelf called The Edge of Human "a good book", praising Jeter's "further, and deeper, investigation of the questions Philip K. Dick originally asked", but criticized the book for its "needless grandioseness" and for "rel[ying] on Blade Runner too heavily, [as] the number of new characters introduced is extremely small..."
Ian Kaplan of BearCave.com gave the book three stars out of five, saying that while he was "not entirely satisfied" and felt that the "story tends to be shallow", "Jeter does deal with the moral dilemma of the Blade Runners who hunt down beings that are virtually human in every way." J. Patton of The Bent Cover praised Jeter for "[not] try[ing] to emulate Philip K. Dick", adding, "This book also has all the grittiness and dark edges that the movie showed off so well, along with a very fast pace that will keep you reading into the wee hours of the night."
In the late 1990s, Edge of Human had been adapted into a screenplay by Stuart Hazeldine, Blade Runner Down, that was to be filmed as the sequel to the 1982 film Blade Runner. Ultimately neither this script nor the Jeter novel were used for the eventual sequel, Blade Runner 2049, which follows a different story. | [
{
"paragraph_id": 0,
"text": "Blade Runner 2: The Edge of Human (1995) is a science fiction novel by American writer K. W. Jeter. It is a continuation of both the film Blade Runner and the novel upon which the film was based, Philip K. Dick's Do Androids Dream of Electric Sheep?",
"title": ""
},
{
"paragraph_id": 1,
"text": "Several months after the events depicted in Blade Runner, Deckard has retired to an isolated shack outside the city, taking the replicant Rachael with him in a Tyrell transport container, which slows down the replicant aging process. He is approached by a woman who explains she is Sarah Tyrell, niece of Eldon Tyrell, heiress to the Tyrell Corporation and the human template (\"templant\") for the Rachael replicant. She asks Deckard to hunt down the \"missing\" sixth replicant. At the same time, the templant for Roy Batty hires Dave Holden, the blade runner attacked by Leon, to help him hunt down the man he believes is the sixth replicant—Deckard.",
"title": "Plot"
},
{
"paragraph_id": 2,
"text": "Deckard and Holden's investigations lead them to re-visit Sebastian, Bryant, and John Isidore (from the book Do Androids Dream Of Electric Sheep?), learning more about the nature of the blade runners and the replicants.",
"title": "Plot"
},
{
"paragraph_id": 3,
"text": "When Deckard, Batty, and Holden finally clash, Batty's super-human fighting prowess leads Holden to believe he has been duped all along and that Batty is the sixth replicant. He shoots him. Deckard returns to Sarah with his suspicion: there is no sixth replicant. Sarah, speaking via a remote camera, confesses that she invented and maintained the rumor herself in order to deliberately discredit and eventually destroy the Tyrell Corporation because her uncle Eldon had based Rachel on her and then abandoned the real Sarah. Sarah brings Rachael back to the Corporation to meet with Deckard, and they escape.",
"title": "Plot"
},
{
"paragraph_id": 4,
"text": "However, Holden, recovering from his injuries during the fight, later uncovers the truth: Rachael has been killed by Tyrell agents, and the \"Rachael\" who escaped with Deckard was actually Sarah. She has completed her revenge by both destroying Tyrell and taking back Rachael's place.",
"title": "Plot"
},
{
"paragraph_id": 5,
"text": "The book's plot draws from other material related to Blade Runner in a number of ways:",
"title": "Relationship to other works"
},
{
"paragraph_id": 6,
"text": "However, it also contradicts material in some ways:",
"title": "Relationship to other works"
},
{
"paragraph_id": 7,
"text": "Michael Giltz of Entertainment Weekly gave the book a \"C−\", feeling that \"only hardcore fans will be satisfied by this tale\" and saying Jeter's \"habit of echoing dialogue and scenes from the film is annoying and begs comparisons he would do well to avoid.\" Tal Cohen of Tal Cohen's Bookshelf called The Edge of Human \"a good book\", praising Jeter's \"further, and deeper, investigation of the questions Philip K. Dick originally asked\", but criticized the book for its \"needless grandioseness\" and for \"rel[ying] on Blade Runner too heavily, [as] the number of new characters introduced is extremely small...\"",
"title": "Reception"
},
{
"paragraph_id": 8,
"text": "Ian Kaplan of BearCave.com gave the book three stars out of five, saying that while he was \"not entirely satisfied\" and felt that the \"story tends to be shallow\", \"Jeter does deal with the moral dilemma of the Blade Runners who hunt down beings that are virtually human in every way.\" J. Patton of The Bent Cover praised Jeter for \"[not] try[ing] to emulate Philip K. Dick\", adding, \"This book also has all the grittiness and dark edges that the movie showed off so well, along with a very fast pace that will keep you reading into the wee hours of the night.\"",
"title": "Reception"
},
{
"paragraph_id": 9,
"text": "In the late 1990s, Edge of Human had been adapted into a screenplay by Stuart Hazeldine, Blade Runner Down, that was to be filmed as the sequel to the 1982 film Blade Runner. Ultimately neither this script nor the Jeter novel were used for the eventual sequel, Blade Runner 2049, which follows a different story.",
"title": "Failed film adaptation"
}
] | Blade Runner 2: The Edge of Human (1995) is a science fiction novel by American writer K. W. Jeter. It is a continuation of both the film Blade Runner and the novel upon which the film was based, Philip K. Dick's Do Androids Dream of Electric Sheep? | 2001-08-22T11:20:46Z | 2023-12-11T14:45:20Z | [
"Template:Infobox book",
"Template:Reflist",
"Template:Webarchive",
"Template:Cite news",
"Template:Cite book",
"Template:Short description",
"Template:Redirect",
"Template:Anchor",
"Template:Blade Runner"
] | https://en.wikipedia.org/wiki/Blade_Runner_2:_The_Edge_of_Human |
4,086 | Brainfuck | Brainfuck is an esoteric programming language created in 1993 by Urban Müller.
Notable for its extreme minimalism, the language consists of only eight simple commands, a data pointer and an instruction pointer. While it is fully Turing complete, it is not intended for practical use, but to challenge and amuse programmers. Brainfuck requires one to break commands into microscopic steps.
The language's name is a reference to the slang term brainfuck, which refers to things so complicated or unusual that they exceed the limits of one's understanding; the language was not meant for designing actual software but to challenge the boundaries of computer programming.
Müller designed Brainfuck with the goal of implementing the smallest possible compiler, inspired by the 1024-byte compiler for the FALSE programming language. Müller's original compiler was implemented in machine language and compiled to a binary with a size of 296 bytes. He uploaded the first Brainfuck compiler to Aminet in 1993. The program came with a "Readme" file, which briefly described the language, and challenged the reader "Who can program anything useful with it? :)". Müller also included an interpreter and some examples. A second version of the compiler used only 240 bytes.
Except for its two I/O commands, Brainfuck is a minor variation of the formal programming language P′′ created by Corrado Böhm in 1964, which is explicitly based on the Turing machine. In fact, using six symbols equivalent to the respective Brainfuck commands +, -, <, >, [, ], Böhm provided an explicit program for each of the basic functions that together serve to compute any computable function. So the first "Brainfuck" programs appear in Böhm's 1964 paper – and they were sufficient to prove Turing completeness.
The language consists of eight commands. A brainfuck program is a sequence of these commands, possibly interspersed with other characters (which are ignored). The commands are executed sequentially, with some exceptions: an instruction pointer begins at the first command, and each command it points to is executed, after which it normally moves forward to the next command. The program terminates when the instruction pointer moves past the last command.
The brainfuck language uses a simple machine model consisting of the program and instruction pointer, as well as a one-dimensional array of at least 30,000 byte cells initialized to zero; a movable data pointer (initialized to point to the leftmost byte of the array); and two streams of bytes for input and output (most often connected to a keyboard and a monitor respectively, and using the ASCII character encoding).
The eight language commands each consist of a single character:
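>  Increment the data pointer by one (to point to the next cell to the right).
<  Decrement the data pointer by one (to point to the next cell to the left).
+  Increment the byte at the data pointer by one.
-  Decrement the byte at the data pointer by one.
.  Output the byte at the data pointer.
,  Accept one byte of input, storing its value in the byte at the data pointer.
[  If the byte at the data pointer is zero, jump the instruction pointer forward to the command after the matching ] command.
]  If the byte at the data pointer is nonzero, jump the instruction pointer back to the command after the matching [ command.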
[ and ] match as parentheses usually do: each [ matches exactly one ] and vice versa, the [ comes first, and there can be no unmatched [ or ] between the two.
As the name suggests, Brainfuck programs tend to be difficult to comprehend. This is partly because any mildly complex task requires a long sequence of commands and partly because the program's text gives no direct indications of the program's state. These, as well as Brainfuck's inefficiency and its limited input/output capabilities, are some of the reasons it is not used for serious programming. Nonetheless, like any Turing complete language, Brainfuck is theoretically capable of computing any computable function or simulating any other computational model, if given access to an unlimited amount of memory. A variety of Brainfuck programs have been written. Although Brainfuck programs, especially complicated ones, are difficult to write, it is quite trivial to write an interpreter for Brainfuck in a more typical language such as C due to its simplicity. There even exist Brainfuck interpreters written in the Brainfuck language itself.
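To illustrate how little machinery is required, the following minimal interpreter sketch in C executes a Brainfuck program passed as its first command-line argument. It is only a sketch under common assumptions, not a canonical implementation: it assumes a well-formed program with balanced brackets that stays within the conventional 30,000-cell tape, and it adopts the "leave the cell unchanged on EOF" input convention.

#include <stdio.h>
#include <string.h>

/* Minimal Brainfuck interpreter sketch: balanced brackets and in-bounds
   pointer movement are assumed rather than checked. */
int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s program\n", argv[0]);
        return 1;
    }
    const char *prog = argv[1];
    size_t len = strlen(prog);
    unsigned char tape[30000] = {0};   /* byte cells, initialized to zero */
    unsigned char *ptr = tape;         /* movable data pointer */

    for (size_t ip = 0; ip < len; ip++) {  /* ip: instruction pointer */
        switch (prog[ip]) {
        case '>': ptr++; break;            /* move data pointer right */
        case '<': ptr--; break;            /* move data pointer left */
        case '+': (*ptr)++; break;         /* increment current cell */
        case '-': (*ptr)--; break;         /* decrement current cell */
        case '.': putchar(*ptr); break;    /* output current cell */
        case ',': {                        /* read one byte; on EOF leave
                                              the cell unchanged */
            int c = getchar();
            if (c != EOF) *ptr = (unsigned char)c;
            break;
        }
        case '[':                          /* if zero, jump past matching ] */
            if (*ptr == 0)
                for (int depth = 1; depth; ) {
                    ip++;
                    if (prog[ip] == '[') depth++;
                    else if (prog[ip] == ']') depth--;
                }
            break;
        case ']':                          /* if nonzero, jump back to matching [ */
            if (*ptr != 0)
                for (int depth = 1; depth; ) {
                    ip--;
                    if (prog[ip] == ']') depth++;
                    else if (prog[ip] == '[') depth--;
                }
            break;
        default: break;                    /* any other character is a comment */
        }
    }
    return 0;
}

Compiled to a binary named, say, bf (the name is arbitrary), running ./bf '++++++++++.' outputs a single newline character (byte 10).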
Brainfuck is an example of a so-called Turing tarpit: It can be used to write any program, but it is not practical to do so, because Brainfuck provides so little abstraction that the programs get very long or complicated.
As a first, simple example, the following code snippet will add the current cell's value to the next cell:

[->+<]

Each time the loop is executed, the current cell is decremented, the data pointer moves to the right, that next cell is incremented, and the data pointer moves left again. This sequence is repeated until the starting cell is 0.
This can be incorporated into a simple addition program as follows:
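++>+++++[<+>-]++++++++[<++++++>-]<.

This stores 2 and 5 in the first two cells and adds them, leaving 7 in the first cell; the second loop then adds 48 (eight passes of six increments), so that the cell holds 55 and the ASCII character '7' is printed.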
The following program prints "Hello World!" and a newline to the screen:
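++++++++                Set Cell #0 to 8
[
    >++++               Add 4 to Cell #1
    [                   Inner loop sets Cells #2 through #5
        >++             Add 2 to Cell #2
        >+++            Add 3 to Cell #3
        >+++            Add 3 to Cell #4
        >+              Add 1 to Cell #5
        <<<<-           Decrement the inner loop counter in Cell #1
    ]                   Loop until Cell #1 is zero
    >+                  Add 1 to Cell #2
    >+                  Add 1 to Cell #3
    >-                  Subtract 1 from Cell #4
    >>+                 Add 1 to Cell #6
    [<]                 Move back to the first zero cell which is Cell #1
    <-                  Decrement the outer loop counter in Cell #0
]                       Loop until Cell #0 is zero
>>.                     Cell #2 holds 72 which is 'H'
>---.                   Subtract 3 from Cell #3 to get 101 which is 'e'
+++++++..+++.           Print 'llo' from Cell #3
>>.                     Cell #5 holds 32 which is a space
<-.                     Subtract 1 from Cell #4 to get 87 which is 'W'
<.                      Cell #3 still holds 'o'
+++.------.--------.    Print 'rld' from Cell #3
>>+.                    Add 1 to Cell #5 to get 33 which is an exclamation mark
>++.                    Add 2 to Cell #6 to get 10 which is a newline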
For "readability", this code has been spread across many lines, and blanks and comments have been added. Brainfuck ignores all characters except the eight commands +-<>[],. so no special syntax for comments is needed (as long as the comments do not contain the command characters). The code could just as well have been written as:
Another example of a code golfed version that prints Hello, World!:
This program enciphers its input with the ROT13 cipher. To do this, it must map characters A-M (ASCII 65–77) to N-Z (78-90), and vice versa. Also it must map a-m (97-109) to n-z (110-122) and vice versa. It must map all other characters to themselves; it reads characters one at a time and outputs their enciphered equivalents until it reads an EOF (here assumed to be represented as either -1 or "no change"), at which point the program terminates. | [
{
"paragraph_id": 0,
"text": "Brainfuck is an esoteric programming language created in 1993 by Urban Müller.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Notable for its extreme minimalism, the language consists of only eight simple commands, a data pointer and an instruction pointer. While it is fully Turing complete, it is not intended for practical use, but to challenge and amuse programmers. Brainfuck requires one to break commands into microscopic steps.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The language's name is a reference to the slang term brainfuck, which refers to things so complicated or unusual that they exceed the limits of one's understanding, as it was not meant or made for designing actual software but to challenge the boundaries of computer programming.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Müller designed Brainfuck with the goal of implementing the smallest possible compiler, inspired by the 1024-byte compiler for the FALSE programming language. Müller's original compiler was implemented in machine language and compiled to a binary with a size of 296 bytes. He uploaded the first Brainfuck compiler to Aminet in 1993. The program came with a \"Readme\" file, which briefly described the language, and challenged the reader \"Who can program anything useful with it? :)\". Müller also included an interpreter and some examples. A second version of the compiler used only 240 bytes.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Except for its two I/O commands, Brainfuck is a minor variation of the formal programming language P′′ created by Corrado Böhm in 1964, which is explicitly based on the Turing machine. In fact, using six symbols equivalent to the respective Brainfuck commands +, -, <, >, [, ], Böhm provided an explicit program for each of the basic functions that together serve to compute any computable function. So the first \"Brainfuck\" programs appear in Böhm's 1964 paper – and they were sufficient to prove Turing completeness.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The language consists of eight commands. A brainfuck program is a sequence of these commands, possibly interspersed with other characters (which are ignored). The commands are executed sequentially, with some exceptions: an instruction pointer begins at the first command, and each command it points to is executed, after which it normally moves forward to the next command. The program terminates when the instruction pointer moves past the last command.",
"title": "Language design"
},
{
"paragraph_id": 6,
"text": "The brainfuck language uses a simple machine model consisting of the program and instruction pointer, as well as a one-dimensional array of at least 30,000 byte cells initialized to zero; a movable data pointer (initialized to point to the leftmost byte of the array); and two streams of bytes for input and output (most often connected to a keyboard and a monitor respectively, and using the ASCII character encoding).",
"title": "Language design"
},
{
"paragraph_id": 7,
"text": "The eight language commands each consist of a single character:",
"title": "Language design"
},
{
"paragraph_id": 8,
"text": "[ and ] match as parentheses usually do: each [ matches exactly one ] and vice versa, the [ comes first, and there can be no unmatched [ or ] between the two.",
"title": "Language design"
},
{
"paragraph_id": 9,
"text": "As the name suggests, Brainfuck programs tend to be difficult to comprehend. This is partly because any mildly complex task requires a long sequence of commands and partly because the program's text gives no direct indications of the program's state. These, as well as Brainfuck's inefficiency and its limited input/output capabilities, are some of the reasons it is not used for serious programming. Nonetheless, like any Turing complete language, Brainfuck is theoretically capable of computing any computable function or simulating any other computational model, if given access to an unlimited amount of memory. A variety of Brainfuck programs have been written. Although Brainfuck programs, especially complicated ones, are difficult to write, it is quite trivial to write an interpreter for Brainfuck in a more typical language such as C due to its simplicity. There even exist Brainfuck interpreters written in the Brainfuck language itself.",
"title": "Language design"
},
{
"paragraph_id": 10,
"text": "Brainfuck is an example of a so-called Turing tarpit: It can be used to write any program, but it is not practical to do so, because Brainfuck provides so little abstraction that the programs get very long or complicated.",
"title": "Language design"
},
{
"paragraph_id": 11,
"text": "As a first, simple example, the following code snippet will add the current cell's value to the next cell: Each time the loop is executed, the current cell is decremented, the data pointer moves to the right, that next cell is incremented, and the data pointer moves left again. This sequence is repeated until the starting cell is 0.",
"title": "Examples"
},
{
"paragraph_id": 12,
"text": "This can be incorporated into a simple addition program as follows:",
"title": "Examples"
},
{
"paragraph_id": 13,
"text": "The following program prints \"Hello World!\" and a newline to the screen:",
"title": "Examples"
},
{
"paragraph_id": 14,
"text": "For \"readability\", this code has been spread across many lines, and blanks and comments have been added. Brainfuck ignores all characters except the eight commands +-<>[],. so no special syntax for comments is needed (as long as the comments do not contain the command characters). The code could just as well have been written as:",
"title": "Examples"
},
{
"paragraph_id": 15,
"text": "Another example of a code golfed version that prints Hello, World!:",
"title": "Examples"
},
{
"paragraph_id": 16,
"text": "This program enciphers its input with the ROT13 cipher. To do this, it must map characters A-M (ASCII 65–77) to N-Z (78-90), and vice versa. Also it must map a-m (97-109) to n-z (110-122) and vice versa. It must map all other characters to themselves; it reads characters one at a time and outputs their enciphered equivalents until it reads an EOF (here assumed to be represented as either -1 or \"no change\"), at which point the program terminates.",
"title": "Examples"
}
] | Brainfuck is an esoteric programming language created in 1993 by Urban Müller. Notable for its extreme minimalism, the language consists of only eight simple commands, a data pointer and an instruction pointer. While it is fully Turing complete, it is not intended for practical use, but to challenge and amuse programmers. Brainfuck requires one to break commands into microscopic steps. The language's name is a reference to the slang term brainfuck, which refers to things so complicated or unusual that they exceed the limits of one's understanding; the language was not meant for designing actual software but to challenge the boundaries of computer programming. | 2001-11-12T14:45:06Z | 2023-12-23T04:47:20Z | [
"Template:Short description",
"Template:Distinguish",
"Template:Multiple issues",
"Template:Infobox programming language",
"Template:Reflist",
"Template:Cite web",
"Template:Efn",
"Template:Notelist",
"Template:Cite journal"
] | https://en.wikipedia.org/wiki/Brainfuck |
4,091 | Bartolomeo Ammannati | Bartolomeo Ammannati (18 June 1511 – 13 April 1592) was an Italian architect and sculptor, born at Settignano, near Florence, Italy. He studied under Baccio Bandinelli and Jacopo Sansovino (assisting on the design of the Library of St. Mark's, the Biblioteca Marciana, Venice) and closely imitated the style of Michelangelo.
He was more distinguished in architecture than in sculpture. He worked in Rome in collaboration with Vignola and Vasari, including on designs for the Villa Giulia, and also on works at Lucca. From 1558 to 1570 he labored on the refurbishment and enlargement of the Pitti Palace, creating the courtyard of three wings with rusticated facades and a lower portico leading to the amphitheatre in the Boboli Gardens. His design mirrored the appearance of the main external façade of the Pitti. He was also named Consul of the Accademia delle Arti del Disegno of Florence, which had been founded by Duke Cosimo I in 1563.
In 1569, Ammannati was commissioned to build the Ponte Santa Trinita, a bridge over the Arno River. Its three elliptical arches are very light and elegant, yet the bridge survived the floods that damaged other Arno bridges at various times. The Ponte Santa Trinita was destroyed in 1944, during World War II, and rebuilt in 1957.
Ammannati designed what is considered a prototypic Mannerist sculptural ensemble in the Fountain of Neptune (Fontana del Nettuno), prominently located in the Piazza della Signoria in the center of Florence. The assignment was originally given to the aged Bartolommeo Bandinelli; when Bandinelli died, however, Ammannati's design bested the submissions of Benvenuto Cellini and Vincenzo Danti to gain the commission. Between 1563 and 1565, Ammannati and his assistants, among them Giambologna, sculpted the block of marble that had been chosen by Bandinelli. He took Grand Duke Cosimo I as the model for Neptune's face. The statue was meant to highlight Cosimo's goal of establishing a Florentine naval force. The ungainly sea god was placed at the corner of the Palazzo Vecchio within sight of Michelangelo's David, and the then 87-year-old sculptor is said to have scoffed at Ammannati, saying that he had ruined a beautiful piece of marble, with the ditty: "Ammannati, Ammanato, che bel marmo hai rovinato!" Ammannati continued work on this fountain for a decade, adding around the perimeter a cornucopia of demigod figures: bronze reclining river gods, laughing satyrs and marble sea horses emerging from the water.
In 1550 Ammannati married Laura Battiferri, an elegant poet and accomplished woman. Later in his life he had a religious crisis, influenced by Counter-Reformation piety, which led him to condemn his own works depicting nudity, and he left all his possessions to the Jesuits.
He died in Florence in 1592. | [
{
"paragraph_id": 0,
"text": "Bartolomeo Ammannati (18 June 1511 – 13 April 1592) was an Italian architect and sculptor, born at Settignano, near Florence, Italy. He studied under Baccio Bandinelli and Jacopo Sansovino (assisting on the design of the Library of St. Mark's, the Biblioteca Marciana, Venice) and closely imitated the style of Michelangelo.",
"title": ""
},
{
"paragraph_id": 1,
"text": "He was more distinguished in architecture than in sculpture. He worked in Rome in collaboration with Vignola and Vasari), including designs for the Villa Giulia, but also for works at Lucca. He labored during 1558–1570, in the refurbishment and enlargement of Pitti Palace, creating the courtyard consisting of three wings with rusticated facades, and one lower portico leading to the amphitheatre in the Boboli Gardens. His design mirrored the appearance of the main external façade of Pitti. He was also named Consul of Accademia delle Arti del Disegno of Florence, which had been founded by the Duke Cosimo I in 1563.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In 1569, Ammanati was commissioned to build the Ponte Santa Trinita, a bridge over the Arno River. The three arches are elliptic, and though very light and elegant, has survived, when floods had damaged other Arno bridges at different times. Santa Trinita was destroyed in 1944, during World War II, and rebuilt in 1957.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Ammannati designed what is considered a prototypic Mannerist sculptural ensemble in the Fountain of Neptune (Fontana del Nettuno), prominently located in the Piazza della Signoria in the center of Florence. The assignment was originally given to the aged Bartolommeo Bandinelli; however when Bandinelli died, Ammannati's design, bested the submissions of Benvenuto Cellini and Vincenzo Danti, to gain the commission. From 1563 and 1565, Ammannati and his assistants, among them Giambologna, sculpted the block of marble that had been chosen by Bandinelli. He took Grand Duke Cosimo I as model for Neptune's face. The statue was meant to highlight Cosimo's goal of establishing a Florentine Naval force. The ungainly sea god was placed at the corner of the Palazzo Vecchio within sight of Michelangelo's David statue, and the then 87-year-old sculptor is said to have scoffed at Ammannati— saying that he had ruined a beautiful piece of marble— with the ditty: \"Ammannati, Ammanato, che bel marmo hai rovinato!\" Ammannati continued work on this fountain for a decade, adding around the perimeter a cornucopia of demigod figures: bronze reclining river gods, laughing satyrs and marble sea horses emerging from the water.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In 1550 Ammannati married Laura Battiferri, an elegant poet and an accomplished woman. Later in his life he had a religious crisis, influenced by Counter-Reformation piety, which resulted in condemning his own works depicting nudity, and he left all his possessions to the Jesuits.",
"title": ""
},
{
"paragraph_id": 5,
"text": "He died in Florence in 1592.",
"title": ""
}
] | Bartolomeo Ammannati was an Italian architect and sculptor, born at Settignano, near Florence, Italy. He studied under Baccio Bandinelli and Jacopo Sansovino and closely imitated the style of Michelangelo. He was more distinguished in architecture than in sculpture. He worked in Rome in collaboration with Vignola and Vasari, including on designs for the Villa Giulia, and also on works at Lucca. From 1558 to 1570 he labored on the refurbishment and enlargement of the Pitti Palace, creating the courtyard of three wings with rusticated facades and a lower portico leading to the amphitheatre in the Boboli Gardens. His design mirrored the appearance of the main external façade of the Pitti. He was also named Consul of the Accademia delle Arti del Disegno of Florence, which had been founded by Duke Cosimo I in 1563. In 1569, Ammannati was commissioned to build the Ponte Santa Trinita, a bridge over the Arno River. Its three elliptical arches are very light and elegant, yet the bridge survived the floods that damaged other Arno bridges at various times. The Ponte Santa Trinita was destroyed in 1944, during World War II, and rebuilt in 1957. Ammannati designed what is considered a prototypic Mannerist sculptural ensemble in the Fountain of Neptune, prominently located in the Piazza della Signoria in the center of Florence. The assignment was originally given to the aged Bartolommeo Bandinelli; when Bandinelli died, however, Ammannati's design bested the submissions of Benvenuto Cellini and Vincenzo Danti to gain the commission. Between 1563 and 1565, Ammannati and his assistants, among them Giambologna, sculpted the block of marble that had been chosen by Bandinelli. He took Grand Duke Cosimo I as the model for Neptune's face. The statue was meant to highlight Cosimo's goal of establishing a Florentine naval force. The ungainly sea god was placed at the corner of the Palazzo Vecchio within sight of Michelangelo's David, and the then 87-year-old sculptor is said to have scoffed at Ammannati, saying that he had ruined a beautiful piece of marble, with the ditty: "Ammannati, Ammanato, che bel marmo hai rovinato!" Ammannati continued work on this fountain for a decade, adding around the perimeter a cornucopia of demigod figures: bronze reclining river gods, laughing satyrs and marble sea horses emerging from the water. In 1550 Ammannati married Laura Battiferri, an elegant poet and accomplished woman. Later in his life he had a religious crisis, influenced by Counter-Reformation piety, which led him to condemn his own works depicting nudity, and he left all his possessions to the Jesuits. He died in Florence in 1592. | 2002-02-25T15:51:15Z | 2023-12-24T16:27:25Z | [
"Template:Short description",
"Template:More citations needed",
"Template:Snd",
"Template:EB1911",
"Template:Webarchive",
"Template:Commons category",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Bartolomeo_Ammannati |
4,092 | Bishop | A bishop is an ordained member of the clergy who is entrusted with a position of authority and oversight in a religious institution.
In Christianity, bishops are normally responsible for the governance and administration of dioceses. The role or office of the bishop is called episcopacy. Organizationally, several Christian denominations utilize ecclesiastical structures that call for the position of bishops, while other denominations have dispensed with this office, seeing it as a symbol of power. Bishops have also exercised political authority within their dioceses.
Traditionally, bishops claim apostolic succession, a direct historical lineage dating back to the original Twelve Apostles or Saint Paul. The bishops are by doctrine understood as those who possess the full priesthood given by Jesus Christ, and therefore may ordain other clergy, including other bishops. A person ordained as a deacon, priest (i.e. presbyter), and then bishop is understood to hold the fullness of the ministerial priesthood, given responsibility by Christ to govern, teach and sanctify the Body of Christ (the Church). Priests, deacons and lay ministers co-operate and assist their bishops in pastoral ministry.
Some Pentecostal and other Protestant denominations have bishops who oversee congregations, though they do not claim apostolic succession.
The English term bishop derives from the Greek word ἐπίσκοπος, epískopos, meaning "overseer"; Greek was the language of the early Christian church. However, the term epískopos did not originate in Christianity. In Greek literature, the term had been used for several centuries before the advent of Christianity. It later transformed into the Latin episcopus, Old English biscop, Middle English bisshop and lastly bishop.
In the early Christian era the term was not always clearly distinguished from presbýteros (literally: "elder" or "senior", origin of the modern English word priest), but is used in the sense of the order or office of bishop, distinct from that of presbyter, in the writings attributed to Ignatius of Antioch (died c. 110).
The earliest organization of the Church in Jerusalem was, according to most scholars, similar to that of Jewish synagogues, but it had a council or college of ordained presbyters (πρεσβύτεροι, 'elders'). In Acts 11:30 and Acts 15:22, a collegiate system of government in Jerusalem is chaired by James the Just, according to tradition the first bishop of the city. In Acts 14:23, the Apostle Paul ordains presbyters in churches in Anatolia. The word presbyter was not yet distinguished from overseer (ἐπίσκοπος, episkopos, later used exclusively to mean bishop), as in Acts 20:17, Titus 1:5–7 and 1 Peter 5:1. The earliest writings of the Apostolic Fathers, the Didache and the First Epistle of Clement, for example, show the church used two terms for local church offices—presbyters (seen by many as an interchangeable term with episkopos or overseer) and deacon.
In the First epistle to Timothy and Epistle to Titus in the New Testament a more clearly defined episcopate can be seen. Both letters state that Paul had left Timothy in Ephesus and Titus in Crete to oversee the local church. Paul commands Titus to ordain presbyters/bishops and to exercise general oversight.
Early sources are unclear, but various Christian communities may have had the bishop surrounded by a group or college functioning as leaders of the local churches. Eventually the head or "monarchic" bishop came to rule more clearly, and local churches gradually structured themselves after this model, with one bishop in clearer charge, though the role of the body of presbyters remained important.
Eventually, as Christendom grew, bishops no longer directly served individual congregations. Instead, the metropolitan bishop (the bishop in a large city) appointed priests to minister to each congregation, acting as the bishop's delegates.
Around the end of the 1st century, the church's organization became clearer in historical documents. In the works of the Apostolic Fathers, and Ignatius of Antioch in particular, the role of the episkopos, or bishop, became more important or, rather, was already very important and was being clearly defined. While Ignatius of Antioch offers the earliest clear description of monarchial bishops (a single bishop over all house churches in a city), he is an advocate of a monepiscopal structure rather than describing an accepted reality. To the bishops and house churches to which he writes, he offers strategies on how to pressure house churches who do not recognize the bishop into compliance. Other contemporary Christian writers do not describe monarchial bishops, either continuing to equate them with the presbyters or speaking of episkopoi (bishops, plural) in a city.
As the Church continued to expand, new churches in important cities gained their own bishop. Churches in the regions outside an important city were served by a chorbishop, an official rank of bishop. Soon, however, presbyters and deacons were sent from the bishop of a city church, and gradually priests replaced the chorbishops. Thus, in time, the bishop changed from being the leader of a single church confined to an urban area to being the leader of the churches of a given geographical area.
Clement of Alexandria (end of the 2nd century) writes about the ordination of a certain Zachæus as bishop by the imposition of Simon Peter Bar-Jonah's hands. The words bishop and ordination are used in their technical meaning by the same Clement of Alexandria. The bishops in the 2nd century are defined also as the only clergy to whom the ordination to priesthood (presbyterate) and diaconate is entrusted: "a priest (presbyter) lays on hands, but does not ordain." (cheirothetei ou cheirotonei).
At the beginning of the 3rd century, Hippolytus of Rome describes another feature of the ministry of a bishop: the "Spiritum primatus sacerdotii habere potestatem dimittere peccata", the primacy of the priesthood and the power to forgive sins.
The efficient organization of the Roman Empire became the template for the organisation of the church in the 4th century, particularly after Constantine's Edict of Milan. As the church moved from the shadows of privacy into the public forum it acquired land for churches, burials and clergy. In 391, Theodosius I decreed that any land that had been confiscated from the church by Roman authorities be returned.
The most usual term for the geographic area of a bishop's authority and ministry, the diocese, began as part of the structure of the Roman Empire under Diocletian. As Roman authority began to fail in the western portion of the empire, the church took over much of the civil administration. This can be clearly seen in the ministry of two popes: Pope Leo I in the 5th century, and Pope Gregory I in the 6th century. Both of these men were statesmen and public administrators in addition to their role as Christian pastors, teachers and leaders. In the Eastern churches, latifundia entailed to a bishop's see were much less common, the state power did not collapse the way it did in the West, and thus the tendency of bishops acquiring civil power was much weaker than in the West. However, the role of Western bishops as civil authorities, often called prince bishops, continued throughout much of the Middle Ages.
As well as being Archchancellors of the Holy Roman Empire after the 9th century, bishops generally served as chancellors to medieval monarchs, acting as head of the justiciary and chief chaplain. The Lord Chancellor of England was almost always a bishop up until the dismissal of Cardinal Thomas Wolsey by Henry VIII. Similarly, the position of Kanclerz in the Polish kingdom was always held by a bishop until the 16th century.
In modern times, the principality of Andorra is headed by the two Co-Princes of Andorra, one of whom is the Bishop of Urgell and the other the sitting President of France, an arrangement that began with the Paréage of Andorra (1278) and was ratified in the 1993 constitution of Andorra.
The office of the Papacy is inherently held by the sitting Roman Catholic Bishop of Rome. Though the office was not originally intended to hold temporal authority, the power of the Papacy gradually expanded deep into the secular realm from the Middle Ages onward, and for centuries the Bishop of Rome held the most powerful governmental office in Central Italy. In modern times, the Pope is also the sovereign Prince of Vatican City, an internationally recognized micro-state located entirely within the city of Rome.
In France, prior to the Revolution, representatives of the clergy — in practice, bishops and abbots of the largest monasteries — comprised the First Estate of the Estates-General. This role was abolished after separation of Church and State was implemented during the French Revolution.
In the 21st century, the more senior bishops of the Church of England continue to sit in the House of Lords of the Parliament of the United Kingdom, as representatives of the established church, and are known as Lords Spiritual. The Bishop of Sodor and Man, whose diocese lies outside the United Kingdom, is an ex officio member of the Legislative Council of the Isle of Man. In the past, the Bishop of Durham had extensive vice-regal powers within his northern diocese, which was a county palatine, the County Palatine of Durham, (previously, Liberty of Durham) of which he was ex officio the earl. In the 19th century, a gradual process of reform was enacted, with the majority of the bishop's historic powers vested in The Crown by 1858.
Eastern Orthodox bishops, along with all other members of the clergy, are canonically forbidden to hold political office. Occasional exceptions to this rule are tolerated when the alternative is political chaos. In the Ottoman Empire, the Patriarch of Constantinople, for example, had de facto administrative, cultural and legal jurisdiction, as well as spiritual authority, over all Eastern Orthodox Christians of the empire, as part of the Ottoman millet system. An Orthodox bishop headed the Prince-Bishopric of Montenegro from 1516 to 1852, assisted by a secular guvernadur. More recently, Archbishop Makarios III of Cyprus served as President of Cyprus from 1960 to 1977, an extremely turbulent period in the island's history.
In 2001, Peter Hollingworth, AC, OBE – then the Anglican Archbishop of Brisbane – was controversially appointed Governor-General of Australia. Although Hollingworth gave up his episcopal position to accept the appointment, it still attracted considerable opposition in a country which maintains a formal separation between Church and State.
During the period of the English Civil War, the role of bishops as wielders of political power and as upholders of the established church became a matter of heated political controversy. Presbyterianism was the polity of most Reformed Churches in Europe, and had been favored by many in England since the English Reformation. Since in the primitive church the offices of presbyter and episkopos were not clearly distinguished, many Puritans held that this was the only form of government the church should have. The Anglican divine Richard Hooker objected to this claim in his famous work Of the Laws of Ecclesiastical Polity while, at the same time, defending Presbyterian ordination as valid (in particular Calvin's ordination of Beza). This was the official stance of the English Church until the Commonwealth, during which time the views of Presbyterians and Independents (Congregationalists) were more freely expressed and practiced.
Bishops form the leadership in the Catholic Church, the Eastern Orthodox Church, the Oriental Orthodox Churches, certain Lutheran churches, the Anglican Communion, the Independent Catholic churches, the Independent Anglican churches, and certain other, smaller, denominations.
The traditional role of a bishop is as pastor of a diocese (also called a bishopric, synod, eparchy or see), and so to serve as a "diocesan bishop", or "eparch", as it is called in many Eastern Christian churches. Dioceses vary considerably in size, both geographically and in population. Some dioceses around the Mediterranean Sea which were Christianised early are rather compact, whereas dioceses in areas of rapid modern growth in Christian commitment—as in some parts of Sub-Saharan Africa, South America and the Far East—are much larger and more populous.
As well as traditional diocesan bishops, many churches have a well-developed structure of church leadership that involves a number of layers of authority and responsibility.
In Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, High Church Lutheranism, and Anglicanism, only a bishop can ordain other bishops, priests, and deacons.
In the Eastern liturgical tradition, a priest can celebrate the Divine Liturgy only with the blessing of a bishop. In Byzantine usage, an antimension signed by the bishop is kept on the altar partly as a reminder of whose altar it is and under whose omophorion the priest at a local parish is serving. In Syriac Church usage, a consecrated wooden block called a thabilitho is kept for the same reasons.
The bishop is the ordinary minister of the sacrament of confirmation in the Latin Church, and in the Old Catholic communion only a bishop may administer this sacrament. In the Lutheran and Anglican churches, the bishop normatively administers the rite of confirmation, although in those denominations that do not have an episcopal polity, confirmation is administered by the priest. However, in the Byzantine and other Eastern rites, whether Eastern or Oriental Orthodox or Eastern Catholic, chrismation is done immediately after baptism, and thus the priest is the one who confirms, using chrism blessed by a bishop.
Bishops in all of these communions are ordained by other bishops through the laying on of hands. Ordination of a bishop, and thus continuation of apostolic succession, takes place through a ritual centred on the imposition of hands and prayer.
Catholic, Eastern Orthodox, Oriental Orthodox, Anglican, Old Catholic and some Lutheran bishops claim to be part of the continuous sequence of ordained bishops since the days of the apostles referred to as apostolic succession.
In Scandinavia and the Baltic region, Lutheran churches participating in the Porvoo Communion (those of Iceland, Norway, Sweden, Finland, Estonia, and Lithuania), as well as many non-Porvoo membership Lutheran churches (including those of Kenya, Latvia, and Russia), as well as the confessional Communion of Nordic Lutheran Dioceses, believe that they ordain their bishops in the apostolic succession in lines stemming from the original apostles. The New Westminster Dictionary of Church History states that "In Sweden the apostolic succession was preserved because the Catholic bishops were allowed to stay in office, but they had to approve changes in the ceremonies."
While traditional teaching maintains that any bishop with apostolic succession can validly perform the ordination of another bishop, some churches require that two or three bishops participate, either to ensure sacramental validity or to conform with church law. Catholic doctrine holds that one bishop can validly ordain another priest as a bishop. Though a minimum of three participating bishops is desirable (there are usually several more) in order to demonstrate collegiality, canonically only one bishop is necessary. The practice of ordination by a single bishop was normal in countries where the church was persecuted under Communist rule.
The title of archbishop or metropolitan may be granted to a senior bishop, usually one who is in charge of a large ecclesiastical jurisdiction. He may, or may not, have provincial oversight of suffragan bishops and may possibly have auxiliary bishops assisting him.
Apart from the ordination, which is always done by other bishops, there are different methods as to the actual selection of a candidate for ordination as bishop. In the Catholic Church the Congregation for Bishops generally oversees the selection of new bishops with the approval of the pope. The papal nuncio usually solicits names from the bishops of a country, consults with priests and leading members of the laity, and then selects three to be forwarded to the Holy See. In Europe, some cathedral chapters have the duty of electing bishops. The Eastern Catholic churches generally elect their own bishops. Most Eastern Orthodox churches allow varying amounts of formalised laity or lower clergy influence on the choice of bishops. This also applies in those Eastern churches which are in union with the pope, though it is required that he give assent.
The pope, in addition to being the Bishop of Rome and spiritual head of the Catholic Church, is also the Patriarch of the Latin Church. Each bishop within the Latin Church is answerable directly to the Pope and not to any other bishop, except to metropolitans in certain instances of oversight. The pope previously used the title Patriarch of the West, but this title was dropped from use in 2006, a move which caused some concern within the Eastern Orthodox Communion as, to them, it implied wider papal jurisdiction.
The Catholic Church does recognise as valid (though illicit) ordinations done by breakaway Catholic, Old Catholic or Oriental bishops, and groups descended from them; it also regards as both valid and licit those ordinations done by bishops of the Eastern churches, so long as those receiving the ordination conform to other canonical requirements (for example, being an adult male) and an eastern orthodox rite of episcopal ordination, expressing the proper functions and sacramental status of a bishop, is used; this has given rise to the phenomenon of episcopi vagantes (for example, clergy of the Independent Catholic groups which claim apostolic succession, though this claim is rejected by both Catholicism and Eastern Orthodoxy). With respect to Lutheranism, "the Catholic Church has never officially expressed its judgement on the validity of orders as they have been handed down by episcopal succession in these two national Lutheran churches" (the Evangelical Lutheran Church of Sweden and the Evangelical Lutheran Church of Finland), though it does "question how the ecclesiastical break in the 16th century has affected the apostolicity of the churches of the Reformation and thus the apostolicity of their ministry". Since Pope Leo XIII issued the bull Apostolicae curae in 1896, the Catholic Church has insisted that Anglican orders are invalid because of the Reformed changes in the Anglican ordination rites of the 16th century and divergence in understanding of the theology of priesthood, episcopacy and Eucharist. However, since the 1930s, Utrecht Old Catholic bishops (recognised by the Holy See as validly ordained) have sometimes taken part in the ordination of Anglican bishops. According to the writer Timothy Dufort, by 1969, all Church of England bishops had acquired Old Catholic lines of apostolic succession recognised by the Holy See. This development has been used to argue that the strand of apostolic succession has been re-introduced into Anglicanism, at least within the Church of England. However, other issues, such as the Anglican ordination of women, are at variance with Catholic understanding of Christian teaching, and these have contributed to the reaffirmation of Catholic rejection of Anglican ordinations.
The Eastern Orthodox Churches do not accept the validity of any ordinations performed by the Independent Catholic groups, as Eastern Orthodoxy considers to be spurious any consecration outside the church as a whole. Eastern Orthodoxy considers apostolic succession to exist only within the Universal Church, and not through any authority held by individual bishops; thus, if a bishop ordains someone to serve outside the (Eastern Orthodox) Church, the ceremony is ineffectual, and no ordination has taken place regardless of the ritual used or the ordaining prelate's position within the Eastern Orthodox Churches.
The position of the Catholic Church is slightly different. Whilst it does recognise the validity of the orders of certain groups which separated from communion with the Holy See (for instance, the ordinations of the Old Catholics in communion with Utrecht, as well as the Polish National Catholic Church, which received its orders directly from Utrecht and was until recently part of that communion), Catholicism does not recognise the orders of any group whose teaching is at variance with what it considers the core tenets of Christianity; this is the case even though the clergy of the Independent Catholic groups may use the proper ordination ritual. There are also other reasons why the Holy See does not recognise the validity of the orders of the Independent clergy:
Whilst members of the Independent Catholic movement take seriously the issue of valid orders, it is highly significant that the relevant Vatican Congregations tend not to respond to petitions from Independent Catholic bishops and clergy who seek to be received into communion with the Holy See, hoping to continue in some sacramental role. In those instances where the pope does grant reconciliation, those deemed to be clerics within the Independent Old Catholic movement are invariably admitted as laity and not priests or bishops.
There is a mutual recognition of the validity of orders amongst Catholic, Eastern Orthodox, Old Catholic, Oriental Orthodox and Assyrian Church of the East churches.
Some provinces of the Anglican Communion have begun ordaining women as bishops in recent decades – for example, England, Ireland, Scotland, Wales, the United States, Australia, New Zealand, Canada and Cuba. The first woman to be consecrated a bishop within Anglicanism was Barbara Harris, who was ordained in the United States in 1989. In 2006, Katharine Jefferts Schori, the Episcopal Bishop of Nevada, became the first woman to become the presiding bishop of the Episcopal Church.
In the Evangelical Lutheran Church in America (ELCA) and the Evangelical Lutheran Church in Canada (ELCIC), the largest Lutheran Church bodies in the United States and Canada, respectively, and roughly based on the Nordic Lutheran national churches (similar to that of the Church of England), bishops are elected by Synod Assemblies, consisting of both lay members and clergy, for a term of six years, which can be renewed, depending upon the local synod's "constitution" (which is mirrored on either the ELCA or ELCIC's national constitution). Since the implementation of concordats between the ELCA and the Episcopal Church of the United States and the ELCIC and the Anglican Church of Canada, all bishops, including the presiding bishop (ELCA) or the national bishop (ELCIC), have been consecrated using the historic succession in line with bishops from the Evangelical Lutheran Church of Sweden, with at least one Anglican bishop serving as co-consecrator.
Since going into ecumenical communion with their respective Anglican body, bishops in the ELCA or the ELCIC not only approve the "rostering" of all ordained pastors, diaconal ministers, and associates in ministry, but they serve as the principal celebrant of all pastoral ordination and installation ceremonies and diaconal consecration ceremonies, as well as serving as the "chief pastor" of the local synod, upholding the teachings of Martin Luther as well as the documents of the Ninety-Five Theses and the Augsburg Confession. Unlike their counterparts in the United Methodist Church, ELCA and ELCIC synod bishops do not appoint pastors to local congregations (pastors, like their counterparts in the Episcopal Church, are called by local congregations). The presiding bishop of the ELCA and the national bishop of the ELCIC, the national bishops of their respective bodies, are elected for a single 6-year term and may be elected to an additional term.
Although ELCA agreed with the Episcopal Church to limit ordination to the bishop "ordinarily", ELCA pastor-ordinators are given permission to perform the rites in "extraordinary" circumstances. In practice, "extraordinary" circumstances have included disagreeing with Episcopalian views of the episcopate, and as a result, ELCA pastors ordained by other pastors are not permitted to be deployed to Episcopal Churches (they can, however, serve in Presbyterian Church USA, United Methodist Church, Reformed Church in America, and Moravian Church congregations, as the ELCA is in full communion with these denominations). The Lutheran Church–Missouri Synod (LCMS) and the Wisconsin Evangelical Lutheran Synod (WELS), the second and third largest Lutheran bodies in the United States and the two largest Confessional Lutheran bodies in North America, do not follow an episcopal form of governance, settling instead on a form of quasi-congregationalism patterned on what they believe to be the practice of the early church. The second largest of the three predecessor bodies of the ELCA, the American Lutheran Church, was a congregationalist body, with national and synod presidents before they were re-titled as bishops (borrowing from the Lutheran churches in Germany) in the 1980s. With regard to ecclesial discipline and oversight, national and synod presidents typically function similarly to bishops in episcopal bodies.
In the African Methodist Episcopal Church, "Bishops are the Chief Officers of the Connectional Organization. They are elected for life by a majority vote of the General Conference which meets every four years."
In the Christian Methodist Episcopal Church in the United States, bishops are administrative superintendents of the church; they are elected by "delegate" votes and serve until the mandatory retirement age of 74. Among their duties are responsibility for appointing clergy to serve local churches as pastor, for performing ordinations, and for safeguarding the doctrine and discipline of the church. The General Conference, a meeting every four years, has an equal number of clergy and lay delegates. In each Annual Conference, CME bishops serve for four-year terms. CME Church bishops may be male or female.
In the United Methodist Church (the largest branch of Methodism in the world) bishops serve as administrative and pastoral superintendents of the church. They are elected for life from among the ordained elders (presbyters) by vote of the delegates in regional (called jurisdictional) conferences, and are consecrated by the other bishops present at the conference through the laying on of hands. In the United Methodist Church bishops remain members of the "Order of Elders" while being consecrated to the "Office of the Episcopacy". Within the United Methodist Church only bishops are empowered to consecrate bishops and ordain clergy. Among their most critical duties are the ordination and appointment of clergy to serve local churches as pastor, presiding at sessions of the Annual, Jurisdictional, and General Conferences, providing pastoral ministry for the clergy under their charge, and safeguarding the doctrine and discipline of the church. Furthermore, individual bishops, or the Council of Bishops as a whole, often serve a prophetic role, making statements on important social issues and setting forth a vision for the denomination, though they have no legislative authority of their own. In all of these areas, bishops of the United Methodist Church function very much in the historic meaning of the term. According to the Book of Discipline of the United Methodist Church, a bishop's responsibilities are:
Leadership.—Spiritual and Temporal—
Presidential Duties.—1. To preside in the General, Jurisdictional, Central, and Annual Conferences. 2. To form the districts after consultation with the district superintendents and after the number of the same has been determined by vote of the Annual Conference. 3. To appoint the district superintendents annually (¶¶ 517–518). 4. To consecrate bishops, to ordain elders and deacons, to consecrate diaconal ministers, to commission deaconesses and home missionaries, and to see that the names of the persons commissioned and consecrated are entered on the journals of the conference and that proper credentials are furnished to these persons.
Working with Ministers.—1. To make and fix the appointments in the Annual Conferences, Provisional Annual Conferences, and Missions as the Discipline may direct (¶¶ 529–533).
2. To divide or to unite a circuit(s), station(s), or mission(s) as judged necessary for missionary strategy and then to make appropriate appointments. 3. To read the appointments of deaconesses, diaconal ministers, lay persons in service under the World Division of the General Board of Global Ministries, and home missionaries. 4. To fix the Charge Conference membership of all ordained ministers appointed to ministries other than the local church in keeping with ¶443.3. 5. To transfer, upon the request of the receiving bishop, ministerial member(s) of one Annual Conference to another, provided said member(s) agrees to transfer; and to send immediately to the secretaries of both conferences involved, to the conference Boards of Ordained Ministry, and to the clearing house of the General Board of Pensions written notices of the transfer of members and of their standing in the course of study if they are undergraduates.
In each Annual Conference, United Methodist bishops serve for four-year terms, and may serve up to three terms before either retirement or appointment to a new Conference. United Methodist bishops may be male or female, with Marjorie Matthews being the first woman to be consecrated a bishop in 1980.
The collegial expression of episcopal leadership in the United Methodist Church is known as the Council of Bishops. The Council of Bishops speaks to the church and through the church into the world and gives leadership in the quest for Christian unity and interreligious relationships. The Conference of Methodist Bishops includes the United Methodist Council of Bishops plus bishops from affiliated autonomous Methodist or United Churches.
John Wesley consecrated Thomas Coke a "General Superintendent", and directed that Francis Asbury also be consecrated for the United States of America in 1784, where the Methodist Episcopal Church first became a separate denomination apart from the Church of England. Coke soon returned to England, but Asbury was the primary builder of the new church. At first he did not call himself bishop, but eventually submitted to the usage by the denomination.
Notable bishops in United Methodist history include Coke, Asbury, Richard Whatcoat, Philip William Otterbein, Martin Boehm, Jacob Albright, John Seybert, Matthew Simpson, John S. Stamm, William Ragsdale Cannon, Marjorie Matthews, Leontine T. Kelly, William B. Oden, Ntambo Nkulu Ntanda, Joseph Sprague, William Henry Willimon, and Thomas Bickerton.
In the Church of Jesus Christ of Latter-day Saints, the Bishop is the leader of a local congregation, called a ward. As with most LDS priesthood holders, the bishop is a part-time lay minister and earns a living through other employment. As such, it is his duty to preside, call local leaders, and judge the worthiness of members for certain activities. The bishop does not deliver sermons at every service (generally asking members to do so), but is expected to be a spiritual guide for his congregation. It is therefore believed that he has both the right and ability to receive divine inspiration (through the Holy Spirit) for the ward under his direction. Because it is a part-time position, all able members are expected to assist in the management of the ward by holding delegated lay positions (for example, women's and youth leaders, teachers) referred to as callings. The bishop is especially responsible for leading the youth, in connection with the fact that a bishop is the president of the Aaronic priesthood in his ward (and is thus a form of Mormon Kohen). Although members are asked to confess serious sins to him, unlike the Catholic Church, he is not the instrument of divine forgiveness, but merely a guide through the repentance process (and a judge in case transgressions warrant excommunication or other official discipline). The bishop is also responsible for the physical welfare of the ward, and thus collects tithing and fast offerings and distributes financial assistance where needed.
A literal descendant of Aaron has "legal right" to act as a bishop after being found worthy and ordained by the First Presidency. In the absence of a literal descendant of Aaron, a high priest in the Melchizedek priesthood is called to be a bishop. Each bishop is selected from resident members of the ward by the stake presidency with approval of the First Presidency, and chooses two counselors to form a bishopric. A priesthood holder called as bishop must be ordained a high priest if he is not already one, unlike the similar function of branch president. In special circumstances (such as a ward consisting entirely of young university students), a bishop may be chosen from outside the ward. Traditionally, bishops are married, though this is not always the case. A bishop is typically released after about five years and a new bishop is called to the position. Although the former bishop is released from his duties, he continues to hold the Aaronic priesthood office of bishop. Church members frequently refer to a former bishop as "Bishop" as a sign of respect and affection.
Latter-day Saint bishops do not wear any special clothing or insignia the way clergy in many other churches do, but are expected to dress and groom themselves neatly and conservatively per their local culture, especially when performing official duties. Bishops (as well as other members of the priesthood) can trace their line of authority back to Joseph Smith, who, according to church doctrine, was ordained to lead the church in modern times by the ancient apostles Peter, James, and John, who were ordained to lead the Church by Jesus Christ.
At the global level, the presiding bishop oversees the temporal affairs (buildings, properties, commercial corporations, and so on) of the worldwide church, including the church's massive global humanitarian aid and social welfare programs. The presiding bishop has two counselors; the three together form the presiding bishopric. As opposed to ward bishoprics, where the counselors do not hold the office of bishop, all three men in the presiding bishopric hold the office of bishop, and thus the counselors, as with the presiding bishop, are formally referred to as "Bishop".
The New Apostolic Church (NAC) recognizes three classes of ministries: Deacons, Priests and Apostles. The Apostles, who are all included in the apostolate with the Chief Apostle as head, are the highest ministries.
Of the several kinds of priestly ministries, the bishop is the highest. Nearly all bishops are appointed directly by the chief apostle. They support and help their superior apostle.
In the Church of God in Christ (COGIC), the ecclesiastical structure is composed of large dioceses that are called "jurisdictions", each under the authority of a bishop, sometimes called a "state bishop". Jurisdictions may consist of large geographical regions of churches, or of churches grouped and organized together as their own separate jurisdictions because of similar affiliations, regardless of geographical location or dispersion. Each U.S. state has at least one jurisdiction, and some have several; each jurisdiction is usually composed of between 30 and 100 churches. Each jurisdiction is in turn broken down into several districts, smaller groups of churches (grouped either by geographical situation or by similar affiliations), each under the authority of a district superintendent who answers to the jurisdictional/state bishop. There are currently over 170 jurisdictions in the United States and over 30 jurisdictions in other countries. According to the COGIC Manual, the bishops of each jurisdiction are considered the modern-day equivalents of the early apostles and overseers of the New Testament church; as the highest-ranking clergymen in the COGIC, they are tasked with being the head overseers of all religious, civil, and economic ministries and protocol for the denomination. They also have the authority to appoint and ordain local pastors, elders, ministers, and reverends within the denomination. Collectively, the bishops of the COGIC are called the Board of Bishops. Every four years, the General Assembly of the COGIC (the body of clergy and lay delegates responsible for making and enforcing the bylaws of the denomination) elects twelve bishops from the Board of Bishops to serve as the General Board of the church; these twelve work alongside the delegates of the General Assembly and the Board of Bishops to administer the denomination as its head executive leaders. One of the twelve bishops of the General Board is also elected the presiding bishop of the church, and two others are appointed by the presiding bishop himself as his first and second assistant presiding bishops.
Bishops in the Church of God in Christ usually wear black clergy suits consisting of a black suit blazer, black pants, a purple or scarlet clergy shirt and a white clerical collar; this ensemble is usually referred to as "Class B Civic attire". Bishops in COGIC also typically wear Anglican choir dress-style vestments: a long purple or scarlet chimere, cuffs, and tippet worn over a long white rochet, with a gold pectoral cross worn around the neck with the tippet. This is usually referred to as "Class A Ceremonial attire". The bishops of COGIC alternate between Class A Ceremonial attire and Class B Civic attire depending on the protocol of the religious services and other events they have to attend.
In the polity of the Church of God (Cleveland, Tennessee), the international leader is the presiding bishop, and the members of the executive committee are executive bishops. Collectively, they supervise and appoint national and state leaders across the world. Leaders of individual states and regions are administrative bishops, who have jurisdiction over local churches in their respective states and are vested with appointment authority for local pastorates. All ministers are credentialed at one of three levels of licensure, the most senior of which is the rank of ordained bishop. To be eligible to serve in state, national, or international positions of authority, a minister must hold the rank of ordained bishop.
In 2002, the general convention of the Pentecostal Church of God reached a consensus to change the title of its overseer from general superintendent to bishop. The change was made because, internationally, the term bishop is more commonly associated with religious leaders than the previous title was.
The title bishop is used for both the general (international) leader and the district (state) leaders. The title is sometimes used in conjunction with the previous one, thus becoming general (or district) superintendent/bishop.
According to the Seventh-day Adventist understanding of the doctrine of the church:
"The "elders" (Greek, presbuteros) or "bishops" (episkopos) were the most important officers of the church. The term elder means older one, implying dignity and respect. His position was similar to that of the one who had supervision of the synagogue. The term bishop means "overseer". Paul used these terms interchangeably, equating elders with overseers or bishops (Acts 20:17,28; Titus 1:5, 7).
"Those who held this position supervised the newly formed churches. Elder referred to the status or rank of the office, while bishop denoted the duty or responsibility of the office—"overseer". Since the apostles also called themselves elders (1 Peter 5:1; 2 John 1; 3 John 1), it is apparent that there were both local elders and itinerant elders, or elders at large. But both kinds of elder functioned as shepherds of the congregations."
The above understanding is part of the basis of Adventist organizational structure. The worldwide Seventh-day Adventist Church is organized into local districts, conferences or missions, union conferences or union missions, divisions, and finally, at the top, the General Conference. At each level (with the exception of the local districts), an elder is elected president, and a group of elders serves on the executive committee with the elected president. Those who have been elected president are in effect the "bishop", while never actually carrying the title or being ordained as such, because the term is usually associated with the episcopal style of church governance most often found in Catholic, Anglican, Methodist and some Pentecostal/Charismatic circles.
Some Baptists have also begun taking on the title of bishop. In some smaller Protestant denominations and independent churches, the term bishop is used in the same way as pastor, to refer to the leader of the local congregation; the office may be held by men or women. This usage is especially common in African-American churches in the US.
In the Church of Scotland, which has a Presbyterian church structure, the word "bishop" refers to an ordained person, usually a normal parish minister, who has temporary oversight of a trainee minister. In the Presbyterian Church (USA), the term bishop is an expressive name for a Minister of Word and Sacrament who serves a congregation and exercises "the oversight of the flock of Christ." The term is traceable to the 1789 Form of Government of the PC (USA) and the Presbyterian understanding of the pastoral office.
Though not considered an orthodox Christian body, the Ecclesia Gnostica Catholica uses roles and titles derived from Christianity for its clerical hierarchy, including bishops who have much the same authority and responsibilities as their counterparts in Catholicism.
The Salvation Army does not have bishops but has appointed leaders of geographical areas, known as Divisional Commanders. Larger geographical areas, called Territories, are led by a Territorial Commander, who is the highest-ranking officer in that Territory.
Jehovah's Witnesses do not use the title 'Bishop' within their organizational structure, but appoint elders to serve as overseers within their congregations.
The Batak Christian Protestant Church of Indonesia, the most prominent Protestant denomination in Indonesia, uses the term Ephorus instead of bishop.
In the Vietnamese syncretist religion of Caodaism, bishops (giáo sư) comprise the fifth of nine hierarchical levels, and are responsible for spiritual and temporal education as well as record-keeping and ceremonies in their parishes. At any one time there are seventy-two bishops. Their authority is described in Section I of the text Tân Luật (revealed through seances in December 1926). Caodai bishops wear robes and headgear of embroidered silk depicting the Divine Eye and the Eight Trigrams. (The color varies according to branch.) This is the full ceremonial dress; the simple version consists of a seven-layered turban.
Traditionally, a number of items are associated with the office of a bishop, most notably the mitre and the crosier. Other vestments and insignia vary between Eastern and Western Christianity.
In the Latin Rite of the Catholic Church, the choir dress of a bishop includes the purple cassock with amaranth trim, rochet, purple zucchetto (skull cap), purple biretta, and pectoral cross. The cappa magna may be worn, but only within the bishop's own diocese and on especially solemn occasions. The mitre, zucchetto, and stole are generally worn by bishops when presiding over liturgical functions. For liturgical functions other than the Mass the bishop typically wears the cope. Within his own diocese and when celebrating solemnly elsewhere with the consent of the local ordinary, he also uses the crosier. When celebrating Mass, a bishop, like a priest, wears the chasuble. The Caeremoniale Episcoporum recommends, but does not impose, that in solemn celebrations a bishop should also wear a dalmatic, which can always be white, beneath the chasuble, especially when administering the sacrament of holy orders, blessing an abbot or abbess, and dedicating a church or an altar. The Caeremoniale Episcoporum no longer makes mention of episcopal gloves, episcopal sandals, liturgical stockings (also known as buskins), or the accoutrements that it once prescribed for the bishop's horse. The coat of arms of a Latin Church Catholic bishop usually displays a galero with a cross and crosier behind the escutcheon; the specifics differ by location and ecclesiastical rank (see Ecclesiastical heraldry).
Anglican bishops generally make use of the mitre, crosier, ecclesiastical ring, purple cassock, purple zucchetto, and pectoral cross. However, the traditional choir dress of Anglican bishops retains its late mediaeval form, and looks quite different from that of their Catholic counterparts; it consists of a long rochet which is worn with a chimere.
In the Eastern Churches (Eastern Orthodox, Eastern Rite Catholic) a bishop will wear the mandyas, panagia (and perhaps an enkolpion), sakkos, omophorion and an Eastern-style mitre. Eastern bishops do not normally wear an episcopal ring; the faithful kiss (or, alternatively, touch their forehead to) the bishop's hand. To seal official documents, he will usually use an inked stamp. An Eastern bishop's coat of arms will normally display an Eastern-style mitre, cross, Eastern-style crosier and a red and white (or red and gold) mantle. The arms of Oriental Orthodox bishops will display the episcopal insignia (mitre or turban) specific to their own liturgical traditions. Variations occur based upon jurisdiction and national customs.
In Catholic, Eastern Orthodox, Oriental Orthodox, Lutheran and Anglican cathedrals there is a special chair set aside for the exclusive use of the bishop. This is the bishop's cathedra and is often called the throne. In some Christian denominations, for example, the Anglican Communion, parish churches may maintain a chair for the use of the bishop when he visits; this is to signify the parish's union with the bishop.
The leader of the Buddhist Churches of America (BCA) is its bishop. The Japanese title for the bishop of the BCA is sochō, although the English title is favored over the Japanese. The BCA has kept many other Buddhist terms in their original language (terms such as sangha and dana), but some words, including sochō, have been translated into English.
Between 1899 and 1944, the BCA was known as the Buddhist Mission of North America. Its leader was called kantoku (superintendent/director) between 1899 and 1918; in 1918 the kantoku was elevated to bishop (sochō). However, according to George J. Tanabe, the title "bishop" was in practice already used by Hawaiian Shin Buddhists (in the Honpa Hongwanji Mission of Hawaii) even when the official title was kantoku.
Bishops are also present in other Japanese Buddhist organizations. Higashi Hongan-ji's North American District, Honpa Honganji Mission of Hawaii, Jodo Shinshu Buddhist Temples of Canada, a Jodo Shu temple in Los Angeles, the Shingon temple Koyasan Buddhist Temple, Sōtō Mission in Hawai‘i (a Soto Zen Buddhist institution), and the Sōtō Zen Buddhist Community of South America (Comunidade Budista Sōtō Zenshū da América do Sul) all have or have had leaders with the title bishop. As for the Sōtō Zen Buddhist Community of South America, the Japanese title is sōkan, but the leader is in practice referred to as "bishop".
Tenrikyo is a Japanese New Religion with influences from both Shinto and Buddhism. The leader of the Tenrikyo North American Mission has the title of bishop. | [
{
"paragraph_id": 0,
"text": "A bishop is an ordained member of the clergy who is entrusted with a position of authority and oversight in a religious institution.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In Christianity, bishops are normally responsible for the governance and administration of dioceses. The role or office of the bishop is called episcopacy. Organizationally, several Christian denominations utilize ecclesiastical structures that call for the position of bishops, while other denominations have dispensed with this office, seeing it as a symbol of power. Bishops have also exercised political authority within their dioceses.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Traditionally, bishops claim apostolic succession, a direct historical lineage dating back to the original Twelve Apostles or Saint Paul. The bishops are by doctrine understood as those who possess the full priesthood given by Jesus Christ, and therefore may ordain other clergy, including other bishops. A person ordained as a deacon, priest (i.e. presbyter), and then bishop is understood to hold the fullness of the ministerial priesthood, given responsibility by Christ to govern, teach and sanctify the Body of Christ (the Church). Priests, deacons and lay ministers co-operate and assist their bishops in pastoral ministry.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Some Pentecostal and other Protestant denominations have bishops who oversee congregations, though they do not claim apostolic succession.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The English term bishop derives from the Greek word ἐπίσκοπος, epískopos, meaning \"overseer\"; Greek was the language of the early Christian church. However, the term epískopos did not originate in Christianity. In Greek literature, the term had been used for several centuries before the advent of Christianity. It later transformed into the Latin episcopus, Old English biscop, Middle English bisshop and lastly bishop.",
"title": "Terminology"
},
{
"paragraph_id": 5,
"text": "In the early Christian era the term was not always clearly distinguished from presbýteros (literally: \"elder\" or \"senior\", origin of the modern English word priest), but is used in the sense of the order or office of bishop, distinct from that of presbyter, in the writings attributed to Ignatius of Antioch (died c. 110).",
"title": "Terminology"
},
{
"paragraph_id": 6,
"text": "The earliest organization of the Church in Jerusalem was, according to most scholars, similar to that of Jewish synagogues, but it had a council or college of ordained presbyters (πρεσβύτεροι, 'elders'). In Acts 11:30 and Acts 15:22, a collegiate system of government in Jerusalem is chaired by James the Just, according to tradition the first bishop of the city. In Acts 14:23, the Apostle Paul ordains presbyters in churches in Anatolia. The word presbyter was not yet distinguished from overseer (ἐπίσκοπος, episkopos, later used exclusively to mean bishop), as in Acts 20:17, Titus 1:5–7 and 1 Peter 5:1. The earliest writings of the Apostolic Fathers, the Didache and the First Epistle of Clement, for example, show the church used two terms for local church offices—presbyters (seen by many as an interchangeable term with episkopos or overseer) and deacon.",
"title": "History in Christianity"
},
{
"paragraph_id": 7,
"text": "In the First epistle to Timothy and Epistle to Titus in the New Testament a more clearly defined episcopate can be seen. Both letters state that Paul had left Timothy in Ephesus and Titus in Crete to oversee the local church. Paul commands Titus to ordain presbyters/bishops and to exercise general oversight.",
"title": "History in Christianity"
},
{
"paragraph_id": 8,
"text": "Early sources are unclear but various groups of Christian communities may have had the bishop surrounded by a group or college functioning as leaders of the local churches. Eventually the head or \"monarchic\" bishop came to rule more clearly, and all local churches would eventually follow the example of the other churches and structure themselves after the model of the others with the one bishop in clearer charge, though the role of the body of presbyters remained important.",
"title": "History in Christianity"
},
{
"paragraph_id": 9,
"text": "Eventually, as Christendom grew, bishops no longer directly served individual congregations. Instead, the metropolitan bishop (the bishop in a large city) appointed priests to minister each congregation, acting as the bishop's delegate.",
"title": "History in Christianity"
},
{
"paragraph_id": 10,
"text": "Around the end of the 1st century, the church's organization became clearer in historical documents. In the works of the Apostolic Fathers, and Ignatius of Antioch in particular, the role of the episkopos, or bishop, became more important or, rather, already was very important and being clearly defined. While Ignatius of Antioch offers the earliest clear description of monarchial bishops (a single bishop over all house churches in a city) he is an advocate of monepiscopal structure rather than describing an accepted reality. To the bishops and house churches to which he writes, he offers strategies on how to pressure house churches who do not recognize the bishop into compliance. Other contemporary Christian writers do not describe monarchial bishops, either continuing to equate them with the presbyters or speaking of episkopoi (bishops, plural) in a city.",
"title": "History in Christianity"
},
{
"paragraph_id": 11,
"text": "As the Church continued to expand, new churches in important cities gained their own bishop. Churches in the regions outside an important city were served by Chorbishop, an official rank of bishops. However, soon, presbyters and deacons were sent from the bishop of a city church. Gradually, priests replaced the chorbishops. Thus, in time, the bishop changed from being the leader of a single church confined to an urban area to being the leader of the churches of a given geographical area.",
"title": "History in Christianity"
},
{
"paragraph_id": 12,
"text": "Clement of Alexandria (end of the 2nd century) writes about the ordination of a certain Zachæus as bishop by the imposition of Simon Peter Bar-Jonah's hands. The words bishop and ordination are used in their technical meaning by the same Clement of Alexandria. The bishops in the 2nd century are defined also as the only clergy to whom the ordination to priesthood (presbyterate) and diaconate is entrusted: \"a priest (presbyter) lays on hands, but does not ordain.\" (cheirothetei ou cheirotonei).",
"title": "History in Christianity"
},
{
"paragraph_id": 13,
"text": "At the beginning of the 3rd century, Hippolytus of Rome describes another feature of the ministry of a bishop, which is that of the \"Spiritum primatus sacerdotii habere potestatem dimittere peccata\": the primate of sacrificial priesthood and the power to forgive sins.",
"title": "History in Christianity"
},
{
"paragraph_id": 14,
"text": "The efficient organization of the Roman Empire became the template for the organisation of the church in the 4th century, particularly after Constantine's Edict of Milan. As the church moved from the shadows of privacy into the public forum it acquired land for churches, burials and clergy. In 391, Theodosius I decreed that any land that had been confiscated from the church by Roman authorities be returned.",
"title": "Christian bishops and civil government"
},
{
"paragraph_id": 15,
"text": "The most usual term for the geographic area of a bishop's authority and ministry, the diocese, began as part of the structure of the Roman Empire under Diocletian. As Roman authority began to fail in the western portion of the empire, the church took over much of the civil administration. This can be clearly seen in the ministry of two popes: Pope Leo I in the 5th century, and Pope Gregory I in the 6th century. Both of these men were statesmen and public administrators in addition to their role as Christian pastors, teachers and leaders. In the Eastern churches, latifundia entailed to a bishop's see were much less common, the state power did not collapse the way it did in the West, and thus the tendency of bishops acquiring civil power was much weaker than in the West. However, the role of Western bishops as civil authorities, often called prince bishops, continued throughout much of the Middle Ages.",
"title": "Christian bishops and civil government"
},
{
"paragraph_id": 16,
"text": "As well as being Archchancellors of the Holy Roman Empire after the 9th century, bishops generally served as chancellors to medieval monarchs, acting as head of the justiciary and chief chaplain. The Lord Chancellor of England was almost always a bishop up until the dismissal of Cardinal Thomas Wolsey by Henry VIII. Similarly, the position of Kanclerz in the Polish kingdom was always held by a bishop until the 16th century.",
"title": "Christian bishops and civil government"
},
{
"paragraph_id": 17,
"text": "In modern times, the principality of Andorra is headed by Co-Princes of Andorra, one of whom is the Bishop of Urgell and the other, the sitting President of France, an arrangement that began with the Paréage of Andorra (1278), and was ratified in the 1993 constitution of Andorra.",
"title": "Christian bishops and civil government"
},
{
"paragraph_id": 18,
"text": "The office of the Papacy is inherently held by the sitting Roman Catholic Bishop of Rome. Though not originally intended to hold temporal authority, since the Middle Ages the power of the Papacy gradually expanded deep into the secular realm and for centuries the sitting Bishop of Rome was the most powerful governmental office in Central Italy. In modern times, the Pope is also the sovereign Prince of Vatican City, an internationally recognized micro-state located entirely within the city of Rome.",
"title": "Christian bishops and civil government"
},
{
"paragraph_id": 19,
"text": "In France, prior to the Revolution, representatives of the clergy — in practice, bishops and abbots of the largest monasteries — comprised the First Estate of the Estates-General. This role was abolished after separation of Church and State was implemented during the French Revolution.",
"title": "Christian bishops and civil government"
},
{
"paragraph_id": 20,
"text": "In the 21st century, the more senior bishops of the Church of England continue to sit in the House of Lords of the Parliament of the United Kingdom, as representatives of the established church, and are known as Lords Spiritual. The Bishop of Sodor and Man, whose diocese lies outside the United Kingdom, is an ex officio member of the Legislative Council of the Isle of Man. In the past, the Bishop of Durham had extensive vice-regal powers within his northern diocese, which was a county palatine, the County Palatine of Durham, (previously, Liberty of Durham) of which he was ex officio the earl. In the 19th century, a gradual process of reform was enacted, with the majority of the bishop's historic powers vested in The Crown by 1858.",
"title": "Christian bishops and civil government"
},
{
"paragraph_id": 21,
"text": "Eastern Orthodox bishops, along with all other members of the clergy, are canonically forbidden to hold political office. Occasional exceptions to this rule are tolerated when the alternative is political chaos. In the Ottoman Empire, the Patriarch of Constantinople, for example, had de facto administrative, cultural and legal jurisdiction, as well as spiritual authority, over all Eastern Orthodox Christians of the empire, as part of the Ottoman millet system. An Orthodox bishop headed the Prince-Bishopric of Montenegro from 1516 to 1852, assisted by a secular guvernadur. More recently, Archbishop Makarios III of Cyprus, served as President of the Cyprus from 1960 to 1977, an extremely turbulent time period on the island.",
"title": "Christian bishops and civil government"
},
{
"paragraph_id": 22,
"text": "In 2001, Peter Hollingworth, AC, OBE – then the Anglican Archbishop of Brisbane – was controversially appointed Governor-General of Australia. Although Hollingworth gave up his episcopal position to accept the appointment, it still attracted considerable opposition in a country which maintains a formal separation between Church and State.",
"title": "Christian bishops and civil government"
},
{
"paragraph_id": 23,
"text": "During the period of the English Civil War, the role of bishops as wielders of political power and as upholders of the established church became a matter of heated political controversy. Presbyterianism was the polity of most Reformed Churches in Europe, and had been favored by many in England since the English Reformation. Since in the primitive church the offices of presbyter and episkopos were not clearly distinguished, many Puritans held that this was the only form of government the church should have. The Anglican divine, Richard Hooker, objected to this claim in his famous work Of the Laws of Ecclesiastic Polity while, at the same time, defending Presbyterian ordination as valid (in particular Calvin's ordination of Beza). This was the official stance of the English Church until the Commonwealth, during which time, the views of Presbyterians and Independents (Congregationalists) were more freely expressed and practiced.",
"title": "Christian bishops and civil government"
},
{
"paragraph_id": 24,
"text": "Bishops form the leadership in the Catholic Church, the Eastern Orthodox Church, the Oriental Orthodox Churches, certain Lutheran churches, the Anglican Communion, the Independent Catholic churches, the Independent Anglican churches, and certain other, smaller, denominations.",
"title": "Christian churches"
},
{
"paragraph_id": 25,
"text": "The traditional role of a bishop is as pastor of a diocese (also called a bishopric, synod, eparchy or see), and so to serve as a \"diocesan bishop\", or \"eparch\" as it is called in many Eastern Christian churches. Dioceses vary considerably in size, geographically and population-wise. Some dioceses around the Mediterranean Sea which were Christianised early are rather compact, whereas dioceses in areas of rapid modern growth in Christian commitment—as in some parts of Sub-Saharan Africa, South America and the Far East—are much larger and more populous.",
"title": "Christian churches"
},
{
"paragraph_id": 26,
"text": "As well as traditional diocesan bishops, many churches have a well-developed structure of church leadership that involves a number of layers of authority and responsibility.",
"title": "Christian churches"
},
{
"paragraph_id": 27,
"text": "In Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, High Church Lutheranism, and Anglicanism, only a bishop can ordain other bishops, priests, and deacons.",
"title": "Christian churches"
},
{
"paragraph_id": 28,
"text": "In the Eastern liturgical tradition, a priest can celebrate the Divine Liturgy only with the blessing of a bishop. In Byzantine usage, an antimension signed by the bishop is kept on the altar partly as a reminder of whose altar it is and under whose omophorion the priest at a local parish is serving. In Syriac Church usage, a consecrated wooden block called a thabilitho is kept for the same reasons.",
"title": "Christian churches"
},
{
"paragraph_id": 29,
"text": "The bishop is the ordinary minister of the sacrament of confirmation in the Latin Church, and in the Old Catholic communion only a bishop may administer this sacrament. In the Lutheran and Anglican churches, the bishop normatively administers the rite of confirmation, although in those denominations that do not have an episcopal polity, confirmation is administered by the priest. However, in the Byzantine and other Eastern rites, whether Eastern or Oriental Orthodox or Eastern Catholic, chrismation is done immediately after baptism, and thus the priest is the one who confirms, using chrism blessed by a bishop.",
"title": "Christian churches"
},
{
"paragraph_id": 30,
"text": "Bishops in all of these communions are ordained by other bishops through the laying on of hands. Ordination of a bishop, and thus continuation of apostolic succession, takes place through a ritual centred on the imposition of hands and prayer.",
"title": "Christian churches"
},
{
"paragraph_id": 31,
"text": "Catholic, Eastern Orthodox, Oriental Orthodox, Anglican, Old Catholic and some Lutheran bishops claim to be part of the continuous sequence of ordained bishops since the days of the apostles referred to as apostolic succession.",
"title": "Christian churches"
},
{
"paragraph_id": 32,
"text": "In Scandinavia and the Baltic region, Lutheran churches participating in the Porvoo Communion (those of Iceland, Norway, Sweden, Finland, Estonia, and Lithuania), as well as many non-Porvoo membership Lutheran churches (including those of Kenya, Latvia, and Russia), as well as the confessional Communion of Nordic Lutheran Dioceses, believe that they ordain their bishops in the apostolic succession in lines stemming from the original apostles. The New Westminster Dictionary of Church History states that \"In Sweden the apostolic succession was preserved because the Catholic bishops were allowed to stay in office, but they had to approve changes in the ceremonies.\"",
"title": "Christian churches"
},
{
"paragraph_id": 33,
"text": "While traditional teaching maintains that any bishop with apostolic succession can validly perform the ordination of another bishop, some churches require two or three bishops participate, either to ensure sacramental validity or to conform with church law. Catholic doctrine holds that one bishop can validly ordain another (priest) as a bishop. Though a minimum of three bishops participating is desirable (there are usually several more) in order to demonstrate collegiality, canonically only one bishop is necessary. The practice of only one bishop ordaining was normal in countries where the church was persecuted under Communist rule.",
"title": "Christian churches"
},
{
"paragraph_id": 34,
"text": "The title of archbishop or metropolitan may be granted to a senior bishop, usually one who is in charge of a large ecclesiastical jurisdiction. He may, or may not, have provincial oversight of suffragan bishops and may possibly have auxiliary bishops assisting him.",
"title": "Christian churches"
},
{
"paragraph_id": 35,
"text": "Apart from the ordination, which is always done by other bishops, there are different methods as to the actual selection of a candidate for ordination as bishop. In the Catholic Church the Congregation for Bishops generally oversees the selection of new bishops with the approval of the pope. The papal nuncio usually solicits names from the bishops of a country, consults with priests and leading members of a laity, and then selects three to be forwarded to the Holy See. In Europe, some cathedral chapters have duties to elect bishops. The Eastern Catholic churches generally elect their own bishops. Most Eastern Orthodox churches allow varying amounts of formalised laity or lower clergy influence on the choice of bishops. This also applies in those Eastern churches which are in union with the pope, though it is required that he give assent.",
"title": "Christian churches"
},
{
"paragraph_id": 36,
"text": "The pope, in addition to being the Bishop of Rome and spiritual head of the Catholic Church, is also the Patriarch of the Latin Church. Each bishop within the Latin Church is answerable directly to the Pope and not any other bishop except to metropolitans in certain oversight instances. The pope previously used the title Patriarch of the West, but this title was dropped from use in 2006, a move which caused some concern within the Eastern Orthodox Communion as, to them, it implied wider papal jurisdiction.",
"title": "Christian churches"
},
{
"paragraph_id": 37,
"text": "The Catholic Church does recognise as valid (though illicit) ordinations done by breakaway Catholic, Old Catholic or Oriental bishops, and groups descended from them; it also regards as both valid and licit those ordinations done by bishops of the Eastern churches, so long as those receiving the ordination conform to other canonical requirements (for example, is an adult male) and an eastern orthodox rite of episcopal ordination, expressing the proper functions and sacramental status of a bishop, is used; this has given rise to the phenomenon of episcopi vagantes (for example, clergy of the Independent Catholic groups which claim apostolic succession, though this claim is rejected by both Catholicism and Eastern Orthodoxy). With respect to Lutheranism, \"the Catholic Church has never officially expressed its judgement on the validity of orders as they have been handed down by episcopal succession in these two national Lutheran churches\" (the Evangelical Lutheran Church of Sweden and the Evangelical Lutheran Church of Finland) though it does \"question how the ecclesiastical break in the 16th century has affected the apostolicity of the churches of the Reformation and thus the apostolicity of their ministry\". Since Pope Leo XIII issued the bull Apostolicae curae in 1896, the Catholic Church has insisted that Anglican orders are invalid because of the Reformed changes in the Anglican ordination rites of the 16th century and divergence in understanding of the theology of priesthood, episcopacy and Eucharist. However, since the 1930s, Utrecht Old Catholic bishops (recognised by the Holy See as validly ordained) have sometimes taken part in the ordination of Anglican bishops. According to the writer Timothy Dufort, by 1969, all Church of England bishops had acquired Old Catholic lines of apostolic succession recognised by the Holy See. This development has been used to argue that the strain of apostolic succession has been re-introduced into Anglicanism, at least within the Church of England. However, other issues, such as the Anglican ordination of women, is at variance with Catholic understanding of Christian teaching, and have contributed to the reaffirmation of Catholic rejection of Anglican ordinations.",
"title": "Christian churches"
},
{
"paragraph_id": 38,
"text": "The Eastern Orthodox Churches do not accept the validity of any ordinations performed by the Independent Catholic groups, as Eastern Orthodoxy considers to be spurious any consecration outside the church as a whole. Eastern Orthodoxy considers apostolic succession to exist only within the Universal Church, and not through any authority held by individual bishops; thus, if a bishop ordains someone to serve outside the (Eastern Orthodox) Church, the ceremony is ineffectual, and no ordination has taken place regardless of the ritual used or the ordaining prelate's position within the Eastern Orthodox Churches.",
"title": "Christian churches"
},
{
"paragraph_id": 39,
"text": "The position of the Catholic Church is slightly different. Whilst it does recognise the validity of the orders of certain groups which separated from communion with Holy See (for instance, the ordinations of the Old Catholics in communion with Utrecht, as well as the Polish National Catholic Church - which received its orders directly from Utrecht, and was until recently part of that communion), Catholicism does not recognise the orders of any group whose teaching is at variance with what they consider the core tenets of Christianity; this is the case even though the clergy of the Independent Catholic groups may use the proper ordination ritual. There are also other reasons why the Holy See does not recognise the validity of the orders of the Independent clergy:",
"title": "Christian churches"
},
{
"paragraph_id": 40,
"text": "Whilst members of the Independent Catholic movement take seriously the issue of valid orders, it is highly significant that the relevant Vatican Congregations tend not to respond to petitions from Independent Catholic bishops and clergy who seek to be received into communion with the Holy See, hoping to continue in some sacramental role. In those instances where the pope does grant reconciliation, those deemed to be clerics within the Independent Old Catholic movement are invariably admitted as laity and not priests or bishops.",
"title": "Christian churches"
},
{
"paragraph_id": 41,
"text": "There is a mutual recognition of the validity of orders amongst Catholic, Eastern Orthodox, Old Catholic, Oriental Orthodox and Assyrian Church of the East churches.",
"title": "Christian churches"
},
{
"paragraph_id": 42,
"text": "Some provinces of the Anglican Communion have begun ordaining women as bishops in recent decades – for example, England, Ireland, Scotland, Wales, the United States, Australia, New Zealand, Canada and Cuba. The first woman to be consecrated a bishop within Anglicanism was Barbara Harris, who was ordained in the United States in 1989. In 2006, Katharine Jefferts Schori, the Episcopal Bishop of Nevada, became the first woman to become the presiding bishop of the Episcopal Church.",
"title": "Christian churches"
},
{
"paragraph_id": 43,
"text": "In the Evangelical Lutheran Church in America (ELCA) and the Evangelical Lutheran Church in Canada (ELCIC), the largest Lutheran Church bodies in the United States and Canada, respectively, and roughly based on the Nordic Lutheran national churches (similar to that of the Church of England), bishops are elected by Synod Assemblies, consisting of both lay members and clergy, for a term of six years, which can be renewed, depending upon the local synod's \"constitution\" (which is mirrored on either the ELCA or ELCIC's national constitution). Since the implementation of concordats between the ELCA and the Episcopal Church of the United States and the ELCIC and the Anglican Church of Canada, all bishops, including the presiding bishop (ELCA) or the national bishop (ELCIC), have been consecrated using the historic succession in line with bishops from the Evangelical Lutheran Church of Sweden, with at least one Anglican bishop serving as co-consecrator.",
"title": "Christian churches"
},
{
"paragraph_id": 44,
"text": "Since going into ecumenical communion with their respective Anglican body, bishops in the ELCA or the ELCIC not only approve the \"rostering\" of all ordained pastors, diaconal ministers, and associates in ministry, but they serve as the principal celebrant of all pastoral ordination and installation ceremonies, diaconal consecration ceremonies, as well as serving as the \"chief pastor\" of the local synod, upholding the teachings of Martin Luther as well as the documentations of the Ninety-Five Theses and the Augsburg Confession. Unlike their counterparts in the United Methodist Church, ELCA and ELCIC synod bishops do not appoint pastors to local congregations (pastors, like their counterparts in the Episcopal Church, are called by local congregations). The presiding bishop of the ELCA and the national bishop of the ELCIC, the national bishops of their respective bodies, are elected for a single 6-year term and may be elected to an additional term.",
"title": "Christian churches"
},
{
"paragraph_id": 45,
"text": "Although ELCA agreed with the Episcopal Church to limit ordination to the bishop \"ordinarily\", ELCA pastor-ordinators are given permission to perform the rites in \"extraordinary\" circumstance. In practice, \"extraordinary\" circumstance have included disagreeing with Episcopalian views of the episcopate, and as a result, ELCA pastors ordained by other pastors are not permitted to be deployed to Episcopal Churches (they can, however, serve in Presbyterian Church USA, United Methodist Church, Reformed Church in America, and Moravian Church congregations, as the ELCA is in full communion with these denominations). The Lutheran Church–Missouri Synod (LCMS) and the Wisconsin Evangelical Lutheran Synod (WELS), the second and third largest Lutheran bodies in the United States and the two largest Confessional Lutheran bodies in North America, do not follow an episcopal form of governance, settling instead on a form of quasi-congregationalism patterned off what they believe to be the practice of the early church. The second largest of the three predecessor bodies of the ELCA, the American Lutheran Church, was a congregationalist body, with national and synod presidents before they were re-titled as bishops (borrowing from the Lutheran churches in Germany) in the 1980s. With regard to ecclesial discipline and oversight, national and synod presidents typically function similarly to bishops in episcopal bodies.",
"title": "Christian churches"
},
{
"paragraph_id": 46,
"text": "In the African Methodist Episcopal Church, \"Bishops are the Chief Officers of the Connectional Organization. They are elected for life by a majority vote of the General Conference which meets every four years.\"",
"title": "Christian churches"
},
{
"paragraph_id": 47,
"text": "In the Christian Methodist Episcopal Church in the United States, bishops are administrative superintendents of the church; they are elected by \"delegate\" votes for as many years deemed until the age of 74, then the bishop must retire. Among their duties, are responsibility for appointing clergy to serve local churches as pastor, for performing ordinations, and for safeguarding the doctrine and discipline of the church. The General Conference, a meeting every four years, has an equal number of clergy and lay delegates. In each Annual Conference, CME bishops serve for four-year terms. CME Church bishops may be male or female.",
"title": "Christian churches"
},
{
"paragraph_id": 48,
"text": "In the United Methodist Church (the largest branch of Methodism in the world) bishops serve as administrative and pastoral superintendents of the church. They are elected for life from among the ordained elders (presbyters) by vote of the delegates in regional (called jurisdictional) conferences, and are consecrated by the other bishops present at the conference through the laying on of hands. In the United Methodist Church bishops remain members of the \"Order of Elders\" while being consecrated to the \"Office of the Episcopacy\". Within the United Methodist Church only bishops are empowered to consecrate bishops and ordain clergy. Among their most critical duties is the ordination and appointment of clergy to serve local churches as pastor, presiding at sessions of the Annual, Jurisdictional, and General Conferences, providing pastoral ministry for the clergy under their charge, and safeguarding the doctrine and discipline of the church. Furthermore, individual bishops, or the Council of Bishops as a whole, often serve a prophetic role, making statements on important social issues and setting forth a vision for the denomination, though they have no legislative authority of their own. In all of these areas, bishops of the United Methodist Church function very much in the historic meaning of the term. According to the Book of Discipline of the United Methodist Church, a bishop's responsibilities are:",
"title": "Christian churches"
},
{
"paragraph_id": 49,
"text": "Leadership.—Spiritual and Temporal—",
"title": "Christian churches"
},
{
"paragraph_id": 50,
"text": "Presidential Duties.—1. To preside in the General, Jurisdictional, Central, and Annual Conferences. 2. To form the districts after consultation with the district superintendents and after the number of the same has been determined by vote of the Annual Conference. 3. To appoint the district superintendents annually (¶¶ 517–518). 4. To consecrate bishops, to ordain elders and deacons, to consecrate diaconal ministers, to commission deaconesses and home missionaries, and to see that the names of the persons commissioned and consecrated are entered on the journals of the conference and that proper credentials are furnished to these persons.",
"title": "Christian churches"
},
{
"paragraph_id": 51,
"text": "Working with Ministers.—1. To make and fix the appointments in the Annual Conferences, Provisional Annual Conferences, and Missions as the Discipline may direct (¶¶ 529–533).",
"title": "Christian churches"
},
{
"paragraph_id": 52,
"text": "2. To divide or to unite a circuit(s), stations(s), or mission(s) as judged necessary for missionary strategy and then to make appropriate appointments. 3. To read the appointments of deaconesses, diaconal ministers, lay persons in service under the World Division of the General Board of Global Ministries, and home missionaries. 4. To fix the Charge Conference membership of all ordained ministers appointed to ministries other than the local church in keeping with ¶443.3. 5. To transfer, upon the request of the receiving bishop, ministerial member(s) of one Annual Conference to another, provided said member(s) agrees to transfer; and to send immediately to the secretaries of both conferences involved, to the conference Boards of Ordained Ministry, and to the clearing house of the General Board of Pensions written notices of the transfer of members and of their standing in the course of study if they are undergraduates.",
"title": "Christian churches"
},
{
"paragraph_id": 53,
"text": "In each Annual Conference, United Methodist bishops serve for four-year terms, and may serve up to three terms before either retirement or appointment to a new Conference. United Methodist bishops may be male or female, with Marjorie Matthews being the first woman to be consecrated a bishop in 1980.",
"title": "Christian churches"
},
{
"paragraph_id": 54,
"text": "The collegial expression of episcopal leadership in the United Methodist Church is known as the Council of Bishops. The Council of Bishops speaks to the church and through the church into the world and gives leadership in the quest for Christian unity and interreligious relationships. The Conference of Methodist Bishops includes the United Methodist Council of Bishops plus bishops from affiliated autonomous Methodist or United Churches.",
"title": "Christian churches"
},
{
"paragraph_id": 55,
"text": "John Wesley consecrated Thomas Coke a \"General Superintendent\", and directed that Francis Asbury also be consecrated for the United States of America in 1784, where the Methodist Episcopal Church first became a separate denomination apart from the Church of England. Coke soon returned to England, but Asbury was the primary builder of the new church. At first he did not call himself bishop, but eventually submitted to the usage by the denomination.",
"title": "Christian churches"
},
{
"paragraph_id": 56,
"text": "Notable bishops in United Methodist history include Coke, Asbury, Richard Whatcoat, Philip William Otterbein, Martin Boehm, Jacob Albright, John Seybert, Matthew Simpson, John S. Stamm, William Ragsdale Cannon, Marjorie Matthews, Leontine T. Kelly, William B. Oden, Ntambo Nkulu Ntanda, Joseph Sprague, William Henry Willimon, and Thomas Bickerton.",
"title": "Christian churches"
},
{
"paragraph_id": 57,
"text": "In the Church of Jesus Christ of Latter-day Saints, the Bishop is the leader of a local congregation, called a ward. As with most LDS priesthood holders, the bishop is a part-time lay minister and earns a living through other employment. As such, it is his duty to preside, call local leaders, and judge the worthiness of members for certain activities. The bishop does not deliver sermons at every service (generally asking members to do so), but is expected to be a spiritual guide for his congregation. It is therefore believed that he has both the right and ability to receive divine inspiration (through the Holy Spirit) for the ward under his direction. Because it is a part-time position, all able members are expected to assist in the management of the ward by holding delegated lay positions (for example, women's and youth leaders, teachers) referred to as callings. The bishop is especially responsible for leading the youth, in connection with the fact that a bishop is the president of the Aaronic priesthood in his ward (and is thus a form of Mormon Kohen). Although members are asked to confess serious sins to him, unlike the Catholic Church, he is not the instrument of divine forgiveness, but merely a guide through the repentance process (and a judge in case transgressions warrant excommunication or other official discipline). The bishop is also responsible for the physical welfare of the ward, and thus collects tithing and fast offerings and distributes financial assistance where needed.",
"title": "Christian churches"
},
{
"paragraph_id": 58,
"text": "A literal descendant of Aaron has \"legal right\" to act as a bishop after being found worthy and ordained by the First Presidency. In the absence of a literal descendant of Aaron, a high priest in the Melchizedek priesthood is called to be a bishop. Each bishop is selected from resident members of the ward by the stake presidency with approval of the First Presidency, and chooses two counselors to form a bishopric. An priesthood holder called as bishop must be ordained a high priest if he is not already one, unlike the similar function of branch president. In special circumstances (such as a ward consisting entirely of young university students), a bishop may be chosen from outside the ward. Traditionally, bishops are married, though this is not always the case. A bishop is typically released after about five years and a new bishop is called to the position. Although the former bishop is released from his duties, he continues to hold the Aaronic priesthood office of bishop. Church members frequently refer to a former bishop as \"Bishop\" as a sign of respect and affection.",
"title": "Christian churches"
},
{
"paragraph_id": 59,
"text": "Latter-day Saint bishops do not wear any special clothing or insignia the way clergy in many other churches do, but are expected to dress and groom themselves neatly and conservatively per their local culture, especially when performing official duties. Bishops (as well as other members of the priesthood) can trace their line of authority back to Joseph Smith, who, according to church doctrine, was ordained to lead the church in modern times by the ancient apostles Peter, James, and John, who were ordained to lead the Church by Jesus Christ.",
"title": "Christian churches"
},
{
"paragraph_id": 60,
"text": "At the global level, the presiding bishop oversees the temporal affairs (buildings, properties, commercial corporations, and so on) of the worldwide church, including the church's massive global humanitarian aid and social welfare programs. The presiding bishop has two counselors; the three together form the presiding bishopric. As opposed to ward bishoprics, where the counselors do not hold the office of bishop, all three men in the presiding bishopric hold the office of bishop, and thus the counselors, as with the presiding bishop, are formally referred to as \"Bishop\".",
"title": "Christian churches"
},
{
"paragraph_id": 61,
"text": "The New Apostolic Church (NAC) knows three classes of ministries: Deacons, Priests and Apostles. The Apostles, who are all included in the apostolate with the Chief Apostle as head, are the highest ministries.",
"title": "Christian churches"
},
{
"paragraph_id": 62,
"text": "Of the several kinds of priest....ministries, the bishop is the highest. Nearly all bishops are set in line directly from the chief apostle. They support and help their superior apostle.",
"title": "Christian churches"
},
{
"paragraph_id": 63,
"text": "In the Church of God in Christ (COGIC), the ecclesiastical structure is composed of large dioceses that are called \"jurisdictions\" within COGIC, each under the authority of a bishop, sometimes called \"state bishops\". They can either be made up of large geographical regions of churches or churches that are grouped and organized together as their own separate jurisdictions because of similar affiliations, regardless of geographical location or dispersion. Each state in the U.S. has at least one jurisdiction while others may have several more, and each jurisdiction is usually composed of between 30 and 100 churches. Each jurisdiction is then broken down into several districts, which are smaller groups of churches (either grouped by geographical situation or by similar affiliations) which are each under the authority of District Superintendents who answer to the authority of their jurisdictional/state bishop. There are currently over 170 jurisdictions in the United States, and over 30 jurisdictions in other countries. The bishops of each jurisdiction, according to the COGIC Manual, are considered to be the modern day equivalent in the church of the early apostles and overseers of the New Testament church, and as the highest ranking clergymen in the COGIC, they are tasked with the responsibilities of being the head overseers of all religious, civil, and economic ministries and protocol for the church denomination. They also have the authority to appoint and ordain local pastors, elders, ministers, and reverends within the denomination. The bishops of the COGIC denomination are all collectively called \"The Board of Bishops\". From the Board of Bishops, and the General Assembly of the COGIC, the body of the church composed of clergy and lay delegates that are responsible for making and enforcing the bylaws of the denomination, every four years, twelve bishops from the COGIC are elected as \"The General Board\" of the church, who work alongside the delegates of the General Assembly and Board of Bishops to provide administration over the denomination as the church's head executive leaders. One of twelve bishops of the General Board is also elected the \"presiding bishop\" of the church, and two others are appointed by the presiding bishop himself, as his first and second assistant presiding bishops.",
"title": "Christian churches"
},
{
"paragraph_id": 64,
"text": "Bishops in the Church of God in Christ usually wear black clergy suits which consist of a black suit blazer, black pants, a purple or scarlet clergy shirt and a white clerical collar, which is usually referred to as \"Class B Civic attire\". Bishops in COGIC also typically wear the Anglican Choir Dress style vestments of a long purple or scarlet chimere, cuffs, and tippet worn over a long white rochet, and a gold pectoral cross worn around the neck with the tippet. This is usually referred to as \"Class A Ceremonial attire\". The bishops of COGIC alternate between Class A Ceremonial attire and Class B Civic attire depending on the protocol of the religious services and other events they have to attend.",
"title": "Christian churches"
},
{
"paragraph_id": 65,
"text": "In the polity of the Church of God (Cleveland, Tennessee), the international leader is the presiding bishop, and the members of the executive committee are executive bishops. Collectively, they supervise and appoint national and state leaders across the world. Leaders of individual states and regions are administrative bishops, who have jurisdiction over local churches in their respective states and are vested with appointment authority for local pastorates. All ministers are credentialed at one of three levels of licensure, the most senior of which is the rank of ordained bishop. To be eligible to serve in state, national, or international positions of authority, a minister must hold the rank of ordained bishop.",
"title": "Christian churches"
},
{
"paragraph_id": 66,
"text": "In 2002, the general convention of the Pentecostal Church of God came to a consensus to change the title of their overseer from general superintendent to bishop. The change was brought on because internationally, the term bishop is more commonly related to religious leaders than the previous title.",
"title": "Christian churches"
},
{
"paragraph_id": 67,
"text": "The title bishop is used for both the general (international leader) and the district (state) leaders. The title is sometimes used in conjunction with the previous, thus becoming general (district) superintendent/bishop.",
"title": "Christian churches"
},
{
"paragraph_id": 68,
"text": "According to the Seventh-day Adventist understanding of the doctrine of the church:",
"title": "Christian churches"
},
{
"paragraph_id": 69,
"text": "\"The \"elders\" (Greek, presbuteros) or \"bishops\" (episkopos) were the most important officers of the church. The term elder means older one, implying dignity and respect. His position was similar to that of the one who had supervision of the synagogue. The term bishop means \"overseer\". Paul used these terms interchangeably, equating elders with overseers or bishops (Acts 20:17,28; Titus 1:5, 7).",
"title": "Christian churches"
},
{
"paragraph_id": 70,
"text": "\"Those who held this position supervised the newly formed churches. Elder referred to the status or rank of the office, while bishop denoted the duty or responsibility of the office—\"overseer\". Since the apostles also called themselves elders (1 Peter 5:1; 2 John 1; 3 John 1), it is apparent that there were both local elders and itinerant elders, or elders at large. But both kinds of elder functioned as shepherds of the congregations.\"",
"title": "Christian churches"
},
{
"paragraph_id": 71,
"text": "The above understanding is part of the basis of Adventist organizational structure. The world wide Seventh-day Adventist church is organized into local districts, conferences or missions, union conferences or union missions, divisions, and finally at the top is the general conference. At each level (with exception to the local districts), there is an elder who is elected president and a group of elders who serve on the executive committee with the elected president. Those who have been elected president would in effect be the \"bishop\" while never actually carrying the title or ordained as such because the term is usually associated with the episcopal style of church governance most often found in Catholic, Anglican, Methodist and some Pentecostal/Charismatic circles.",
"title": "Christian churches"
},
{
"paragraph_id": 72,
"text": "Some Baptists also have begun taking on the title of bishop. In some smaller Protestant denominations and independent churches, the term bishop is used in the same way as pastor, to refer to the leader of the local congregation, and may be male or female. This usage is especially common in African-American churches in the US.",
"title": "Christian churches"
},
{
"paragraph_id": 73,
"text": "In the Church of Scotland, which has a Presbyterian church structure, the word \"bishop\" refers to an ordained person, usually a normal parish minister, who has temporary oversight of a trainee minister. In the Presbyterian Church (USA), the term bishop is an expressive name for a Minister of Word and Sacrament who serves a congregation and exercises \"the oversight of the flock of Christ.\" The term is traceable to the 1789 Form of Government of the PC (USA) and the Presbyterian understanding of the pastoral office.",
"title": "Christian churches"
},
{
"paragraph_id": 74,
"text": "While not considered orthodox Christian, the Ecclesia Gnostica Catholica uses roles and titles derived from Christianity for its clerical hierarchy, including bishops who have much the same authority and responsibilities as in Catholicism.",
"title": "Christian churches"
},
{
"paragraph_id": 75,
"text": "The Salvation Army does not have bishops but has appointed leaders of geographical areas, known as Divisional Commanders. Larger geographical areas, called Territories, are led by a Territorial Commander, who is the highest-ranking officer in that Territory.",
"title": "Christian churches"
},
{
"paragraph_id": 76,
"text": "Jehovah's Witnesses do not use the title 'Bishop' within their organizational structure, but appoint elders to be overseers (to fulfill the role of oversight) within their congregations.",
"title": "Christian churches"
},
{
"paragraph_id": 77,
"text": "The Batak Christian Protestant Church of Indonesia, the most prominent Protestant denomination in Indonesia, uses the term Ephorus instead of bishop.",
"title": "Christian churches"
},
{
"paragraph_id": 78,
"text": "In the Vietnamese syncretist religion of Caodaism, bishops (giáo sư) comprise the fifth of nine hierarchical levels, and are responsible for spiritual and temporal education as well as record-keeping and ceremonies in their parishes. At any one time there are seventy-two bishops. Their authority is described in Section I of the text Tân Luật (revealed through seances in December 1926). Caodai bishops wear robes and headgear of embroidered silk depicting the Divine Eye and the Eight Trigrams. (The color varies according to branch.) This is the full ceremonial dress; the simple version consists of a seven-layered turban.",
"title": "Christian churches"
},
{
"paragraph_id": 79,
"text": "Traditionally, a number of items are associated with the office of a bishop, most notably the mitre and the crosier. Other vestments and insignia vary between Eastern and Western Christianity.",
"title": "Dress and insignia in Christianity"
},
{
"paragraph_id": 80,
"text": "In the Latin Rite of the Catholic Church, the choir dress of a bishop includes the purple cassock with amaranth trim, rochet, purple zucchetto (skull cap), purple biretta, and pectoral cross. The cappa magna may be worn, but only within the bishop's own diocese and on especially solemn occasions. The mitre, zucchetto, and stole are generally worn by bishops when presiding over liturgical functions. For liturgical functions other than the Mass the bishop typically wears the cope. Within his own diocese and when celebrating solemnly elsewhere with the consent of the local ordinary, he also uses the crosier. When celebrating Mass, a bishop, like a priest, wears the chasuble. The Caeremoniale Episcoporum recommends, but does not impose, that in solemn celebrations a bishop should also wear a dalmatic, which can always be white, beneath the chasuble, especially when administering the sacrament of holy orders, blessing an abbot or abbess, and dedicating a church or an altar. The Caeremoniale Episcoporum no longer makes mention of episcopal gloves, episcopal sandals, liturgical stockings (also known as buskins), or the accoutrements that it once prescribed for the bishop's horse. The coat of arms of a Latin Church Catholic bishop usually displays a galero with a cross and crosier behind the escutcheon; the specifics differ by location and ecclesiastical rank (see Ecclesiastical heraldry).",
"title": "Dress and insignia in Christianity"
},
{
"paragraph_id": 81,
"text": "Anglican bishops generally make use of the mitre, crosier, ecclesiastical ring, purple cassock, purple zucchetto, and pectoral cross. However, the traditional choir dress of Anglican bishops retains its late mediaeval form, and looks quite different from that of their Catholic counterparts; it consists of a long rochet which is worn with a chimere.",
"title": "Dress and insignia in Christianity"
},
{
"paragraph_id": 82,
"text": "In the Eastern Churches (Eastern Orthodox, Eastern Rite Catholic) a bishop will wear the mandyas, panagia (and perhaps an enkolpion), sakkos, omophorion and an Eastern-style mitre. Eastern bishops do not normally wear an episcopal ring; the faithful kiss (or, alternatively, touch their forehead to) the bishop's hand. To seal official documents, he will usually use an inked stamp. An Eastern bishop's coat of arms will normally display an Eastern-style mitre, cross, eastern style crosier and a red and white (or red and gold) mantle. The arms of Oriental Orthodox bishops will display the episcopal insignia (mitre or turban) specific to their own liturgical traditions. Variations occur based upon jurisdiction and national customs.",
"title": "Dress and insignia in Christianity"
},
{
"paragraph_id": 83,
"text": "In Catholic, Eastern Orthodox, Oriental Orthodox, Lutheran and Anglican cathedrals there is a special chair set aside for the exclusive use of the bishop. This is the bishop's cathedra and is often called the throne. In some Christian denominations, for example, the Anglican Communion, parish churches may maintain a chair for the use of the bishop when he visits; this is to signify the parish's union with the bishop.",
"title": "Dress and insignia in Christianity"
},
{
"paragraph_id": 84,
"text": "The leader of the Buddhist Churches of America (BCA) is their bishop, The Japanese title for the bishop of the BCA is sochō, although the English title is favored over the Japanese. When it comes to many other Buddhist terms, the BCA chose to keep them in their original language (terms such as sangha and dana), but with some words (including sochō), they changed/translated these terms into English words.",
"title": "The term's use in non-Christian religions"
},
{
"paragraph_id": 85,
"text": "Between 1899 and 1944, the BCA held the name Buddhist Mission of North America. The leader of the Buddhist Mission of North America was called kantoku (superintendent/director) between 1899 and 1918. In 1918 the kantoku was promoted to bishop (sochō). However, according to George J. Tanabe, the title \"bishop\" was in practice already used by Hawaiian Shin Buddhists (in Honpa Hongwanji Mission of Hawaii) even when the official title was kantoku.",
"title": "The term's use in non-Christian religions"
},
{
"paragraph_id": 86,
"text": "Bishops are also present in other Japanese Buddhist organizations. Higashi Hongan-ji's North American District, Honpa Honganji Mission of Hawaii, Jodo Shinshu Buddhist Temples of Canada, a Jodo Shu temple in Los Angeles, the Shingon temple Koyasan Buddhist Temple, Sōtō Mission in Hawai‘i (a Soto Zen Buddhist institution), and the Sōtō Zen Buddhist Community of South America (Comunidade Budista Sōtō Zenshū da América do Sul) all have or have had leaders with the title bishop. As for the Sōtō Zen Buddhist Community of South America, the Japanese title is sōkan, but the leader is in practice referred to as \"bishop\".",
"title": "The term's use in non-Christian religions"
},
{
"paragraph_id": 87,
"text": "Tenrikyo is a Japanese New Religion with influences from both Shinto and Buddhism. The leader of the Tenrikyo North American Mission has the title of bishop.",
"title": "The term's use in non-Christian religions"
}
] | A bishop is an ordained member of the clergy who is entrusted with a position of authority and oversight in a religious institution. In Christianity, bishops are normally responsible for the governance and administration of dioceses. The role or office of the bishop is called episcopacy. Organizationally, several Christian denominations utilize ecclesiastical structures that call for the position of bishops, while other denominations have dispensed with this office, seeing it as a symbol of power. Bishops have also exercised political authority within their dioceses. Traditionally, bishops claim apostolic succession, a direct historical lineage dating back to the original Twelve Apostles or Saint Paul. The bishops are by doctrine understood as those who possess the full priesthood given by Jesus Christ, and therefore may ordain other clergy, including other bishops. A person ordained as a deacon, priest, and then bishop is understood to hold the fullness of the ministerial priesthood, given responsibility by Christ to govern, teach and sanctify the Body of Christ. Priests, deacons and lay ministers co-operate and assist their bishops in pastoral ministry. Some Pentecostal and other Protestant denominations have bishops who oversee congregations, though they do not claim apostolic succession. | 2001-10-02T20:35:14Z | 2023-12-20T16:19:04Z | [
"Template:Original research",
"Template:Further",
"Template:Blockquote",
"Template:Cite book",
"Template:Redirect",
"Template:Use dmy dates",
"Template:Christianity",
"Template:Transliteration",
"Template:Bibleverse",
"Template:Dead link",
"Template:Commons category",
"Template:Cite EB1911",
"Template:Christianity footer",
"Template:Short description",
"Template:Clear",
"Template:Webarchive",
"Template:Cbignore",
"Template:Lang-grc",
"Template:Main",
"Template:Term",
"Template:Div col",
"Template:About",
"Template:Refbegin",
"Template:Refend",
"Template:Div col end",
"Template:Authority control",
"Template:Lang",
"Template:More citations needed section",
"Template:Glossary",
"Template:Portal",
"Template:Cite news",
"Template:Cite legislation UK",
"Template:See also",
"Template:Defn",
"Template:Notelist",
"Template:Cite web",
"Template:Wiktionary",
"Template:Lutheran Divine Service",
"Template:Anglicanism (footer)",
"Template:Sfn",
"Template:Efn",
"Template:Reflist",
"Template:Cite journal",
"Template:C.",
"Template:Citation needed",
"Template:Glossary end",
"Template:-"
] | https://en.wikipedia.org/wiki/Bishop |
4,093 | Bertrand Andrieu | Bertrand Andrieu (24 November 1761 – 6 December 1822) was a French engraver of medals. He was born in Bordeaux. In France, he was considered the restorer of the art of medal engraving, which had declined after the time of Louis XIV. During the last twenty years of his life, the French government commissioned him to undertake every major work of importance. | [
{
"paragraph_id": 0,
"text": "Bertrand Andrieu (24 November 1761 – 6 December 1822) was a French engraver of medals. He was born in Bordeaux. In France, he was considered as the restorer of the art, which had declined after the time of Louis XIV. During the last twenty years of his life, the French government commissioned him to undertake every major work of importance.",
"title": ""
},
{
"paragraph_id": 1,
"text": "",
"title": "External links"
}
] | Bertrand Andrieu was a French engraver of medals. He was born in Bordeaux. In France, he was considered the restorer of the art of medal engraving, which had declined after the time of Louis XIV. During the last twenty years of his life, the French government commissioned him to undertake every major work of importance. | 2022-12-08T05:21:02Z | [
"Template:Short description",
"Template:Sfnp",
"Template:EB1911",
"Template:FrenchSculptureCensus",
"Template:France-artist-stub",
"Template:Printmaker-stub",
"Template:Reflist",
"Template:Citation",
"Template:Commons category",
"Template:Authority control (arts)"
] | https://en.wikipedia.org/wiki/Bertrand_Andrieu |
4,097 | Bordeaux | Bordeaux (/bɔːrˈdoʊ/ bor-DOH, French: [bɔʁdo] ; Gascon Occitan: Bordèu [buɾˈðɛw]; Basque: Bordele) is a city on the river Garonne in the Gironde department, southwestern France. A port city, it is the capital of the Nouvelle-Aquitaine region, as well as the prefecture of the Gironde department. Its inhabitants are called "Bordelais" (masculine) or "Bordelaises" (feminine). The term "Bordelais" may also refer to the city and its surrounding region.
The city of Bordeaux proper had a population of 259,809 in 2020 within its small municipal territory of 49 km² (19 sq mi), but together with its suburbs and exurbs the Bordeaux metropolitan area had a population of 1,376,375 that same year (Jan. 2020 census), the sixth-most populated in France after Paris, Lyon, Marseille, Lille, and Toulouse.
Bordeaux and 27 suburban municipalities form the Bordeaux Metropolis, an indirectly elected metropolitan authority now in charge of wider metropolitan issues. The Bordeaux Metropolis, with a population of 819,604 at the January 2020 census, is the fifth most populated metropolitan council in France after those of Paris, Marseille, Lyon and Lille.
Bordeaux is a world capital of wine: many châteaux and vineyards stand on the hillsides of the Gironde, and the city is home to the world's main wine fair, Vinexpo. Bordeaux is also one of the centers of gastronomy and business tourism for the organization of international congresses. It is a central and strategic hub for the aeronautics, military and space sector, home to international companies such as Dassault Aviation, Ariane Group, Safran and Thalès. The link with aviation dates back to 1910, the year the first airplane flew over the city. A crossroads of knowledge through university research, it is home to one of the only two megajoule lasers in the world, as well as a university population of more than 130,000 students within the Bordeaux Metropolis.
Bordeaux is an international tourist destination for its architectural and cultural heritage, with more than 350 historic monuments, making it, after Paris, the city with the most listed or registered monuments in France. The "Pearl of Aquitaine" was voted European Destination of the Year in a 2015 online poll. The metropolis has also received awards and rankings from international organizations: in 1957, Bordeaux was awarded the Europe Prize for its efforts in transmitting the European ideal, and in June 2007 the Port of the Moon in historic Bordeaux was inscribed on the UNESCO World Heritage List for its outstanding architecture and urban ensemble and in recognition of Bordeaux's international importance over the last 2,000 years. Bordeaux is also ranked as a Sufficiency city by the Globalization and World Cities Research Network.
Historical affiliations: Roman Republic, c. 60–27 BC; Roman Empire, 27 BC–AD 395; Gallic Empire, 260–274; Western Roman Empire, 395–418; Visigothic Kingdom, 395–6th century; Francia, 6th century–843; West Francia, 843–987; Kingdom of France, 987–1154; Angevin Empire, 1154–1214; Kingdom of England, 1214–1453; Kingdom of France, 1453–1792; French First Republic, 1792–1804; First French Empire, 1804–1814; Kingdom of France, 1814–1815; First French Empire, 1815; Kingdom of France, 1815–1830; July Monarchy, 1830–1848; French Second Republic, 1848–1852; Second French Empire, 1852–1870; French Third Republic, 1870–1940; Military Administration in France (part of German-occupied Europe), 1940–1944; Provisional Government of the French Republic, 1944–1946; French Fourth Republic, 1946–1958; French Fifth Republic, 1958–present.
Around 300 BC the region was settled by a Celtic tribe, the Bituriges Vivisci, who named the town Burdigala, a name probably of Aquitanian origin.
In 107 BC, the Battle of Burdigala was fought between the Romans, who were defending the Allobroges, a Gallic tribe allied to Rome, and the Tigurini led by Divico. The Romans were defeated and their commander, the consul Lucius Cassius Longinus, was killed in battle.
The city came under Roman rule around 60 BC, and it became an important commercial centre for tin and lead. The amphitheatre and the monument Les Piliers de Tutelle were built during this period.
In 276, it was sacked by the Vandals. The Vandals attacked again in 409, followed by the Visigoths in 414, and the Franks in 498, and afterwards the city fell into a period of relative obscurity.
In the late sixth century the city re-emerged as the seat of a county and an archdiocese within the Merovingian kingdom of the Franks, but royal Frankish power was never strong. The city started to play a regional role as a major urban center on the fringes of the newly founded Frankish Duchy of Vasconia. Around 585 Gallactorius was made Count of Bordeaux and fought the Basques.
In 732, the city was plundered by the troops of Abd er Rahman who stormed the fortifications and overwhelmed the Aquitanian garrison. Duke Eudes mustered a force to engage the Umayyads, eventually engaging them in the Battle of the River Garonne somewhere near the river Dordogne. The battle had a high death toll, and although Eudes was defeated he had enough troops to engage in the Battle of Poitiers and so retain his grip on Aquitaine.
In 737, following his father Eudes's death, the Aquitanian duke Hunald led a rebellion, to which Charles Martel responded by launching an expedition that captured Bordeaux. However, the city was not retained for long: the following year the Frankish commander clashed in battle with the Aquitanians, but then left to take on hostile Burgundian authorities and magnates. In 745 Aquitaine faced another expedition, in which Charles's sons Pepin and Carloman challenged Hunald's power and defeated him. Hunald's son Waifer replaced him and confirmed Bordeaux as the capital city (along with Bourges in the north).
During the last stage of the war against Aquitaine (760–768), it was one of Waifer's last important strongholds to fall to the troops of King Pepin the Short. Charlemagne built the fortress of Fronsac (Frontiacus, Franciacus) near Bordeaux on a hill across the border with the Basques (Wascones), where Basque commanders came and pledged their loyalty (769).
In 778, Seguin (or Sihimin) was appointed count of Bordeaux, probably undermining the power of the Duke Lupo, and possibly leading to the Battle of Roncevaux Pass. In 814, Seguin was made Duke of Vasconia, but was deposed in 816 for failing to suppress a Basque rebellion. Under the Carolingians, the Counts of Bordeaux sometimes held the title concomitantly with that of Duke of Vasconia. They were to keep the Basques in check and defend the mouth of the Garonne from the Vikings when they appeared in c. 844. In autumn 845, when the Vikings were raiding Bordeaux and Saintes, Count Seguin II marched on them but was captured and executed.
Although the port of Bordeaux was a bustling trade centre, the stability and success of the city were threatened by Viking and Norman incursions and political instability. The restoration of the Ramnulfid Dukes of Aquitaine under William IV and his successors (known as the House of Poitiers) brought continuity of government.
From the 12th to the 15th century, Bordeaux flourished once more following the marriage of Eléonore, Duchess of Aquitaine and the last of the House of Poitiers, to Henry II Plantagenêt, Count of Anjou and the grandson of Henry I of England, who succeeded to the English crown months after their wedding, bringing into being the vast Angevin Empire, which stretched from the Pyrenees to Ireland. After Henry granted the city tax-free trade status with England, he was adored by the locals, as the wine trade, their main source of income, became even more profitable, and the city benefited from imports of cloth and wheat. The belfry (Grosse Cloche) and the city cathedral St-André were built, the latter in 1227, incorporating the artisan quarter of Saint-Paul. Under the terms of the Treaty of Brétigny, Bordeaux became briefly the capital of an independent state (1362–1372) under Edward, the Black Prince, but after the Battle of Castillon (1453) it was annexed by France.
In 1462, Bordeaux created a local parliament.
Bordeaux adhered to the Fronde, being effectively annexed to the Kingdom of France only in 1653, when the army of Louis XIV entered the city.
The 18th century saw another golden age of Bordeaux. The Port of the Moon supplied the majority of Europe with coffee, cocoa, sugar, cotton and indigo, becoming France's busiest port and the second busiest port in the world after London. Many downtown buildings (about 5,000), including those on the quays, are from this period.
Bordeaux was also a major trading centre for slaves. In total, the Bordeaux shipowners deported 150,000 Africans in some 500 expeditions.
At the beginning of the French Revolution (1789), many local revolutionaries were members of the Girondists. This party represented the provincial bourgeoisie, favorable towards abolishing aristocratic privileges but opposed to the Revolution's social dimension. In 1793, the Montagnards led by Robespierre and Marat came to power. Fearing a bourgeois misappropriation of the Revolution, they executed a great number of Girondists. During the purge, the local Montagnard section renamed the city of Bordeaux "Commune-Franklin" (Franklin-municipality) in homage to Benjamin Franklin.
At the same time, in 1791, a slave revolt broke out at Saint-Domingue (present-day Haiti), the most profitable of the French colonies. Three years later, the Montagnard Convention abolished slavery. In 1802, Napoleon revoked the manumission law but lost the war against the army of former slaves. In 1804, Haiti became independent. The loss of this "pearl" of the West Indies caused the collapse of Bordeaux's port economy, which was dependent on the colonial trade and the trade in slaves.
Towards the end of the Peninsular War, in 1814, the Duke of Wellington sent William Beresford with two divisions, and Bordeaux was seized with little resistance. Bordeaux was largely anti-Bonapartist, and the majority supported the Bourbons. The British troops were treated as liberators.
From the Bourbon Restoration, the economy of Bordeaux was rebuilt by traders and shipowners. They undertook the construction of the first bridge of Bordeaux and of customs warehouses, and shipping traffic grew through trade with the new African colonies.
Georges-Eugène Haussmann, a longtime prefect of Bordeaux, used Bordeaux's 18th-century large-scale rebuilding as a model when he was asked by Emperor Napoleon III to transform the quasi-medieval Paris into a "modern" capital that would make France proud. Victor Hugo found the town so beautiful he said: "Take Versailles, add Antwerp, and you have Bordeaux".
In 1870, at the beginning of the Franco-Prussian war against Prussia, the French government temporarily relocated to Bordeaux from Paris. That recurred during World War I and again very briefly during World War II, when it became clear that Paris would fall into German hands.
During World War II, Bordeaux fell under German occupation.
In May and June 1940, Bordeaux was the site of the life-saving actions of the Portuguese consul-general, Aristides de Sousa Mendes, who illegally granted thousands of Portuguese visas, which were needed to pass the Spanish border, to refugees fleeing the German occupation.
From 1941 to 1943, the Italian Royal Navy established BETASOM, a submarine base at Bordeaux. Italian submarines participated in the Battle of the Atlantic from that base, which was also a major base for German U-boats as headquarters of 12th U-boat Flotilla. The massive, reinforced concrete U-boat pens have proved impractical to demolish and are now partly used as a cultural center for exhibitions.
In 2007, 40% of the city's surface area, located around the Port of the Moon, was listed as a World Heritage Site. UNESCO inscribed Bordeaux as "an inhabited historic city, an outstanding urban and architectural ensemble, created in the age of the Enlightenment, whose values continued up to the first half of the 20th century, with more protected buildings than any other French city except Paris".
Bordeaux is located close to the European Atlantic coast, in the southwest of France and in the north of the Aquitaine region. It is around 500 km (310 mi) southwest of Paris. The city is built on a bend of the river Garonne and is divided into two parts: the right bank to the east and the left bank to the west. Historically the left bank is more developed because the water flowing on the outside of the bend scours a channel deep enough to allow the passage of merchant ships, which used to offload on this side of the river. Today, however, the right bank is also developing, with new urban projects. In Bordeaux, the Garonne is accessible to ocean liners through the Gironde estuary. The right bank of the Garonne is a low-lying, often marshy plain.
Bordeaux's climate can be classified as oceanic (Köppen climate classification Cfb), bordering on a humid subtropical climate (Cfa). However, the Trewartha climate classification system classifies the city as solely humid subtropical, due to a recent rise in temperatures related, to some degree or another, to climate change and the city's urban heat island.
The city enjoys cool to mild, wet winters, due to its relatively southerly latitude and the prevalence of mild westerly winds from the Atlantic. Its summers are warm and somewhat drier, although wet enough to avoid a Mediterranean classification. Frosts occur annually, but snowfall is quite infrequent, occurring on no more than 3–4 days a year. The summer of 2003 set a record with an average temperature of 23.3 °C (73.9 °F), while February 1956 was the coldest month on record, with an average temperature of −2.00 °C (28.4 °F) at Bordeaux–Mérignac Airport.
Bordeaux is a major centre for business, with the sixth-largest metropolitan population in France. It serves as a major regional center for trade, administration, services and industry.
The vine was introduced to the Bordeaux region by the Romans, probably in the mid-first century, to provide wine for local consumption, and wine production has been continuous in the region since.
Bordeaux wine growing area has about 116,160 hectares (287,000 acres) of vineyards, 57 appellations, 10,000 wine-producing estates (châteaux) and 13,000 grape growers. With an annual production of approximately 960 million bottles, the Bordeaux area produces large quantities of everyday wine as well as some of the most expensive wines in the world. Included among the latter are the area's five premier cru (First Growth) red wines (four from Médoc and one, Château Haut-Brion, from Graves), established by the Bordeaux Wine Official Classification of 1855.
Both red and white wines are made in the Bordeaux region. Red Bordeaux wine is called claret in the United Kingdom. Red wines are generally made from a blend of grapes, and may be made from Cabernet Sauvignon, Merlot, Cabernet Franc, Petit verdot, Malbec, and, less commonly in recent years, Carménère.
White Bordeaux is made from Sauvignon blanc, Sémillon, and Muscadelle. Sauternes is a sub-region of Graves known for its intensely sweet, white, dessert wines such as Château d'Yquem.
Because of a wine glut (wine lake) in generic production, the price squeeze induced by increasingly strong international competition, and vine pull schemes, the number of growers has recently dropped from 14,000 and the area under vine has also decreased significantly. In the meantime, the global demand for first growths and the most famous labels has markedly increased and their prices have skyrocketed.
The Cité du Vin, a museum as well as a venue for exhibitions, shows, film screenings and academic seminars on the theme of wine, opened its doors in June 2016.
The Laser Mégajoule is one of the most powerful lasers in the world, allowing fundamental research and the development of laser and plasma technologies.
Some 20,000 people work in the aeronautics industry in Bordeaux. The city is home to some of the industry's biggest companies, including Dassault, EADS Sogerma, Snecma, Thales and SNPE. Dassault Falcon private jets are built there, as are the Rafale and Mirage 2000 military aircraft, the Airbus A380 cockpit, the boosters of Ariane 5, and the M51 submarine-launched ballistic missile.
Tourism, especially wine tourism, is a major industry; Globelink.co.uk named Bordeaux the best tourist destination in Europe in 2015. Gourmet Touring is a tourism company operating in the Bordeaux wine region.
Access to the port from the Atlantic is via the Gironde estuary. Almost nine million tonnes of goods arrive and leave each year.
Bordeaux is home both to indigenous local companies and to companies that have a major presence in the city without necessarily being headquartered there.
In January 2020, there were 259,809 inhabitants in the city proper (commune) of Bordeaux. The commune (including Caudéran, which was annexed by Bordeaux in 1965) had its largest population of 284,494 at the 1954 census. The majority of the population is French, but there are sizable groups of Italians, Spaniards (up to 20% of the Bordeaux population claim some degree of Spanish heritage), Portuguese, Turks and Germans.
The built-up area has grown for more than a century beyond the municipal borders of Bordeaux due to the small size of the commune (49 km² (19 sq mi)) and urban sprawl, so that by January 2020 there were 1,376,375 people living in the overall 6,316 km² (2,439 sq mi) metropolitan area (aire d'attraction) of Bordeaux, only a fifth of whom lived in the city proper.
The Mayor of the city is the environmentalist Pierre Hurmic.
Bordeaux is the capital of five cantons and the Prefecture of the Gironde and Aquitaine.
The town is divided into three districts, the first three of the Gironde. The headquarters of the Urban Community of Bordeaux is located in the Mériadeck neighbourhood, and the city is the seat of the Chamber of Commerce and Industry that bears its name.
Because the number of inhabitants of Bordeaux is greater than 250,000 and less than 299,999, the number of municipal councillors is 65.
Since the Liberation (1944), there have been six mayors of Bordeaux.
At the 2007 presidential election, the Bordelais gave 31.37% of their votes to Ségolène Royal of the Socialist Party against 30.84% to Nicolas Sarkozy, president of the UMP. Then came François Bayrou with 22.01%, followed by Jean-Marie Le Pen who recorded 5.42%. None of the other candidates exceeded the 5% mark. Nationally, Nicolas Sarkozy led with 31.18%, then Ségolène Royal with 25.87%, followed by François Bayrou with 18.57%. After these came Jean-Marie Le Pen with 10.44%, none of the other candidates exceeded the 5% mark. In the second round, the city of Bordeaux gave Ségolène Royal 52.44% against 47.56% for Nicolas Sarkozy, the latter being elected President of the Republic with 53.06% against 46.94% for Ségolène Royal. The abstention rates for Bordeaux were 14.52% in the first round and 15.90% in the second round.
In the parliamentary elections of 2007, the left won eight constituencies against only three for the right; after the 2008 by-elections, the eighth district of Gironde also switched to the left, bringing the count to nine. In Bordeaux, the left held a majority for the first time in its history, winning two of the city's three constituencies. In the first constituency of the Gironde, the outgoing UMP MP Chantal Bourragué led the first round with 44.81% against 25.39% for the Socialist candidate Beatrice Desaigues, and was re-elected in the second round with 54.45% against 45.55% for her Socialist opponent. In the second constituency of the Gironde, the UMP mayor and newly appointed Minister of Ecology, Energy, Sustainable Development and the Sea, Alain Juppé, faced the PS general councillor Michèle Delaunay. In the first round, Alain Juppé was well ahead with 43.73% against 31.36% for Michèle Delaunay, but in the second round Michèle Delaunay won the election with 50.93% of the votes against 49.07% for Alain Juppé, a margin of only 670 votes. The defeat in the so-called "mayor's constituency" showed that Bordeaux was swinging increasingly to the left. Finally, in the third constituency of the Gironde, Noël Mamère led the first round with 39.82% against 28.42% for the UMP candidate Elizabeth Vine, and was re-elected in the second round with 62.82% against 37.18% for his right-wing rival.
The 2008 municipal elections saw a clash between the mayor of Bordeaux, Alain Juppé, and the Socialist president of the Regional Council of Aquitaine, Alain Rousset. The PS had put up a Socialist heavyweight in the Gironde and placed great hopes in this election after the victories of Ségolène Royal in Bordeaux and of Michèle Delaunay in 2007. However, after a rather lively campaign, Alain Juppé was decisively elected in the first round with 56.62%, far ahead of Alain Rousset, who managed 34.14%. Of the eight cantons of Bordeaux, five are currently held by the PS and three by the UMP, with the left each time eating a little further into the right's share.
In the European elections of 2009, Bordeaux voters largely backed the UMP candidate Dominique Baudis, who won 31.54% against 15.00% for the PS candidate Kader Arif. The Europe Ecology candidate José Bové came second with 22.34%; none of the other candidates reached the 10% mark. As in previous years, the 2009 European elections were contested in eight large constituencies; Bordeaux is located in the "South-West" constituency, where the results were as follows:
UMP candidate Dominique Baudis: 26.89%, his party gaining four seats; PS candidate Kader Arif: 17.79%, gaining two seats in the European Parliament; Europe Ecology candidate José Bové: 15.83%, obtaining two seats; MoDem candidate Robert Rochefort: 8.61%, winning a seat; Left Front candidate Jean-Luc Mélenchon: 8.16%, gaining the last seat. At the regional elections of 2010, the incumbent Socialist president Alain Rousset won the first round with 35.19% in Bordeaux, although this score was lower than his results across the Gironde and Aquitaine as a whole. Xavier Darcos, Minister of Labour, followed with 28.40% of the votes, scoring above his regional and departmental averages. Then came the Green candidate Monique De Marco with 13.40%, followed by the MoDem candidate Jean Lassalle, deputy for the Pyrénées-Atlantiques, who registered a low 6.78% while still qualifying for the second round across Aquitaine as a whole, closely followed by the National Front candidate Jacques Colombier with 6.48%. Finally came the Left Front candidate Gérard Boulanger with 5.64%; no other candidate passed the 5% mark. In the second round, Alain Rousset won in a landslide, rising to 55.83%. Although Xavier Darcos clearly lost the election, he nevertheless achieved a score above his regional and departmental averages with 33.40%. Jean Lassalle, who had qualified for the second round, passed the 10% mark with 10.77%. The ballot was marked by abstention of 55.51% in the first round and 53.59% in the second round.
Only candidates obtaining more than 5% are listed.
Bordeaux voted for Emmanuel Macron in the 2017 presidential election. In the 2017 parliamentary election, La République En Marche! won most of the constituencies in Bordeaux.
In the 2020 municipal election, after 73 years of right-of-centre rule, the ecologist Pierre Hurmic (EELV) came in ahead of Nicolas Florian (LR/LaREM).
The city area is represented by the following constituencies: Gironde's 1st, Gironde's 2nd, Gironde's 3rd, Gironde's 4th, Gironde's 5th, Gironde's 6th, Gironde's 7th.
During Antiquity, a first university was created by the Romans in 286. The city was an important administrative centre, and the new university was intended to train administrators. Only rhetoric and grammar were taught. Ausonius and Sulpicius Severus were two of the teachers.
In 1441, when Bordeaux was an English town, Pope Eugene IV created a university at the request of the archbishop Pey Berland. In 1793, during the French Revolution, the National Convention abolished the university and replaced it in 1796 with the École centrale, which in Bordeaux was located in the former buildings of the Collège de Guyenne. In 1808, the university was re-established under Napoleon. Bordeaux now accommodates approximately 70,000 students on one of the largest campuses in Europe (235 ha).
Bordeaux has numerous public and private schools offering undergraduate and postgraduate programs, including engineering schools, business and management schools, and other specialised institutions.
The École Complémentaire Japonaise de Bordeaux (ボルドー日本語補習授業校, Borudō Nihongo Hoshū Jugyō Kō), a part-time Japanese supplementary school, is held in the Salle de L'Athénée Municipal in Bordeaux.
Bordeaux is classified "City of Art and History". The city is home to 362 monuments historiques (only Paris has more in France) with some buildings dating back to Roman times. Bordeaux, Port of the Moon, has been inscribed on UNESCO World Heritage List as "an outstanding urban and architectural ensemble".
Bordeaux is home to one of Europe's biggest 18th-century architectural urban areas, making it a sought-after destination for tourists and cinema production crews. It stands out as one of the first French cities, after Nancy, to have entered an era of urbanism and large-scale metropolitan projects, led by the father-and-son team of Gabriel architects to King Louis XV, under the supervision of two intendants (governors), first Nicolas-François Dupré de Saint-Maur and then the Marquis de Tourny.
Saint-André Cathedral, Saint-Michel Basilica and Saint-Seurin Basilica are part of the World Heritage Sites of the Routes of Santiago de Compostela in France. The organ in Saint-Louis-des-Chartrons is registered on the French monuments historiques.
Slavery contributed to the growth of the city. Firstly, during the 18th and 19th centuries, Bordeaux was an important slave port, from which some 500 slave expeditions by Bordeaux shipowners caused the deportation of 150,000 Africans. Secondly, even though the "triangular trade" represented only 5% of Bordeaux's wealth, the city's direct trade with the Caribbean, which accounted for the other 95%, concerned colonial goods produced by slaves (sugar, coffee, cocoa). And thirdly, in the same period, a major migratory movement of Aquitanians took place to the Caribbean colonies, with Saint-Domingue (now Haiti) being the most popular destination. 40% of the white population of the island came from Aquitaine. They prospered on plantation incomes until the first slave revolts, which culminated in 1848 in the final abolition of slavery in France.
A statue of Modeste Testas, an Ethiopian woman who was enslaved by the Bordeaux-based Testas brothers, was unveiled in 2019. She was trafficked by them from West Africa to Philadelphia (where one of the brothers coerced her into bearing two children by him) and was ultimately freed and lived in Haiti. The bronze sculpture was created by the Haitian artist Woodly Caymitte.
A number of traces and memorial sites are visible in the city. Moreover, in May 2009, the Museum of Aquitaine opened a space dedicated to "Bordeaux in the 18th century, trans-Atlantic trading and slavery". This work, richly illustrated with original documents, helps disseminate the state of knowledge on this question, presenting above all the facts and their chronology.
The region of Bordeaux was also the land of several prominent abolitionists, such as Montesquieu, Laffon de Ladébat and Elisée Reclus. Others were members of the Society of the Friends of the Blacks, such as the revolutionaries Boyer-Fonfrède, Gensonné, Guadet and Ducos.
Europe's longest-span vertical-lift bridge, the Pont Jacques Chaban-Delmas, was opened in 2013 in Bordeaux, spanning the River Garonne. The central lift span is 117 metres (384 ft) long, weighs 4,600 tons and can be lifted vertically up to 53 metres (174 feet) to let tall ships pass underneath. The €160 million bridge was inaugurated by President François Hollande and Mayor Alain Juppé on 16 March 2013. The bridge was named after Jacques Chaban-Delmas, a former Prime Minister and longtime mayor of Bordeaux.
Bordeaux has many shopping options. In the heart of Bordeaux is Rue Sainte-Catherine. This pedestrian-only shopping street has 1.2 kilometers (0.75 mi) of shops, restaurants and cafés; it is also one of the longest shopping streets in Europe. Rue Sainte-Catherine starts at Place de la Victoire and ends at Place de la Comédie by the Grand Théâtre. The shops become progressively more upmarket as one moves towards Place de la Comédie and the nearby Cours de l'Intendance is where one finds the more exclusive shops and boutiques.
Bordeaux is also the first city in France to have created, in the 1980s, an architecture exhibition and research centre, Arc en rêve. Bordeaux offers a large number of cinemas, theatres, and is the home of the Opéra national de Bordeaux. There are many music venues of varying capacity. The city also offers several festivals throughout the year. In October 2021, Bordeaux was shortlisted for the European Commission's 2022 European Capital of Smart Tourism award along with Copenhagen, Dublin, Florence, Ljubljana, Palma de Mallorca and Valencia.
Bordeaux is an important road and motorway junction. The city is connected to Paris by the A10 motorway, to Lyon by the A89, to Toulouse by the A62, and to Spain by the A63. There is a 45 km (28 mi) ring road called the "Rocade" which is often very busy. Another ring road is under consideration.
Bordeaux has five road bridges that cross the Garonne: the Pont de pierre, built in the 1820s, and three modern bridges built after 1960: the Pont Saint Jean, just south of the Pont de pierre (both located downtown); the Pont d'Aquitaine, a suspension bridge downstream from downtown; and the Pont François Mitterrand, located upstream of downtown. The latter two bridges are part of the ring road around Bordeaux. A fifth bridge, the Pont Jacques-Chaban-Delmas, was constructed in 2009–2012 and opened to traffic in March 2013. Located halfway between the Pont de pierre and the Pont d'Aquitaine and serving downtown rather than highway traffic, it is a vertical-lift bridge with a height in the closed position comparable to that of the Pont de pierre, and comparable to the Pont d'Aquitaine when open. All five road bridges, including the two highway bridges, are open to cyclists and pedestrians as well. Another bridge, the Pont Jean-Jacques Bosc, is to be built in 2018.
Lacking any steep hills, Bordeaux is relatively friendly to cyclists. Cycle paths (separate from the roadways) exist on the highway bridges, along the riverfront, on the university campuses, and incidentally elsewhere in the city. Cycle lanes and bus lanes that explicitly allow cyclists exist on many of the city's boulevards. A paid bicycle-sharing system with automated stations was established in 2010.
The main railway station, Gare de Bordeaux Saint-Jean, near the center of the city, handles 12 million passengers a year. It is served by the French national (SNCF) railway's high-speed train, the TGV, which reaches Paris in two hours, with connections to major European centers such as Lille, Brussels, Amsterdam, Cologne, Geneva and London. The TGV also serves Toulouse and Irun (Spain) from Bordeaux. A regular train service is provided to Nantes, Nice, Marseille and Lyon. The Gare Saint-Jean is the major hub for regional trains (TER) operated by the SNCF to Arcachon, Limoges, Agen, Périgueux, Langon, Pau, Le Médoc, Angoulême and Bayonne.
Historically the train line terminated at a station on the right bank of the river Garonne near the Pont de pierre, and passengers crossed the bridge to get into the city. Subsequently, a double-track steel railway bridge was constructed in the 1850s by Gustave Eiffel to bring trains across the river directly into the Gare de Bordeaux Saint-Jean. The old station was later converted and by 2010 comprised a cinema and restaurants.
The two-track Eiffel bridge with a speed limit of 30 km/h (19 mph) became a bottleneck and a new bridge was built, opening in 2009. The new bridge has four tracks and allows trains to pass at 60 km/h (37 mph). During the planning there was much lobbying by the Eiffel family and other supporters to preserve the old bridge as a footbridge across the Garonne, with possibly a museum to document the history of the bridge and Gustave Eiffel's contribution. The decision was taken to save the bridge, but by early 2010 no plans had been announced as to its future use. The bridge remains intact, but unused and without any means of access.
Since July 2017, the LGV Sud Europe Atlantique has been fully operational, putting Bordeaux 2 hours and 4 minutes from Paris.
Bordeaux is served by Bordeaux–Mérignac Airport, located 8 km (5.0 mi) from the city centre in the suburban city of Mérignac.
Bordeaux has an important public transport system called Transports Bordeaux Métropole (TBM), run by the Keolis group.
This network is operated from 5 am to 2 am.
There had been several plans for a subway network, but they stalled for both geological and financial reasons. Work on the Tramway de Bordeaux system was started in the autumn of 2000, and services started in December 2003 connecting Bordeaux with its suburban areas. The tram system uses Alstom APS, a form of ground-level power supply technology developed by the French company Alstom and designed to preserve the aesthetic environment by eliminating overhead cables in the historic city. Conventional overhead cables are used outside the city. The system was controversial for its considerable cost of installation and maintenance, and also for the numerous initial technical problems that paralysed the network. Many streets and squares along the tramway route became pedestrian areas, with limited access for cars.
A planned extension of the Bordeaux tramway system is to link the airport with the city centre towards the end of 2019.
There are more than 400 taxicabs in Bordeaux.
The average amount of time people spend commuting with public transit in Bordeaux, for example to and from work, on a weekday is 51 minutes; 12% of public transit riders ride for more than 2 hours every day. The average amount of time people wait at a stop or station for public transit is 13 minutes, while 15.5% of riders wait for over 20 minutes on average every day. The average distance people usually ride in a single trip with public transit is 7 km (4.3 mi), while 8% travel over 12 km (7.5 mi) in a single direction.
The 41,458-capacity Nouveau Stade de Bordeaux is the largest stadium in Bordeaux. The stadium was opened in 2015 and replaced the Stade Chaban-Delmas, which was a venue for the FIFA World Cup in 1938 and 1998, as well as the 2007 Rugby World Cup. In the 1938 FIFA World Cup, it hosted a violent quarter-final known as the Battle of Bordeaux. The ground was formerly known as the Stade du Parc Lescure until 2001, when it was renamed in honour of the city's long-time mayor, Jacques Chaban-Delmas.
There are two major sports teams in Bordeaux: Girondins de Bordeaux, the football team, playing in Ligue 2, the second tier of French football, and Union Bordeaux Bègles, a rugby team in the Top 14 of the Ligue Nationale de Rugby. Skateboarding, rollerblading, and BMX biking are activities enjoyed by many young inhabitants of the city. Bordeaux is home to a quay which runs along the Garonne river. On the quay there is a skate-park divided into three sections: one for vert tricks, one for street-style tricks, and one, with easier features and softer materials, for younger action sports athletes. The skate-park is well maintained by the municipality.
Bordeaux is also home to one of the strongest cricket teams in France, champions of the South West League.
There is a 250 m (820 ft) wooden velodrome, Vélodrome du Lac, in Bordeaux which hosts international cycling competition in the form of UCI Track Cycling World Cup events.
The 2015 Trophée Éric Bompard was held in Bordeaux, but the free skate was cancelled in all divisions due to the November 2015 Paris attacks; the short program had taken place hours before the attacks. Among the French skaters, Chafik Besseghier (68.36) stood in tenth place and Romain Ponsart (62.86) in 11th in the men's event, Maé-Bérénice Méité (46.82) in 11th and Laurine Lecavelier (46.53) in 12th in the women's event, and Vanessa James/Morgan Ciprès (65.75) in second place in pairs.
Between 1951 and 1955, an annual Formula 1 motor race was held on a 2.5-kilometre circuit which looped around the Esplanade des Quinconces and along the waterfront, attracting drivers such as Juan Manuel Fangio, Stirling Moss, Jean Behra and Maurice Trintignant.
Bordeaux is twinned with a number of cities around the world. | [
{
"paragraph_id": 0,
"text": "Bordeaux (/bɔːrˈdoʊ/ bor-DOH, French: [bɔʁdo] ; Gascon Occitan: Bordèu [buɾˈðɛw]; Basque: Bordele) is a city on the river Garonne in the Gironde department, southwestern France. A port city, it is the capital of the Nouvelle-Aquitaine region, as well as the prefecture of the Gironde department. Its inhabitants are called \"Bordelais\" (masculine) or \"Bordelaises\" (feminine). The term \"Bordelais\" may also refer to the city and its surrounding region.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The city of Bordeaux proper had a population of 259,809 in 2020 within its small municipal territory of 49 km (19 sq mi), but together with its suburbs and exurbs the Bordeaux metropolitan area had a population of 1,376,375 that same year (Jan. 2020 census), the sixth-most populated in France after Paris, Lyon, Marseille, Lille, and Toulouse.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Bordeaux and 27 suburban municipalities form the Bordeaux Metropolis, an indirectly elected metropolitan authority now in charge of wider metropolitan issues. The Bordeaux Metropolis, with a population of 819,604 at the January 2020 census, is the fifth most populated metropolitan council in France after those of Paris, Marseille, Lyon and Lille.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Bordeaux is a world capital of wine: many châteaux and vineyards stand on the hillsides of the Gironde, and the city is home to the world's main wine fair, Vinexpo. Bordeaux is also one of the centers of gastronomy and business tourism for the organization of international congresses. It is a central and strategic hub for the aeronautics, military and space sector, home to international companies such as Dassault Aviation, Ariane Group, Safran and Thalès. The link with aviation dates back to 1910, the year the first airplane flew over the city. A crossroads of knowledge through university research, it is home to one of the only two megajoule lasers in the world, as well as a university population of more than 130,000 students within the Bordeaux Metropolis.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Bordeaux is an international tourist destination for its architectural and cultural heritage with more than 350 historic monuments, making it, after Paris, the city with the most listed or registered monuments in France. The \"Pearl of Aquitaine\" has been voted European Destination of the year in a 2015 online poll. The metropolis has also received awards and rankings by international organizations such as in 1957, Bordeaux was awarded the Europe Prize for its efforts in transmitting the European ideal. In June 2007, the Port of the Moon in historic Bordeaux was inscribed on the UNESCO World Heritage List, for its outstanding architecture and urban ensemble and in recognition of Bordeaux's international importance over the last 2000 years. Bordeaux is also ranked as a Sufficiency city by the Globalization and World Cities Research Network.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Roman Republic c. 60–27 BC Roman Empire 27 BC–AD 395 Gallic Empire 260–274 Western Roman Empire 395–418 Visigothic Kingdom 395–6th century Francia 6th century–843 West Francia 843–987 Kingdom of France 987–1154 Angevin Empire 1154–1214 Kingdom of England 1214–1453 Kingdom of France 1453–1792 French First Republic 1792–1804 First French Empire 1804–1814 Kingdom of France 1814–1815 First French Empire 1815 Kingdom of France 1815–1830 July Monarchy 1830–1848 French Second Republic 1848–1852 Second French Empire 1852–1870 French Third Republic 1870–1940 Military Administration in France 1940–1944 ∟ part of German-occupied Europe from 1940 to 1944 Provisional Government of the French Republic 1944–1946 French Fourth Republic 1946–1958 French Fifth Republic 1958–present",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Around 300 BC, the region was the settlement of a Celtic tribe, the Bituriges Vivisci, named the town Burdigala, probably of Aquitanian origin.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In 107 BC, the Battle of Burdigala was fought by the Romans who were defending the Allobroges, a Gallic tribe allied to Rome, and the Tigurini led by Divico. The Romans were defeated and their commander, the consul Lucius Cassius Longinus, was killed in battle.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The city came under Roman rule around 60 BC, and it became an important commercial centre for tin and lead. During this period were built the amphitheatre and the monument Les Piliers de Tutelle.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In 276, it was sacked by the Vandals. The Vandals attacked again in 409, followed by the Visigoths in 414, and the Franks in 498, and afterwards the city fell into a period of relative obscurity.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In the late sixth century the city re-emerged as the seat of a county and an archdiocese within the Merovingian kingdom of the Franks, but royal Frankish power was never strong. The city started to play a regional role as a major urban center on the fringes of the newly founded Frankish Duchy of Vasconia. Around 585 Gallactorius was made Count of Bordeaux and fought the Basques.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In 732, the city was plundered by the troops of Abd er Rahman who stormed the fortifications and overwhelmed the Aquitanian garrison. Duke Eudes mustered a force to engage the Umayyads, eventually engaging them in the Battle of the River Garonne somewhere near the river Dordogne. The battle had a high death toll, and although Eudes was defeated he had enough troops to engage in the Battle of Poitiers and so retain his grip on Aquitaine.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In 737, following his father Eudes's death, the Aquitanian duke Hunald led a rebellion to which Charles responded by launching an expedition that captured Bordeaux. However, it was not retained for long, during the following year the Frankish commander clashed in battle with the Aquitanians but then left to take on hostile Burgundian authorities and magnates. In 745 Aquitaine faced another expedition where Charles's sons Pepin and Carloman challenged Hunald's power and defeated him. Hunald's son Waifer replaced him and confirmed Bordeaux as the capital city (along with Bourges in the north).",
"title": "History"
},
{
"paragraph_id": 13,
"text": "During the last stage of the war against Aquitaine (760–768), it was one of Waifer's last important strongholds to fall to the troops of King Pepin the Short. Charlemagne built the fortress of Fronsac (Frontiacus, Franciacus) near Bordeaux on a hill across the border with the Basques (Wascones), where Basque commanders came and pledged their loyalty (769).",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In 778, Seguin (or Sihimin) was appointed count of Bordeaux, probably undermining the power of the Duke Lupo, and possibly leading to the Battle of Roncevaux Pass. In 814, Seguin was made Duke of Vasconia, but was deposed in 816 for failing to suppress a Basque rebellion. Under the Carolingians, sometimes the Counts of Bordeaux held the title concomitantly with that of Duke of Vasconia. They were to keep the Basques in check and defend the mouth of the Garonne from the Vikings when they appeared in c. 844. In Autumn 845, the Vikings were raiding Bordeaux and Saintes, count Seguin II marched on them but was captured and executed.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Although the port of Bordeaux was a buzzing trade center, the stability and success of the city was threatened by Viking and Norman incursions and political instability. The restoration of the Ramnulfid Dukes of Aquitaine under William IV and his successors (known as the House of Poitiers) brought continuity of government.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "From the 12th to the 15th century, Bordeaux flourished once more following the marriage of Eléonore, Duchess of Aquitaine and the last of the House of Poitiers, to Henry II Plantagenêt, Count of Anjou and the grandson of Henry I of England, who succeeded to the English crown months after their wedding, bringing into being the vast Angevin Empire, which stretched from the Pyrenees to Ireland. After granting a tax-free trade status with England, Henry was adored by the locals as they could be even more profitable in the wine trade, their main source of income, and the city benefited from imports of cloth and wheat. The belfry (Grosse Cloche) and city cathedral St-André were built, the latter in 1227, incorporating the artisan quarter of Saint-Paul. Under the terms of the Treaty of Brétigny it became briefly the capital of an independent state (1362–1372) under Edward, the Black Prince, but after the Battle of Castillon (1453) it was annexed by France.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In 1462, Bordeaux created a local parliament.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Bordeaux adhered to the Fronde, being effectively annexed to the Kingdom of France only in 1653, when the army of Louis XIV entered the city.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The 18th century saw another golden age of Bordeaux. The Port of the Moon supplied the majority of Europe with coffee, cocoa, sugar, cotton and indigo, becoming France's busiest port and the second busiest port in the world after London. Many downtown buildings (about 5,000), including those on the quays, are from this period.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Bordeaux was also a major trading centre for slaves. In total, the Bordeaux shipowners deported 150,000 Africans in some 500 expeditions.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "At the beginning of the French Revolution (1789), many local revolutionaries were members of the Girondists. This Party represented the provincial bourgeoisie, favorable towards abolishing aristocracy privileges, but opposed to the Revolution's social dimension. In 1793, the Montagnards led by Robespierre and Marat came to power. Fearing a bourgeois misappropriation of the Revolution, they executed a great number of Girondists. During the purge, the local Montagnard Section renamed the city of Bordeaux \"Commune-Franklin\" (Franklin-municipality) in homage to Benjamin Franklin.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "At the same time, in 1791, a slave revolt broke out at Saint-Domingue (current Haiti), the most profitable of the French colonies. Three years later, the Montagnard Convention abolished slavery. In 1802, Napoleon revoked the manumission law but lost the war against the army of former slaves. In 1804, Haiti became independent. The loss of this \"Pearl\" of the West Indies generated the collapse of Bordeaux's port economy, which was dependent on the colonial trade and trade in slaves.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Towards the end of the Peninsular War of 1814, the Duke of Wellington sent William Beresford with two divisions and seized Bordeaux, encountering little resistance. Bordeaux was largely anti-Bonapartist and the majority supported the Bourbons. The British troops were treated as liberators.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "From the Bourbon Restoration, the economy of Bordeaux was rebuilt by traders and shipowners. They engaged to construct the first bridge of Bordeaux, and customs warehouses. The shipping traffic grew through the new African colonies.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Georges-Eugène Haussmann, a longtime prefect of Bordeaux, used Bordeaux's 18th-century large-scale rebuilding as a model when he was asked by Emperor Napoleon III to transform the quasi-medieval Paris into a \"modern\" capital that would make France proud. Victor Hugo found the town so beautiful he said: \"Take Versailles, add Antwerp, and you have Bordeaux\".",
"title": "History"
},
{
"paragraph_id": 26,
"text": "In 1870, at the beginning of the Franco-Prussian war against Prussia, the French government temporarily relocated to Bordeaux from Paris. That recurred during World War I and again very briefly during World War II, when it became clear that Paris would fall into German hands.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "During World War II, Bordeaux fell under German occupation.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In May and June 1940, Bordeaux was the site of the life-saving actions of the Portuguese consul-general, Aristides de Sousa Mendes, who illegally granted thousands of Portuguese visas, which were needed to pass the Spanish border, to refugees fleeing the German occupation.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "From 1941 to 1943, the Italian Royal Navy established BETASOM, a submarine base at Bordeaux. Italian submarines participated in the Battle of the Atlantic from that base, which was also a major base for German U-boats as headquarters of 12th U-boat Flotilla. The massive, reinforced concrete U-boat pens have proved impractical to demolish and are now partly used as a cultural center for exhibitions.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In 2007, 40% of the city surface area, located around the Port of the Moon, was listed as World heritage sites. Unesco inscribed Bordeaux as \"an inhabited historic city, an outstanding urban and architectural ensemble, created in the age of the Enlightenment, whose values continued up to the first half of the 20th century, with more protected buildings than any other French city except Paris\".",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Bordeaux is located close to the European Atlantic coast, in the southwest of France and in the north of the Aquitaine region. It is around 500 km (310 mi) southwest of Paris. The city is built on a bend of the river Garonne, and is divided into two parts: the right bank to the east and left bank in the west. Historically the left bank is more developed because when flowing outside the bend, the water makes a furrow of the required depth to allow the passing of merchant ships, which used to offload on this side of the river. But, today, the right bank is developing, including new urban projects. In Bordeaux, the Garonne River is accessible to ocean liners through the Gironde estuary. The right bank of the Garonne is a low-lying, often marshy plain.",
"title": "Geography"
},
{
"paragraph_id": 32,
"text": "Bordeaux's climate can be classified as oceanic (Köppen climate classification Cfb), bordering on a humid subtropical climate (Cfa). However, the Trewartha climate classification system classifies the city as solely humid subtropical, due to a recent rise in temperatures related - to some degree or another - to climate change and the city's urban heat island.",
"title": "Geography"
},
{
"paragraph_id": 33,
"text": "The city enjoys cool to mild, wet winters, due to its relatively southerly latitude, and the prevalence of mild, westerly winds from the Atlantic. Its summers are warm and somewhat drier, although wet enough to avoid a Mediterranean classification. Frosts occur annually, but snowfall is quite infrequent, occurring for no more than 3-4 days a year. The summer of 2003 set a record with an average temperature of 23.3 °C (73.9 °F), while February 1956 was the coldest month on record with an average temperature of −2.00 °C at Bordeaux Mérignac-Airport.",
"title": "Geography"
},
{
"paragraph_id": 34,
"text": "Bordeaux is a major centre for business in France as it has the sixth largest metropolitan population in France. It serves as a major regional center for trade, administration, services and industry.",
"title": "Economy"
},
{
"paragraph_id": 35,
"text": "The vine was introduced to the Bordeaux region by the Romans, probably in the mid-first century, to provide wine for local consumption, and wine production has been continuous in the region since.",
"title": "Economy"
},
{
"paragraph_id": 36,
"text": "Bordeaux wine growing area has about 116,160 hectares (287,000 acres) of vineyards, 57 appellations, 10,000 wine-producing estates (châteaux) and 13,000 grape growers. With an annual production of approximately 960 million bottles, the Bordeaux area produces large quantities of everyday wine as well as some of the most expensive wines in the world. Included among the latter are the area's five premier cru (First Growth) red wines (four from Médoc and one, Château Haut-Brion, from Graves), established by the Bordeaux Wine Official Classification of 1855:",
"title": "Economy"
},
{
"paragraph_id": 37,
"text": "Both red and white wines are made in the Bordeaux region. Red Bordeaux wine is called claret in the United Kingdom. Red wines are generally made from a blend of grapes, and may be made from Cabernet Sauvignon, Merlot, Cabernet Franc, Petit verdot, Malbec, and, less commonly in recent years, Carménère.",
"title": "Economy"
},
{
"paragraph_id": 38,
"text": "White Bordeaux is made from Sauvignon blanc, Sémillon, and Muscadelle. Sauternes is a sub-region of Graves known for its intensely sweet, white, dessert wines such as Château d'Yquem.",
"title": "Economy"
},
{
"paragraph_id": 39,
"text": "Because of a wine glut (wine lake) in the generic production, the price squeeze induced by an increasingly strong international competition, and vine pull schemes, the number of growers has recently dropped from 14,000 and the area under vine has also decreased significantly. In the meantime, the global demand for first growths and the most famous labels markedly increased and their prices skyrocketed.",
"title": "Economy"
},
{
"paragraph_id": 40,
"text": "The Cité du Vin, a museum as well as a place of exhibitions, shows, movie projections and academic seminars on the theme of wine opened its doors in June 2016.",
"title": "Economy"
},
{
"paragraph_id": 41,
"text": "The Laser Mégajoule will be one of the most powerful lasers in the world, allowing fundamental research and the development of the laser and plasma technologies.",
"title": "Economy"
},
{
"paragraph_id": 42,
"text": "Some 20,000 people work for the aeronautic industry in Bordeaux. The city has some of the biggest companies including Dassault, EADS Sogerma, Snecma, Thales, SNPE, and others. The Dassault Falcon private jets are built there as well as the military aircraft Rafale and Mirage 2000, the Airbus A380 cockpit, the boosters of Ariane 5, and the M51 SLBM missile.",
"title": "Economy"
},
{
"paragraph_id": 43,
"text": "Tourism, especially wine tourism, is a major industry. Globelink.co.uk mentioned Bordeaux as the best tourist destination in Europe in 2015. Gourmet Touring is a tourism company operating in the Bordeaux wine region.",
"title": "Economy"
},
{
"paragraph_id": 44,
"text": "Access to the port from the Atlantic is via the Gironde estuary. Almost nine million tonnes of goods arrive and leave each year.",
"title": "Economy"
},
{
"paragraph_id": 45,
"text": "This list includes indigenous Bordeaux-based companies and companies that have major presence in Bordeaux, but are not necessarily headquartered there.",
"title": "Economy"
},
{
"paragraph_id": 46,
"text": "In January 2020, there were 259,809 inhabitants in the city proper (commune) of Bordeaux. The commune (including Caudéran which was annexed by Bordeaux in 1965) had its largest population of 284,494 at the 1954 census. The majority of the population is French, but there are sizable groups of Italians, Spaniards (Up to 20% of the Bordeaux population claim some degree of Spanish heritage), Portuguese, Turks, Germans.",
"title": "Population"
},
{
"paragraph_id": 47,
"text": "The built-up area has grown for more than a century beyond the municipal borders of Bordeaux due to the small size of the commune (49 km (19 sq mi)) and urban sprawl, so that by January 2020 there were 1,376,375 people living in the overall 6,316 km (2,439 sq mi) metropolitan area (aire d'attraction) of Bordeaux, only a fifth of whom lived in the city proper.",
"title": "Population"
},
{
"paragraph_id": 48,
"text": "The Mayor of the city is the environmentalist Pierre Hurmic.",
"title": "Politics"
},
{
"paragraph_id": 49,
"text": "Bordeaux is the capital of five cantons and the Prefecture of the Gironde and Aquitaine.",
"title": "Politics"
},
{
"paragraph_id": 50,
"text": "The town is divided into three districts, the first three of Gironde. The headquarters of Urban Community of Bordeaux Mériadeck is located in the neighbourhood and the city is at the head of the Chamber of Commerce and Industry that bears his name.",
"title": "Politics"
},
{
"paragraph_id": 51,
"text": "The number of inhabitants of Bordeaux is greater than 250,000 and less than 299,999 so the number of municipal councilors is 65. They are divided according to the following composition:",
"title": "Politics"
},
{
"paragraph_id": 52,
"text": "Since the Liberation (1944), there have been six mayors of Bordeaux:",
"title": "Politics"
},
{
"paragraph_id": 53,
"text": "At the 2007 presidential election, the Bordelais gave 31.37% of their votes to Ségolène Royal of the Socialist Party against 30.84% to Nicolas Sarkozy, president of the UMP. Then came François Bayrou with 22.01%, followed by Jean-Marie Le Pen who recorded 5.42%. None of the other candidates exceeded the 5% mark. Nationally, Nicolas Sarkozy led with 31.18%, then Ségolène Royal with 25.87%, followed by François Bayrou with 18.57%. After these came Jean-Marie Le Pen with 10.44%, none of the other candidates exceeded the 5% mark. In the second round, the city of Bordeaux gave Ségolène Royal 52.44% against 47.56% for Nicolas Sarkozy, the latter being elected President of the Republic with 53.06% against 46.94% for Ségolène Royal. The abstention rates for Bordeaux were 14.52% in the first round and 15.90% in the second round.",
"title": "Politics"
},
{
"paragraph_id": 54,
"text": "In the parliamentary elections of 2007, the left won eight constituencies against only three for the right. It should be added that after the partial 2008 elections, the eighth district of Gironde switched to the left, bringing the count to nine. In Bordeaux, the left was for the first time in its history the majority as it held two of three constituencies following the elections. In the first division of the Gironde, the outgoing UMP MP Chantal Bourragué was well ahead with 44.81% against 25.39% for the Socialist candidate Beatrice Desaigues. In the second round, it was Chantal Bourragué who was re-elected with 54.45% against 45.55% for his socialist opponent. In the second district of Gironde the UMP mayor and all new Minister of Ecology, Energy, Sustainable Development and the Sea Alain Juppé confronted the General Counsel PS Michèle Delaunay. In the first round, Alain Juppé was well ahead with 43.73% against 31.36% for Michèle Delaunay. In the second round, it was finally Michèle Delaunay who won the election with 50.93% of the votes against 49.07% for Alain Juppé, the margin being only 670 votes. The defeat of the so-called constituency \"Mayor\" showed that Bordeaux was rocking increasingly left. Finally, in the third constituency of the Gironde, Noël Mamère was well ahead with 39.82% against 28.42% for the UMP candidate Elizabeth Vine. In the second round, Noël Mamère was re-elected with 62.82% against 37.18% for his right-wing rival.",
"title": "Politics"
},
{
"paragraph_id": 55,
"text": "In 2008 municipal elections saw the clash between mayor of Bordeaux, Alain Juppé and the President of the Regional Council of Aquitaine Socialist Alain Rousset. The PS had put up a Socialist heavyweight in the Gironde and had put great hopes in this election after the victory of Ségolène Royal and Michèle Delaunay in 2007. However, after a rather exciting campaign it was Alain Juppé who was widely elected in the first round with 56.62%, far ahead of Alain Rousset who has managed to get 34.14%. At present, of the eight cantons that has Bordeaux, five are held by the PS and three by the UMP, the left eating a little each time into the right's numbers.",
"title": "Politics"
},
{
"paragraph_id": 56,
"text": "In the European elections of 2009, Bordeaux voters largely voted for the UMP candidate Dominique Baudis, who won 31.54% against 15.00% for PS candidate Kader Arif. The candidate of Europe Ecology José Bové came second with 22.34%. None of the other candidates reached the 10% mark. The 2009 European elections were like the previous ones in eight constituencies. Bordeaux is located in the district \"Southwest\", here are the results:",
"title": "Politics"
},
{
"paragraph_id": 57,
"text": "UMP candidate Dominique Baudis: 26.89%. His party gained four seats. PS candidate Kader Arif: 17.79%, gaining two seats in the European Parliament. Europe Ecology candidate Bove: 15.83%, obtaining two seats. MoDem candidate Robert Rochefort: 8.61%, winning a seat. Left Front candidate Jean-Luc Mélenchon: 8.16%, gaining the last seat. At regional elections in 2010, the Socialist incumbent president Alain Rousset won the first round by totaling 35.19% in Bordeaux, but this score was lower than the plan for Gironde and Aquitaine. Xavier Darcos, Minister of Labour followed with 28.40% of the votes, scoring above the regional and departmental average. Then came Monique De Marco, Green candidate with 13.40%, followed by the member of Pyrenees-Atlantiques and candidate of the MoDem Jean Lassalle who registered a low 6.78% while qualifying to the second round on the whole Aquitaine, closely followed by Jacques Colombier, candidate of the National Front, who gained 6.48%. Finally the candidate of the Left Front Gérard Boulanger with 5.64%, no other candidate above the 5% mark. In the second round, Alain Rousset had a tidal wave win as national totals rose to 55.83%. If Xavier Darcos largely lost the election, he nevertheless achieved a score above the regional and departmental average obtaining 33.40%. Jean Lassalle, who qualified for the second round, passed the 10% mark by totaling 10.77%. The ballot was marked by abstention amounting to 55.51% in the first round and 53.59% in the second round.",
"title": "Politics"
},
{
"paragraph_id": 58,
"text": "Only candidates obtaining more than 5% are listed",
"title": "Politics"
},
{
"paragraph_id": 59,
"text": "Bordeaux voted for Emmanuel Macron in the presidential election. In the 2017 parliamentary election, La République En Marche! won most of the constituencies in Bordeaux.",
"title": "Politics"
},
{
"paragraph_id": 60,
"text": "Bordeaux voted in the 2019 European Parliament election in France.",
"title": "Politics"
},
{
"paragraph_id": 61,
"text": "After 73 years of right-of-centre rule, the ecologist Pierre Hurmic (EELV) came in ahead of Nicolas Florian (LR/LaREM).",
"title": "Politics"
},
{
"paragraph_id": 62,
"text": "The city area is represented by the following constituencies: Gironde's 1st, Gironde's 2nd, Gironde's 3rd, Gironde's 4th, Gironde's 5th, Gironde's 6th, Gironde's 7th.",
"title": "Politics"
},
{
"paragraph_id": 63,
"text": "During Antiquity, a first university had been created by the Romans in 286. The city was an important administrative centre and the new university had to train administrators. Only rhetoric and grammar were taught. Ausonius and Sulpicius Severus were two of the teachers.",
"title": "Education"
},
{
"paragraph_id": 64,
"text": "In 1441, when Bordeaux was an English town, the Pope Eugene IV created a university by demand of the archbishop Pey Berland. In 1793, during the French Revolution, the National Convention abolished the university, and replace them with the École centrale in 1796. In Bordeaux, this one was located in the former buildings of the college of Guyenne. In 1808, the university reappeared with Napoleon. Bordeaux accommodates approximately 70,000 students on one of the largest campuses of Europe (235 ha).",
"title": "Education"
},
{
"paragraph_id": 65,
"text": "Bordeaux has numerous public and private schools offering undergraduate and postgraduate programs.",
"title": "Education"
},
{
"paragraph_id": 66,
"text": "Engineering schools:",
"title": "Education"
},
{
"paragraph_id": 67,
"text": "Business and management schools:",
"title": "Education"
},
{
"paragraph_id": 68,
"text": "Other:",
"title": "Education"
},
{
"paragraph_id": 69,
"text": "The École Compleméntaire Japonaise de Bordeaux (ボルドー日本語補習授業校, Borudō Nihongo Hoshū Jugyō Kō), a part-time Japanese supplementary school, is held in the Salle de L'Athénée Municipal in Bordeaux.",
"title": "Education"
},
{
"paragraph_id": 70,
"text": "Bordeaux is classified \"City of Art and History\". The city is home to 362 monuments historiques (only Paris has more in France) with some buildings dating back to Roman times. Bordeaux, Port of the Moon, has been inscribed on UNESCO World Heritage List as \"an outstanding urban and architectural ensemble\".",
"title": "Main sights"
},
{
"paragraph_id": 71,
"text": "Bordeaux is home to one of Europe's biggest 18th-century architectural urban areas, making it a sought-after destination for tourists and cinema production crews. It stands out as one of the first French cities, after Nancy, to have entered an era of urbanism and metropolitan big scale projects, with the team Gabriel father and son, architects for King Louis XV, under the supervision of two intendants (Governors), first Nicolas-François Dupré de Saint-Maur then the Marquis de Tourny.",
"title": "Main sights"
},
{
"paragraph_id": 72,
"text": "Saint-André Cathedral, Saint-Michel Basilica and Saint-Seurin Basilica are part of the World Heritage Sites of the Routes of Santiago de Compostela in France. The organ in Saint-Louis-des-Chartrons is registered on the French monuments historiques.",
"title": "Main sights"
},
{
"paragraph_id": 73,
"text": "Main sights include:",
"title": "Main sights"
},
{
"paragraph_id": 74,
"text": "Slavery was part of a growing drive for the city. Firstly, during the 18th and 19th centuries, Bordeaux was an important slave port, which saw some 500 slave expeditions that cause the deportation of 150,000 Africans by Bordeaux shipowners. Secondly, even though the \"Triangular trade\" represented only 5% of Bordeaux's wealth, the city's direct trade with the Caribbean, that accounted for the other 95%, concerns the colonial stuffs made by the slave (sugar, coffee, cocoa). And thirdly, in that same period, a major migratory movement by Aquitanians took place to the Caribbean colonies, with Saint-Domingue (now Haiti) being the most popular destination. 40% of the white population of the island came from Aquitaine. They prospered with plantations incomes, until the first slave revolts which concluded in 1848 in the final abolition of slavery in France.",
"title": "Main sights"
},
{
"paragraph_id": 75,
"text": "A statue of Modeste Testas, an Ethiopian woman who was enslaved by the Bordeaux-based Testas brothers was unveiled in 2019. She was trafficked by them from West Africa, to Philadelphia (where one of the brother coerced her to have two children by him) and was ultimately freed and lived in Haiti. The bronze sculpture was created by the Haitian artists Woodly Caymitte.",
"title": "Main sights"
},
{
"paragraph_id": 76,
"text": "A number of traces and memorial sites are visible in the city. Moreover, in May 2009, the Museum of Aquitaine opened the spaces dedicated to \"Bordeaux in the 18th century, trans-Atlantic trading and slavery\". This work, richly illustrated with original documents, contributes to disseminate the state of knowledge on this question, presenting above all the facts and their chronology.",
"title": "Main sights"
},
{
"paragraph_id": 77,
"text": "The region of Bordeaux was also the land of several prominent abolitionists, as Montesquieu, Laffon de Ladébat and Elisée Reclus. Others were members of the Society of the Friends of the Blacks as the revolutionaries Boyer-Fonfrède, Gensonné, Guadet and Ducos.",
"title": "Main sights"
},
{
"paragraph_id": 78,
"text": "Europe's longest-span vertical-lift bridge, the Pont Jacques Chaban-Delmas, was opened in 2013 in Bordeaux, spanning the River Garonne. The central lift span is 117-metre-long (384-foot), weighs 4,600 tons and can be lifted vertically up to 53 metres (174 feet) to let tall ships pass underneath. The €160 million bridge was inaugurated by President François Hollande and Mayor Alain Juppé on 16 March 2013. The bridge was named after the late Jacques Chaban-Delmas, who was a former Prime Minister and Mayor of Bordeaux.",
"title": "Main sights"
},
{
"paragraph_id": 79,
"text": "Bordeaux has many shopping options. In the heart of Bordeaux is Rue Sainte-Catherine. This pedestrian-only shopping street has 1.2 kilometers (0.75 mi) of shops, restaurants and cafés; it is also one of the longest shopping streets in Europe. Rue Sainte-Catherine starts at Place de la Victoire and ends at Place de la Comédie by the Grand Théâtre. The shops become progressively more upmarket as one moves towards Place de la Comédie and the nearby Cours de l'Intendance is where one finds the more exclusive shops and boutiques.",
"title": "Main sights"
},
{
"paragraph_id": 80,
"text": "Bordeaux is also the first city in France to have created, in the 1980s, an architecture exhibition and research centre, Arc en rêve. Bordeaux offers a large number of cinemas, theatres, and is the home of the Opéra national de Bordeaux. There are many music venues of varying capacity. The city also offers several festivals throughout the year. In October 2021, Bordeaux was shortlisted for the European Commission's 2022 European Capital of Smart Tourism award along with Copenhagen, Dublin, Florence, Ljubljana, Palma de Mallorca and Valencia.",
"title": "Main sights"
},
{
"paragraph_id": 81,
"text": "Bordeaux is an important road and motorway junction. The city is connected to Paris by the A10 motorway, with Lyon by the A89, with Toulouse by the A62, and with Spain by the A63. There is a 45 km (28 mi) ring road called the \"Rocade\" which is often very busy. Another ring road is under consideration.",
"title": "Transport"
},
{
"paragraph_id": 82,
"text": "Bordeaux has five road bridges that cross the Garonne, the Pont de pierre built in the 1820s and three modern bridges built after 1960: the Pont Saint Jean, just south of the Pont de pierre (both located downtown), the Pont d'Aquitaine, a suspension bridge downstream from downtown, and the Pont François Mitterrand, located upstream of downtown. These two bridges are part of the ring-road around Bordeaux. A fifth bridge, the Pont Jacques-Chaban-Delmas, was constructed in 2009–2012 and opened to traffic in March 2013. Located halfway between the Pont de pierre and the Pont d'Aquitaine and serving downtown rather than highway traffic, it is a vertical-lift bridge with a height in closed position comparable to that of Pont de pierre, and to the Pont d'Aquitaine when open. All five road bridges, including the two highway bridges, are open to cyclists and pedestrians as well. Another bridge, the Pont Jean-Jacques Bosc, is to be built in 2018.",
"title": "Transport"
},
{
"paragraph_id": 83,
"text": "Lacking any steep hills, Bordeaux is relatively friendly to cyclists. Cycle paths (separate from the roadways) exist on the highway bridges, along the riverfront, on the university campuses, and incidentally elsewhere in the city. Cycle lanes and bus lanes that explicitly allow cyclists exist on many of the city's boulevards. A paid bicycle-sharing system with automated stations was established in 2010.",
"title": "Transport"
},
{
"paragraph_id": 84,
"text": "The main railway station, Gare de Bordeaux Saint-Jean, near the center of the city, has 12 million passengers a year. It is served by the French national (SNCF) railway's high speed train, the TGV, that gets to Paris in two hours, with connections to major European centers such as Lille, Brussels, Amsterdam, Cologne, Geneva and London. The TGV also serves Toulouse and Irun (Spain) from Bordeaux. A regular train service is provided to Nantes, Nice, Marseille and Lyon. The Gare Saint-Jean is the major hub for regional trains (TER) operated by the SNCF to Arcachon, Limoges, Agen, Périgueux, Langon, Pau, Le Médoc, Angoulême and Bayonne.",
"title": "Transport"
},
{
"paragraph_id": 85,
"text": "Historically the train line used to terminate at a station on the right bank of the river Garonne near the Pont de Pierre, and passengers crossed the bridge to get into the city. Subsequently, a double-track steel railway bridge was constructed in the 1850s, by Gustave Eiffel, to bring trains across the river direct into Gare de Bordeaux Saint-Jean. The old station was later converted and in 2010 comprised a cinema and restaurants.",
"title": "Transport"
},
{
"paragraph_id": 86,
"text": "The two-track Eiffel bridge with a speed limit of 30 km/h (19 mph) became a bottleneck and a new bridge was built, opening in 2009. The new bridge has four tracks and allows trains to pass at 60 km/h (37 mph). During the planning there was much lobbying by the Eiffel family and other supporters to preserve the old bridge as a footbridge across the Garonne, with possibly a museum to document the history of the bridge and Gustave Eiffel's contribution. The decision was taken to save the bridge, but by early 2010 no plans had been announced as to its future use. The bridge remains intact, but unused and without any means of access.",
"title": "Transport"
},
{
"paragraph_id": 87,
"text": "Since July 2017, the LGV Sud Europe Atlantique is fully operational and makes Bordeaux city 2h04 from Paris.",
"title": "Transport"
},
{
"paragraph_id": 88,
"text": "Bordeaux is served by Bordeaux–Mérignac Airport, located 8 km (5.0 mi) from the city centre in the suburban city of Mérignac.",
"title": "Transport"
},
{
"paragraph_id": 89,
"text": "Bordeaux has an important public transport system called Transports Bordeaux Métropole (TBM). This company is run by the Keolis group. The network consists of:",
"title": "Transport"
},
{
"paragraph_id": 90,
"text": "This network is operated from 5 am to 2 am.",
"title": "Transport"
},
{
"paragraph_id": 91,
"text": "There had been several plans for a subway network to be set up, but they stalled for both geological and financial reasons. Work on the Tramway de Bordeaux system was started in the autumn of 2000, and services started in December 2003 connecting Bordeaux with its suburban areas. The tram system uses Alstom APS a form of ground-level power supply technology developed by French company Alstom and designed to preserve the aesthetic environment by eliminating overhead cables in the historic city. Conventional overhead cables are used outside the city. The system was controversial for its considerable cost of installation, maintenance and also for the numerous initial technical problems that paralysed the network. Many streets and squares along the tramway route became pedestrian areas, with limited access for cars.",
"title": "Transport"
},
{
"paragraph_id": 92,
"text": "The planned Bordeaux tramway system is to link with the airport to the city centre towards the end of 2019.",
"title": "Transport"
},
{
"paragraph_id": 93,
"text": "There are more than 400 taxicabs in Bordeaux.",
"title": "Transport"
},
{
"paragraph_id": 94,
"text": "The average amount of time people spend commuting with public transit in Bordeaux, for example to and from work, on a weekday is 51 min. 12.% of public transit riders, ride for more than 2 hours every day. The average amount of time people wait at a stop or station for public transit is 13 min, while 15.5% of riders wait for over 20 minutes on average every day. The average distance people usually ride in a single trip with public transit is 7 km (4.3 mi), while 8% travel for over 12 km (7.5 mi) in a single direction.",
"title": "Transport"
},
{
"paragraph_id": 95,
"text": "The 41,458-capacity Nouveau Stade de Bordeaux is the largest stadium in Bordeaux. The stadium was opened in 2015 and replaced the Stade Chaban-Delmas, which was a venue for the FIFA World Cup in 1938 and 1998, as well as the 2007 Rugby World Cup. In the 1938 FIFA World Cup, it hosted a violent quarter-final known as the Battle of Bordeaux. The ground was formerly known as the Stade du Parc Lescure until 2001, when it was renamed in honour of the city's long-time mayor, Jacques Chaban-Delmas.",
"title": "Sport"
},
{
"paragraph_id": 96,
"text": "There are two major sport teams in Bordeaux, Girondins de Bordeaux is the football team, playing in Ligue 2, the second tier of French football. Union Bordeaux Bègles is a rugby team in the Top 14 in the Ligue Nationale de Rugby. Skateboarding, rollerblading, and BMX biking are activities enjoyed by many young inhabitants of the city. Bordeaux is home to a quay which runs along the Garonne river. On the quay there is a skate-park divided into three sections. One section is for Vert tricks, one for street style tricks, and one for little action sports athletes with easier features and softer materials. The skate-park is very well maintained by the municipality.",
"title": "Sport"
},
{
"paragraph_id": 97,
"text": "Bordeaux is also the home to one of the strongest cricket teams in France and are champions of the South West League.",
"title": "Sport"
},
{
"paragraph_id": 98,
"text": "There is a 250 m (820 ft) wooden velodrome, Vélodrome du Lac, in Bordeaux which hosts international cycling competition in the form of UCI Track Cycling World Cup events.",
"title": "Sport"
},
{
"paragraph_id": 99,
"text": "The 2015 Trophee Eric Bompard was in Bordeaux. But the Free Skate was cancelled in all of the divisions due to the Paris bombing(s) and aftermath. The Short Program occurred hours before the bombing. French skaters Chafik Besseghier (68.36) in tenth place, Romain Ponsart (62.86) in 11th. Mae-Berenice-Meite (46.82) in 11th and Laurine Lecavelier (46.53) in 12th. Vanessa James/Morgan Cipres (65.75) in second.",
"title": "Sport"
},
{
"paragraph_id": 100,
"text": "Between 1951 and 1955, an annual Formula 1 motor race was held on a 2.5-kilometre circuit which looped around the Esplanade des Quinconces and along the waterfront, attracting drivers such as Juan Manuel Fangio, Stirling Moss, Jean Behra and Maurice Trintignant.",
"title": "Sport"
},
{
"paragraph_id": 101,
"text": "Bordeaux is twinned with:",
"title": "International relationships"
}
] | Bordeaux is a city on the river Garonne in the Gironde department, southwestern France. A port city, it is the capital of the Nouvelle-Aquitaine region, as well as the prefecture of the Gironde department. Its inhabitants are called "Bordelais" (masculine) or "Bordelaises" (feminine). The term "Bordelais" may also refer to the city and its surrounding region. The city of Bordeaux proper had a population of 259,809 in 2020 within its small municipal territory of 49 km2 (19 sq mi), but together with its suburbs and exurbs the Bordeaux metropolitan area had a population of 1,376,375 that same year, the sixth-most populated in France after Paris, Lyon, Marseille, Lille, and Toulouse. Bordeaux and 27 suburban municipalities form the Bordeaux Metropolis, an indirectly elected metropolitan authority now in charge of wider metropolitan issues. The Bordeaux Metropolis, with a population of 819,604 at the January 2020 census, is the fifth most populated metropolitan council in France after those of Paris, Marseille, Lyon and Lille. Bordeaux is a world capital of wine: many châteaux and vineyards stand on the hillsides of the Gironde, and the city is home to the world's main wine fair, Vinexpo. Bordeaux is also one of the centers of gastronomy and business tourism for the organization of international congresses. It is a central and strategic hub for the aeronautics, military and space sector, home to international companies such as Dassault Aviation, Ariane Group, Safran and Thalès. The link with aviation dates back to 1910, the year the first airplane flew over the city. A crossroads of knowledge through university research, it is home to one of the only two megajoule lasers in the world, as well as a university population of more than 130,000 students within the Bordeaux Metropolis. Bordeaux is an international tourist destination for its architectural and cultural heritage with more than 350 historic monuments, making it, after Paris, the city with the most listed or registered monuments in France. The "Pearl of Aquitaine" was voted European Destination of the Year in a 2015 online poll. The metropolis has also received awards and rankings from international organizations; in 1957, for example, Bordeaux was awarded the Europe Prize for its efforts in transmitting the European ideal. In June 2007, the Port of the Moon in historic Bordeaux was inscribed on the UNESCO World Heritage List, for its outstanding architecture and urban ensemble and in recognition of Bordeaux's international importance over the last 2000 years. Bordeaux is also ranked as a Sufficiency city by the Globalization and World Cities Research Network. | 2001-08-31T01:35:19Z | 2023-11-27T23:14:36Z | [
"Template:Div col end",
"Template:Cite book",
"Template:Use dmy dates",
"Template:Efn",
"Template:Respell",
"Template:Cn",
"Template:Reflist",
"Template:Cite news",
"Template:Cities in France",
"Template:Gironde communes",
"Template:Short description",
"Template:IPAc-en",
"Template:For timeline",
"Template:Main",
"Template:Notelist",
"Template:Wikivoyage",
"Template:Prefectures of regions of France",
"Template:Infobox French commune",
"Template:IPA-fr",
"Template:Nihongo",
"Template:Div col",
"Template:See also",
"Template:Cite web",
"Template:Official website",
"Template:-\"",
"Template:Weather box",
"Template:Commons",
"Template:Urban Community of Bordeaux",
"Template:World Heritage Sites in France",
"Template:Prefectures of departments of France",
"Template:Redirect",
"Template:Unreferenced section",
"Template:Lang",
"Template:Webarchive",
"Template:Higher Education in Bordeaux",
"Template:About",
"Template:Convert",
"Template:Lang-eu",
"Template:Historical populations",
"Template:Flagicon",
"Template:In lang",
"Template:Lang-oc",
"Template:IPA-oc",
"Template:Flag",
"Template:Illm",
"Template:Not a typo",
"Template:Dead link",
"Template:Cbignore",
"Template:Authority control",
"Template:Quote box",
"Template:Citation needed"
] | https://en.wikipedia.org/wiki/Bordeaux |
4,098 | Puzzle Bobble | Puzzle Bobble, internationally known as Bust-A-Move, is a 1994 tile-matching puzzle arcade game developed and published by Taito. It is based on the 1986 arcade game Bubble Bobble, featuring characters and themes from that game. Its characteristically cute Japanese animation and music, along with its play mechanics and level designs, made it successful as an arcade title and spawned several sequels and ports to home gaming systems.
At the start of each round, the rectangular playing arena contains a prearranged pattern of colored "bubbles". At the bottom of the screen, the player controls a device called a "pointer", which aims and fires bubbles up the screen. The color of bubbles fired is randomly generated and chosen from the colors of bubbles still left on the screen.
The objective of the game is to clear all the bubbles from the arena without any bubble crossing the bottom line. Bubbles will fire automatically if the player remains idle. After clearing the arena, the next round begins with a new pattern of bubbles to clear. The game consists of 32 levels. The fired bubbles travel in straight lines (possibly bouncing off the sidewalls of the arena), stopping when they touch other bubbles or reach the top of the arena. If a bubble touches identically-colored bubbles, forming a group of three or more, those bubbles—as well as any bubbles hanging from them—are removed from the field of play, and points are awarded. After every few shots, the "ceiling" of the playing arena drops downwards slightly, along with all the bubbles stuck to it. The number of shots between each drop of the ceiling is influenced by the number of bubble colors remaining. The closer the bubbles get to the bottom of the screen, the faster the music plays, and if they cross the line at the bottom, the game is over.
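The removal rules above amount to two connected-component searches over a hexagonal grid: one flood fill to find the landed bubble's same-colored group, and one reachability pass from the ceiling to detect bubbles left hanging. The following is a minimal illustrative sketch in Python, assuming an offset hex grid stored as a dictionary from (row, column) to a color string; every name in it (neighbors, same_color_cluster, resolve_shot) is invented for illustration and is not taken from Taito's actual implementation.

```python
from collections import deque

# Hedged sketch of the popping/dropping rules described above, not Taito's
# code. Assumes an offset hex grid: grid maps (row, col) -> color string,
# with row 0 being the ceiling; odd rows sit half a cell to the right.

def neighbors(row, col):
    """Return the six hex-grid neighbours of a cell."""
    shift = 0 if row % 2 == 0 else 1
    return [
        (row, col - 1), (row, col + 1),                      # same row
        (row - 1, col - 1 + shift), (row - 1, col + shift),  # row above
        (row + 1, col - 1 + shift), (row + 1, col + shift),  # row below
    ]

def same_color_cluster(grid, start):
    """Flood-fill the group of identically colored bubbles containing start."""
    color, seen, queue = grid[start], {start}, deque([start])
    while queue:
        for nb in neighbors(*queue.popleft()):
            if nb in grid and nb not in seen and grid[nb] == color:
                seen.add(nb)
                queue.append(nb)
    return seen

def resolve_shot(grid, landed):
    """Pop the landed bubble's group if it has 3+ members, then drop orphans."""
    cluster = same_color_cluster(grid, landed)
    if len(cluster) < 3:
        return grid                     # group too small: nothing pops
    for cell in cluster:
        del grid[cell]
    # Any bubble no longer connected to the ceiling (row 0) falls as well.
    anchored = {c for c in grid if c[0] == 0}
    queue = deque(anchored)
    while queue:
        for nb in neighbors(*queue.popleft()):
            if nb in grid and nb not in anchored:
                anchored.add(nb)
                queue.append(nb)
    return {cell: color for cell, color in grid.items() if cell in anchored}
```

A caller would insert the newly fired bubble into the grid at the cell where it sticks and then call resolve_shot; the second reachability pass is what makes bubbles "hanging" from a popped group fall, as described above.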
Two different versions of the original game were released. Puzzle Bobble was originally released in Japan only in June 1994 by Taito, running on Taito B System hardware (with the preliminary title "Bubble Buster"). Then, 6 months later in December, the international Neo Geo version of Puzzle Bobble was released. It was almost identical aside from being in stereo and having some different sound effects and translated text.
In Japan, Game Machine listed the Neo Geo version of Puzzle Bobble on their February 15, 1995 issue as being the second most-popular arcade game at the time. It went on to become Japan's second highest-grossing arcade printed circuit board (PCB) software of 1995, below Virtua Fighter 2. In North America, RePlay reported the Neo Geo version of Puzzle Bobble to be the fourth most-popular arcade game in February 1995.
Reviewing the Super NES version, Mike Weigand of Electronic Gaming Monthly called it "a thoroughly enjoyable and incredibly addicting puzzle game". He considered the two player mode the highlight, but also said that the one player mode provides a solid challenge. GamePro gave it a generally negative review, saying it starts out fun but ultimately lacks intricacy and longevity. They elaborated that in one player mode all the levels feel the same, and that two player matches are over too quickly to build up any excitement. They also criticized the lack of any 3D effects in the graphics. Next Generation reviewed the SNES version of the game and called it "addictive as hell".
A reviewer for Next Generation, while questioning the continued viability of the action puzzle genre, admitted that the game is "very simple and very addictive". He remarked that though the 3DO version makes no significant additions, none are called for by a game with such simple enjoyment. GamePro's brief review of the 3DO version commented that the game's controls are responsive, and they also praised the visuals and music. Edge magazine ranked the game 73rd on their 100 Best Video Games in 2007. IGN rated the SNES version 54th in its Top 100 SNES Games.
The simplicity of the concept has led to many clones, both commercial and otherwise. 1996's Snood replaced the bubbles with small creatures and has been successful in its own right. Worms Blast was Team 17's take on the concept. On September 24, 2000, British game publisher Empire Interactive released a similar game, Spin Jam, for the original PlayStation console. Mobile clones include Bubble Witch Saga and Bubble Shooter. Frozen Bubble is a free software clone. For Bubble Bobble's 35th anniversary, Taito launched Puzzle Bobble VR: Vacation Odyssey on the Oculus Quest and Oculus Quest 2, later coming to PlayStation 4 and PlayStation 5 as Puzzle Bobble 3D: Vacation Odyssey in 2021.
Puzzle Bobble Everybubble! was released on May 23, 2023 for Nintendo Switch. The game also comes with an extra mode called "Puzzle Bobble vs. Space Invaders", where up to four players can work together to erase bubble-encased invaders before they reach the player while only being able to aim straight up. | [
{
"paragraph_id": 0,
"text": "Puzzle Bobble, internationally known as Bust-A-Move, is a 1994 tile-matching puzzle arcade game developed and published by Taito. It is based on the 1986 arcade game Bubble Bobble, featuring characters and themes from that game. Its characteristically cute Japanese animation and music, along with its play mechanics and level designs, made it successful as an arcade title and spawned several sequels and ports to home gaming systems.",
"title": ""
},
{
"paragraph_id": 1,
"text": "At the start of each round, the rectangular playing arena contains a prearranged pattern of colored \"bubbles\". At the bottom of the screen, the player controls a device called a \"pointer\", which aims and fires bubbles up the screen. The color of bubbles fired is randomly generated and chosen from the colors of bubbles still left on the screen.",
"title": "Gameplay"
},
{
"paragraph_id": 2,
"text": "The objective of the game is to clear all the bubbles from the arena without any bubble crossing the bottom line. Bubbles will fire automatically if the player remains idle. After clearing the arena, the next round begins with a new pattern of bubbles to clear. The game consists of 32 levels. The fired bubbles travel in straight lines (possibly bouncing off the sidewalls of the arena), stopping when they touch other bubbles or reach the top of the arena. If a bubble touches identically-colored bubbles, forming a group of three or more, those bubbles—as well as any bubbles hanging from them—are removed from the field of play, and points are awarded. After every few shots, the \"ceiling\" of the playing arena drops downwards slightly, along with all the bubbles stuck to it. The number of shots between each drop of the ceiling is influenced by the number of bubble colors remaining. The closer the bubbles get to the bottom of the screen, the faster the music plays and if they cross the line at the bottom then the game is over.",
"title": "Gameplay"
},
{
"paragraph_id": 3,
"text": "Two different versions of the original game were released. Puzzle Bobble was originally released in Japan only in June 1994 by Taito, running on Taito B System hardware (with the preliminary title \"Bubble Buster\"). Then, 6 months later in December, the international Neo Geo version of Puzzle Bobble was released. It was almost identical aside from being in stereo and having some different sound effects and translated text.",
"title": "Release"
},
{
"paragraph_id": 4,
"text": "In Japan, Game Machine listed the Neo Geo version of Puzzle Bobble on their February 15, 1995 issue as being the second most-popular arcade game at the time. It went on to become Japan's second highest-grossing arcade printed circuit board (PCB) software of 1995, below Virtua Fighter 2. In North America, RePlay reported the Neo Geo version of Puzzle Bobble to be the fourth most-popular arcade game in February 1995.",
"title": "Reception"
},
{
"paragraph_id": 5,
"text": "Reviewing the Super NES version, Mike Weigand of Electronic Gaming Monthly called it \"a thoroughly enjoyable and incredibly addicting puzzle game\". He considered the two player mode the highlight, but also said that the one player mode provides a solid challenge. GamePro gave it a generally negative review, saying it starts out fun but that ultimately lacks intricacy and longevity. They elaborated that in one player mode all the levels feel the same, and that two player matches are over too quickly to build up any excitement. They also criticized the lack of any 3D effects in the graphics. Next Generation reviewed the SNES version of the game and called it \"addictive as hell\".",
"title": "Reception"
},
{
"paragraph_id": 6,
"text": "A reviewer for Next Generation, while questioning the continued viability of the action puzzle genre, admitted that the game is \"very simple and very addictive\". He remarked that though the 3DO version makes no significant additions, none are called for by a game with such simple enjoyment. GamePro's brief review of the 3DO version commented that the game's controls are responsive, and they also praised visuals and music. Edge magazine ranked the game 73rd on their 100 Best Video Games in 2007. IGN rated the SNES version 54th in its Top 100 SNES Games.",
"title": "Reception"
},
{
"paragraph_id": 7,
"text": "The simplicity of the concept has led to many clones, both commercial and otherwise. 1996's Snood replaced the bubbles with small creatures and has been successful in its own right. Worms Blast was Team 17's take on the concept. On September 24, 2000, British game publisher Empire Interactive released a similar game, Spin Jam, for the original PlayStation console. Mobile clones include Bubble Witch Saga and Bubble Shooter. Frozen Bubble is a free software clone. For Bubble Bobble's 35th anniversary, Taito launched Puzzle Bobble VR: Vacation Odyssey on the Oculus Quest and Oculus Quest 2, later coming to PlayStation 4 and PlayStation 5 as Puzzle Bobble 3D: Vacation Odyssey in 2021.",
"title": "Legacy"
},
{
"paragraph_id": 8,
"text": "Puzzle Bobble Everybubble! was released on May 23, 2023 for Nintendo Switch. The game also comes with an extra mode called \"Puzzle Bobble vs. Space Invaders\", where up to four players can work together to erase bubble-encased invaders before they reach the player while only being able to aim straight up.",
"title": "Legacy"
}
] | Puzzle Bobble, internationally known as Bust-A-Move, is a 1994 tile-matching puzzle arcade game developed and published by Taito. It is based on the 1986 arcade game Bubble Bobble, featuring characters and themes from that game. Its characteristically cute Japanese animation and music, along with its play mechanics and level designs, made it successful as an arcade title and spawned several sequels and ports to home gaming systems. | 2001-09-06T17:48:03Z | 2023-12-30T08:43:52Z | [
"Template:Cite magazine",
"Template:Cite book",
"Template:Bubble Bobble series",
"Template:Cite web",
"Template:KLOV game",
"Template:Redirect",
"Template:Infobox video game",
"Template:Nihongo foot",
"Template:Video game reviews",
"Template:Reflist",
"Template:Citation",
"Template:'",
"Template:Notelist",
"Template:Cite news",
"Template:Mobygames",
"Template:Portal bar"
] | https://en.wikipedia.org/wiki/Puzzle_Bobble |
4,099 | Bone | A bone is a rigid organ that constitutes part of the skeleton in most vertebrate animals. Bones protect the various other organs of the body, produce red and white blood cells, store minerals, provide structure and support for the body, and enable mobility. Bones come in a variety of shapes and sizes and have complex internal and external structures. They are lightweight yet strong and hard and serve multiple functions.
Bone tissue (osseous tissue), which is also called bone in the uncountable sense of that word, is hard tissue, a type of specialised connective tissue. It has a honeycomb-like matrix internally, which helps to give the bone rigidity. Bone tissue is made up of different types of bone cells. Osteoblasts and osteocytes are involved in the formation and mineralisation of bone; osteoclasts are involved in the resorption of bone tissue. Modified (flattened) osteoblasts become the lining cells that form a protective layer on the bone surface. The mineralised matrix of bone tissue has an organic component of mainly collagen called ossein and an inorganic component of bone mineral made up of various salts. Bone tissue is mineralized tissue of two types: cortical bone and cancellous bone. Other types of tissue found in bones include bone marrow, endosteum, periosteum, nerves, blood vessels and cartilage.
In the human body at birth, there are approximately 300 bones present; many of these fuse together during development, leaving a total of 206 separate bones in the adult, not counting numerous small sesamoid bones. The largest bone in the body is the femur or thigh-bone, and the smallest is the stapes in the middle ear.
The Greek word for bone is ὀστέον ("osteon"), hence the many terms that use it as a prefix—such as osteopathy. In anatomical terminology, including the Terminologia Anatomica international standard, the word for a bone is os (for example, os breve, os longum, os sesamoideum).
Bone is not uniformly solid, but consists of a flexible matrix (about 30%) and bound minerals (about 70%) which are intricately woven and endlessly remodeled by a group of specialized bone cells. Their unique composition and design allows bones to be relatively hard and strong, while remaining lightweight.
Bone matrix is composed of 90 to 95% elastic collagen fibers, also known as ossein; the remainder is ground substance. The elasticity of collagen improves fracture resistance. The matrix is hardened by the binding of inorganic mineral salt, calcium phosphate, in a chemical arrangement known as bone mineral, a form of calcium apatite. It is the mineralization that gives bones rigidity.
Bone is actively constructed and remodeled throughout life by special bone cells known as osteoblasts and osteoclasts. Within any single bone, the tissue is woven into two main patterns, known as cortical and cancellous bone, each with a different appearance and characteristics.
The hard outer layer of bones is composed of cortical bone, which is also called compact bone as it is much denser than cancellous bone. It forms the hard exterior (cortex) of bones. The cortical bone gives bone its smooth, white, and solid appearance, and accounts for 80% of the total bone mass of an adult human skeleton. It facilitates bone's main functions—to support the whole body, to protect organs, to provide levers for movement, and to store and release chemical elements, mainly calcium. It consists of multiple microscopic columns, each called an osteon or Haversian system. Each column is multiple layers of osteoblasts and osteocytes around a central canal called the haversian canal. Volkmann's canals at right angles connect the osteons together. The columns are metabolically active, and as bone is reabsorbed and created the nature and location of the cells within the osteon will change. Cortical bone is covered by a periosteum on its outer surface, and an endosteum on its inner surface. The endosteum is the boundary between the cortical bone and the cancellous bone. The primary anatomical and functional unit of cortical bone is the osteon.
Cancellous bone, or spongy bone, also known as trabecular bone, is the internal tissue of the skeletal bone and is an open cell porous network that follows the material properties of biofoams. Cancellous bone has a higher surface-area-to-volume ratio than cortical bone and it is less dense. This makes it weaker and more flexible. The greater surface area also makes it suitable for metabolic activities such as the exchange of calcium ions. Cancellous bone is typically found at the ends of long bones, near joints, and in the interior of vertebrae. Cancellous bone is highly vascular and often contains red bone marrow where hematopoiesis, the production of blood cells, occurs. The primary anatomical and functional unit of cancellous bone is the trabecula. The trabeculae are aligned towards the mechanical load distribution that a bone experiences within long bones such as the femur. As far as short bones are concerned, trabecular alignment has been studied in the vertebral pedicle. Thin formations of osteoblasts covered in endosteum create an irregular network of spaces, known as trabeculae. Within these spaces are bone marrow and hematopoietic stem cells that give rise to platelets, red blood cells and white blood cells. Trabecular marrow is composed of a network of rod- and plate-like elements that make the overall organ lighter and allow room for blood vessels and marrow. Trabecular bone accounts for the remaining 20% of total bone mass but has nearly ten times the surface area of compact bone.
The words cancellous and trabecular refer to the tiny lattice-shaped units (trabeculae) that form the tissue. Cancellous bone was first accurately illustrated in the engravings of Crisóstomo Martinez.
Bone marrow, whose red form is also known as myeloid tissue, can be found in almost any bone that holds cancellous tissue. In newborns, all such bones are filled exclusively with red (hematopoietic) marrow, but as the child ages the hematopoietic fraction decreases in quantity and the fatty/yellow fraction, called marrow adipose tissue (MAT), increases in quantity. In adults, red marrow is mostly found in the marrow of the femur, the ribs, the vertebrae and the pelvic bones.
Bone receives about 10% of cardiac output. Blood enters the endosteum, flows through the marrow, and exits through small vessels in the cortex. In humans, blood oxygen tension in bone marrow is about 6.6%, compared to about 12% in arterial blood, and 5% in venous and capillary blood.
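For scale, a back-of-envelope figure (the resting cardiac output assumed here is a typical textbook value, not from the source): at a resting cardiac output of about 5 L/min,

\[ 0.10 \times 5\ \mathrm{L/min} = 0.5\ \mathrm{L/min}, \]

so roughly half a litre of blood passes through the skeleton each minute.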
Bone is a metabolically active tissue composed of several types of cells. These include osteoblasts, which are involved in the creation and mineralization of bone tissue; osteocytes, which maintain the tissue; and osteoclasts, which are involved in the reabsorption of bone tissue. Osteoblasts and osteocytes are derived from osteoprogenitor cells, whereas osteoclasts derive from the same cells that differentiate to form macrophages and monocytes. Within the marrow of the bone there are also hematopoietic stem cells, which give rise to other cells, including white blood cells, red blood cells, and platelets.
Osteoblasts are mononucleate bone-forming cells. They are located on the surface of osteoid seams and make a protein mixture known as osteoid, which mineralizes to become bone. The osteoid seam is a narrow region of newly formed organic matrix, not yet mineralized, located on the surface of a bone. Osteoid is primarily composed of Type I collagen. Osteoblasts also manufacture hormones, such as prostaglandins, that act on the bone itself. The osteoblast creates and repairs new bone by building around itself: first it lays down collagen fibers as a framework for its work, then it deposits calcium phosphate, which is hardened by hydroxide and bicarbonate ions. The brand-new bone created by the osteoblast is called osteoid. An osteoblast that finishes working becomes trapped inside the bone as it hardens, and is then known as an osteocyte. Other osteoblasts remain on the surface of the new bone and protect the underlying bone; these become known as bone lining cells.
Osteocytes are cells of mesenchymal origin that originate from osteoblasts which have migrated into, and become trapped and surrounded by, the bone matrix they themselves produced. The spaces that the osteocyte cell bodies occupy within the mineralized type I collagen matrix are known as lacunae, while the osteocyte cell processes occupy channels called canaliculi. The many processes of osteocytes reach out to meet osteoblasts, osteoclasts, bone lining cells, and other osteocytes, probably for the purposes of communication. Osteocytes remain in contact with other osteocytes in the bone through gap junctions: coupled cell processes that pass through the canalicular channels.
Osteoclasts are very large multinucleate cells responsible for the breakdown of bone by the process of bone resorption; new bone is then formed by osteoblasts. Bone is thus constantly remodelled, resorbed by osteoclasts and created by osteoblasts. Osteoclasts sit on bone surfaces in what are called Howship's lacunae (or resorption pits), depressions left where the surrounding bone tissue has been reabsorbed. Because osteoclasts are derived from a monocyte stem-cell lineage, they are equipped with phagocytic-like mechanisms similar to those of circulating macrophages. Osteoclasts mature and/or migrate to discrete bone surfaces; upon arrival they secrete active enzymes, such as tartrate-resistant acid phosphatase, against the mineral substrate. The reabsorption of bone by osteoclasts also plays a role in calcium homeostasis.
Bones consist of living cells (osteoblasts and osteocytes) embedded in a mineralized organic matrix. The primary inorganic component of human bone is hydroxyapatite, the dominant bone mineral, with the nominal composition Ca10(PO4)6(OH)2. The organic component of this matrix consists mainly of type I collagen ("organic" here referring to materials produced by living cells), while the inorganic component, alongside the dominant hydroxyapatite phase, includes other compounds of calcium and phosphate, including salts. Approximately 30% of the acellular component of bone consists of organic matter, while roughly 70% by mass is attributed to the inorganic phase. The collagen fibers give bone its tensile strength, and the interspersed crystals of hydroxyapatite give bone its compressive strength; these effects are synergistic. The exact composition of the matrix may change over time with nutrition and biomineralization, with the ratio of calcium to phosphate varying between 1.3 and 2.0 (by weight), and trace minerals such as magnesium, sodium, potassium and carbonate may also be found.
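As a point of comparison (an illustrative calculation using standard atomic masses, Ca ≈ 40.08 and P ≈ 30.97, not a figure from the source), the stoichiometric calcium-to-phosphorus ratios of pure hydroxyapatite are

\[ \mathrm{Ca/P\ (molar)} = \frac{10}{6} \approx 1.67, \qquad \mathrm{Ca/P\ (by\ mass)} = \frac{10 \times 40.08}{6 \times 30.97} \approx 2.16. \]

That the measured weight ratio in bone (1.3 to 2.0) lies below the pure-mineral value is consistent with bone mineral being calcium-deficient and carbonate-substituted rather than stoichiometric hydroxyapatite.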
Type I collagen composes 90–95% of the organic matrix, with the remainder being a homogeneous liquid called ground substance, consisting of proteoglycans such as hyaluronic acid and chondroitin sulfate, as well as non-collagenous proteins such as osteocalcin, osteopontin or bone sialoprotein. Collagen consists of strands of repeating units, which give bone tensile strength and are arranged in an overlapping fashion that resists shear stress. The function of ground substance is not fully known. Two types of bone can be identified microscopically according to the arrangement of collagen: woven and lamellar.
Woven bone is produced when osteoblasts produce osteoid rapidly, which occurs initially in all fetal bones, but is later replaced by more resilient lamellar bone. In adults, woven bone is created after fractures or in Paget's disease. Woven bone is weaker, with a smaller number of randomly oriented collagen fibers, but forms quickly; it is for this appearance of the fibrous matrix that the bone is termed woven. It is soon replaced by lamellar bone, which is highly organized in concentric sheets with a much lower proportion of osteocytes to surrounding tissue. Lamellar bone, which makes its first appearance in humans in the fetus during the third trimester, is stronger and filled with many collagen fibers parallel to other fibers in the same layer (these parallel columns are called osteons). In cross-section, the fibers run in opposite directions in alternating layers, much like in plywood, assisting in the bone's ability to resist torsion forces. After a fracture, woven bone forms initially and is gradually replaced by lamellar bone during a process known as "bony substitution." Compared to woven bone, lamellar bone formation takes place more slowly. The orderly deposition of collagen fibers restricts the formation of osteoid to about 1 to 2 µm per day. Lamellar bone also requires a relatively flat surface to lay the collagen fibers in parallel or concentric layers.
The extracellular matrix of bone is laid down by osteoblasts, which secrete both collagen and ground substance. These cells synthesise collagen alpha polypeptide chains and then secrete collagen molecules. The collagen molecules associate with their neighbors and crosslink via lysyl oxidase to form collagen fibrils. At this stage they are not yet mineralized, and this zone of unmineralized collagen fibrils is called "osteoid". Calcium and phosphate eventually precipitate around and inside the collagen fibrils over days to weeks, producing fully mineralized bone whose inorganic phase is, overall, a carbonate-substituted hydroxyapatite.
In order to mineralise the bone, the osteoblasts secrete alkaline phosphatase, some of which is carried by vesicles. The enzyme cleaves the inhibitory pyrophosphate and simultaneously generates free phosphate ions for mineralization, with the vesicles acting as foci for calcium and phosphate deposition. Vesicles may initiate some of the early mineralization events by rupturing and acting as a centre for crystals to grow on. Bone mineral may be formed from globular and plate structures, and via initially amorphous phases.
There are five types of bones in the human body: long, short, flat, irregular, and sesamoid.
In the study of anatomy, anatomists use a number of anatomical terms to describe the appearance, shape and function of bones. Other anatomical terms are also used to describe the location of bones. Like other anatomical terms, many of these derive from Latin and Greek. Some anatomists still use Latin to refer to bones. The term "osseous", and the prefix "osteo-", referring to things related to bone, are still used commonly today.
Some examples of terms used to describe bones include the term "foramen" to describe a hole through which something passes, and a "canal" or "meatus" to describe a tunnel-like structure. A protrusion from a bone can be called a number of terms, including a "condyle", "crest", "spine", "eminence", "tubercle" or "tuberosity", depending on the protrusion's shape and location. In general, long bones are said to have a "head", "neck", and "body".
When two bones join, they are said to "articulate". If the two bones have a fibrous connection and are relatively immobile, then the joint is called a "suture".
The formation of bone is called ossification. During the fetal stage of development this occurs by two processes: intramembranous ossification and endochondral ossification. Intramembranous ossification involves the formation of bone from connective tissue whereas endochondral ossification involves the formation of bone from cartilage.
Intramembranous ossification mainly occurs during formation of the flat bones of the skull but also the mandible, maxilla, and clavicles; the bone is formed from connective tissue such as mesenchyme tissue rather than from cartilage. The process includes: the development of the ossification center, calcification, trabeculae formation and the development of the periosteum.
Endochondral ossification occurs in long bones and most other bones in the body; it involves the development of bone from cartilage. This process includes the development of a cartilage model, its growth and development, development of the primary and secondary ossification centers, and the formation of articular cartilage and the epiphyseal plates.
Endochondral ossification begins with points in the cartilage called "primary ossification centers." These mostly appear during fetal development, though a few short bones begin their primary ossification after birth. They are responsible for the formation of the diaphyses of long bones, short bones and certain parts of irregular bones. Secondary ossification occurs after birth and forms the epiphyses of long bones and the extremities of irregular and flat bones. The diaphysis and both epiphyses of a long bone are separated by a growing zone of cartilage (the epiphyseal plate). At skeletal maturity (18 to 25 years of age), all of the cartilage is replaced by bone, fusing the diaphysis and both epiphyses together (epiphyseal closure). At birth, in the upper limbs only the diaphyses of the long bones and the scapula are ossified; the epiphyses, carpal bones, coracoid process, medial border of the scapula, and acromion are still cartilaginous.
The conversion of cartilage to bone follows the sequence outlined above: formation of the cartilage model, its growth, development of the primary and then the secondary ossification centers, and formation of the articular cartilage and epiphyseal plates.
Bones have a variety of functions: mechanical, synthetic, and metabolic.
Bones serve a variety of mechanical functions. Together the bones in the body form the skeleton. They provide a frame to keep the body supported, and an attachment point for skeletal muscles, tendons, ligaments and joints, which function together to generate and transfer forces so that individual body parts or the whole body can be manipulated in three-dimensional space (the interaction between bone and muscle is studied in biomechanics).
Bones protect internal organs, such as the skull protecting the brain or the ribs protecting the heart and lungs. Because of the way that bone is formed, it has a high compressive strength of about 170 MPa (roughly 1,700 kgf/cm²), poor tensile strength of 104–121 MPa, and a very low shear strength (51.6 MPa). This means that bone resists pushing (compressional) stress well, resists pulling (tensional) stress less well, and resists shear stress (such as that due to torsional loads) only poorly. While bone is essentially brittle, it does have a significant degree of elasticity, contributed chiefly by collagen.
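To give these figures a sense of scale, a back-of-envelope estimate (the cross-sectional area is an illustrative assumption, not a value from the source): taking a cortical cross-section of roughly 4 cm² for an adult long bone,

\[ F = \sigma_c A = 170 \times 10^{6}\ \mathrm{Pa} \times 4 \times 10^{-4}\ \mathrm{m^2} \approx 6.8 \times 10^{4}\ \mathrm{N}, \]

on the order of seven tonnes-force in pure compression. The roughly threefold lower shear strength is one reason twisting loads fracture bone far more readily than axial ones.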
Mechanically, bones also have a special role in hearing. The ossicles are three small bones in the middle ear which are involved in sound transduction.
The cancellous part of bones contains bone marrow. Bone marrow produces blood cells in a process called hematopoiesis. Blood cells created in bone marrow include red blood cells, platelets and white blood cells. Progenitor cells such as the hematopoietic stem cell divide in a process called mitosis to produce precursor cells, including precursors that eventually give rise to white blood cells and erythroblasts that give rise to red blood cells. Unlike red and white blood cells, which are created by mitosis, platelets are shed from very large cells called megakaryocytes. This process of progressive differentiation occurs within the bone marrow; once the cells have matured, they enter the circulation. Every day, over 2.5 billion red blood cells and platelets, and 50–100 billion granulocytes, are produced in this way.
As well as creating cells, bone marrow is also one of the major sites where defective or aged red blood cells are destroyed.
Depending on the species, age, and type of bone, bone cells make up to 15 percent of the bone. The mineralized bone matrix also stores important growth factors, such as insulin-like growth factors, transforming growth factor, bone morphogenetic proteins and others.
Bone is constantly being created and replaced in a process known as remodeling. This ongoing turnover of bone is a process of resorption followed by replacement with little change in shape, accomplished through osteoblasts and osteoclasts. These cells are stimulated by a variety of signals and are together referred to as a remodeling unit. Approximately 10% of the skeletal mass of an adult is remodelled each year. The purpose of remodeling is to regulate calcium homeostasis, repair microdamage from everyday stress, and shape the skeleton during growth. Repeated stress, such as weight-bearing exercise or bone healing, results in the bone thickening at the points of maximum stress (Wolff's law). It has been hypothesized that this is a result of bone's piezoelectric properties, which cause bone to generate small electrical potentials under stress.
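Taken at face value, the quoted rate implies a mean turnover time for the adult skeleton of

\[ t \approx \frac{1}{0.10\ \mathrm{yr^{-1}}} = 10\ \mathrm{years}, \]

a crude estimate that assumes a constant, uniform remodeling rate; in reality cancellous bone, with its much larger surface area, turns over faster than cortical bone.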
The action of osteoblasts and osteoclasts is controlled by a number of chemical signals that either promote or inhibit the activity of the bone remodeling cells, controlling the rate at which bone is made, destroyed, or changed in shape. The cells also use paracrine signalling to control each other's activity. For example, the rate at which osteoclasts resorb bone is inhibited by calcitonin and osteoprotegerin. Calcitonin is produced by parafollicular cells in the thyroid gland and can bind to receptors on osteoclasts to directly inhibit osteoclast activity. Osteoprotegerin is secreted by osteoblasts and is able to bind RANK-L, inhibiting osteoclast stimulation.
Osteoblasts can also be stimulated to increase bone mass, through increased secretion of osteoid and by inhibiting the ability of osteoclasts to break down osseous tissue. Increased secretion of osteoid is stimulated by the secretion of growth hormone by the pituitary, by thyroid hormone, and by the sex hormones (estrogens and androgens). These hormones also promote increased secretion of osteoprotegerin. Osteoblasts can also be induced to secrete a number of cytokines that promote reabsorption of bone by stimulating osteoclast activity and differentiation from progenitor cells. Vitamin D, parathyroid hormone and stimulation from osteocytes induce osteoblasts to increase secretion of RANK-ligand and interleukin 6, cytokines that then stimulate increased reabsorption of bone by osteoclasts. These same compounds also increase secretion of macrophage colony-stimulating factor by osteoblasts, which promotes the differentiation of progenitor cells into osteoclasts, and decrease secretion of osteoprotegerin.
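The control loop in the two paragraphs above can be condensed into a schematic (this compact arrow notation is ours, not the source's; "->" marks stimulation or production, "blocks" marks inhibition):

RANK-L binding RANK -> osteoclast differentiation and activity -> bone resorption
osteoprotegerin (OPG) binding RANK-L -> osteoclast stimulation blocked
calcitonin -> osteoclast activity blocked directly
vitamin D, parathyroid hormone, osteocyte signals -> more RANK-L, interleukin 6 and M-CSF, less OPG -> more resorption
growth hormone, thyroid hormone, sex hormones -> more osteoid and more OPG -> more bone mass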
Bone volume is determined by the rates of bone formation and bone resorption. Recent research has suggested that certain growth factors may work to locally alter bone formation by increasing osteoblast activity. Numerous bone-derived growth factors have been isolated and classified via bone cultures. These factors include insulin-like growth factors I and II, transforming growth factor-beta, fibroblast growth factor, platelet-derived growth factor, and bone morphogenetic proteins. Evidence suggests that bone cells produce growth factors for extracellular storage in the bone matrix. The release of these growth factors from the bone matrix could cause the proliferation of osteoblast precursors. Essentially, bone growth factors may act as potential determinants of local bone formation. Research has suggested that cancellous bone volume in postmenopausal osteoporosis may be determined by the relationship between the total bone forming surface and the percent of surface resorption.
A number of diseases can affect bone, including arthritis, fractures, infections, osteoporosis and tumors. Conditions relating to bone can be managed by a variety of doctors, including rheumatologists for joints and orthopedic surgeons, who may conduct surgery to fix broken bones. Other doctors, such as rehabilitation specialists, may be involved in recovery; radiologists in interpreting the findings on imaging; and pathologists in investigating the cause of the disease. Family doctors may also play a role in preventing complications of bone disease such as osteoporosis.
When a doctor sees a patient, a history and examination will be taken. Bones are then often imaged; this might include X-ray radiography, ultrasound, CT scan, MRI scan and other imaging such as a bone scan, which may be used to investigate cancer. Other tests, such as a blood test for autoimmune markers, may be taken, or a synovial fluid aspirate may be sampled.
In normal bone, fractures occur when there is significant force applied, or repetitive trauma over a long time. Fractures can also occur when a bone is weakened, such as with osteoporosis, or when there is a structural problem, such as when the bone remodels excessively (as in Paget's disease) or is the site of a growing cancer. Common fractures include wrist fractures and hip fractures, associated with osteoporosis; vertebral fractures, associated with high-energy trauma and cancer; and fractures of long bones. Not all fractures are painful. When serious, depending on the fracture's type and location, complications may include flail chest, compartment syndromes or fat embolism. Compound fractures involve the bone's penetration through the skin. Some complex fractures can be treated by the use of bone-grafting procedures that replace missing bone portions.
Fractures and their underlying causes can be investigated by X-rays, CT scans and MRIs. Fractures are described by their location and shape, and several classification systems exist, depending on the location of the fracture. A common long bone fracture in children is a Salter–Harris fracture. When fractures are managed, pain relief is often given, and the fractured area is often immobilised. This is to promote bone healing. In addition, surgical measures such as internal fixation may be used. Because of the immobilisation, people with fractures are often advised to undergo rehabilitation.
There are several types of tumor that can affect bone; examples of benign bone tumors include osteoma, osteoid osteoma, osteochondroma, osteoblastoma, enchondroma, giant-cell tumor of bone, and aneurysmal bone cyst.
Cancer can arise in bone tissue, and bones are also a common site for other cancers to spread (metastasise) to. Cancers that arise in bone are called "primary" cancers, although such cancers are rare. Metastases within bone are "secondary" cancers, the most common being breast cancer, lung cancer, prostate cancer, thyroid cancer, and kidney cancer. Secondary cancers that affect bone can either destroy bone (a "lytic" cancer) or create bone (a "sclerotic" cancer). Cancers of the bone marrow inside the bone can also affect bone tissue, examples including leukemia and multiple myeloma. Bone may also be affected by cancers elsewhere in the body, which may release parathyroid hormone or parathyroid hormone-related peptide; these increase bone reabsorption and can lead to bone fractures.
Bone tissue that is destroyed or altered as a result of cancers is distorted, weakened, and more prone to fracture. This may lead to compression of the spinal cord, destruction of the marrow resulting in bruising, bleeding and immunosuppression, and is one cause of bone pain. If the cancer is metastatic, then there might be other symptoms depending on the site of the original cancer. Some bone cancers can also be felt.
Cancers of the bone are managed according to their type, their stage, prognosis, and what symptoms they cause. Many primary cancers of bone are treated with radiotherapy. Cancers of bone marrow may be treated with chemotherapy, and other forms of targeted therapy such as immunotherapy may be used. Palliative care, which focuses on maximising a person's quality of life, may play a role in management, particularly if the likelihood of surviving five years is poor.
Osteoporosis is a disease of bone in which there is reduced bone mineral density, increasing the likelihood of fractures. Osteoporosis is defined in women by the World Health Organization as a bone mineral density 2.5 standard deviations or more below the mean peak bone mass of a healthy young adult of the same sex. This density is measured using dual-energy X-ray absorptiometry (DEXA), with the term "established osteoporosis" including the presence of a fragility fracture. Osteoporosis is most common in women after menopause, when it is called "postmenopausal osteoporosis", but may develop in men and premenopausal women in the presence of particular hormonal disorders and other chronic diseases, or as a result of smoking and medications, specifically glucocorticoids. Osteoporosis usually has no symptoms until a fracture occurs. For this reason, DEXA scans are often done in people with one or more risk factors, who may have developed osteoporosis and be at risk of fracture.
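The WHO definition above is conventionally written as a T-score (the symbols here are generic):

\[ T = \frac{\mathrm{BMD}_{\mathrm{patient}} - \mathrm{BMD}_{\mathrm{young\ adult\ mean}}}{\mathrm{SD}_{\mathrm{young\ adult}}}, \]

with osteoporosis corresponding to T ≤ −2.5: a patient whose measured density sits exactly 2.5 standard deviations below the young-adult mean is right on the diagnostic threshold.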
One of the most important risk factors for osteoporosis is advanced age. Accumulation of oxidative DNA damage in osteoblastic and osteoclastic cells appears to be a key factor in age-related osteoporosis.
Osteoporosis treatment includes advice to stop smoking, decrease alcohol consumption, exercise regularly, and have a healthy diet. Calcium and trace-mineral supplements may also be advised, as may vitamin D. When medication is used, it may include bisphosphonates, strontium ranelate, and hormone replacement therapy.
Osteopathic medicine is a school of medical thought originally developed based on the idea of the link between the musculoskeletal system and overall health, but now very similar to mainstream medicine. As of 2012, over 77,000 physicians in the United States are trained in osteopathic medical schools.
The study of bones and teeth is referred to as osteology. It is frequently used in anthropology, archeology and forensic science for a variety of tasks. This can include determining the nutritional, health, age or injury status of the individual the bones were taken from. Preparing fleshed bones for these types of studies can involve the process of maceration.
Anthropologists and archeologists typically study bone tools made by Homo sapiens and Homo neanderthalensis. Bone tools can serve a number of uses, such as projectile points or artistic pigments, and can also be made from external bones such as antlers.
Bird skeletons are very lightweight; their bones are smaller and thinner, to aid flight. Among mammals, bats come closest to birds in terms of bone density, suggesting that small, dense bones are a flight adaptation. Many bird bones have little marrow because they are hollow.
A bird's beak is primarily made of bone, as projections of the mandibles that are covered in keratin.
Some bones, primarily those formed separately in subcutaneous tissues, include headgear (such as the bony cores of horns, antlers, and ossicones), osteoderms, and the os penis/os clitoris. A deer's antlers are composed of bone, an unusual example of bone lying outside the skin of the animal once the velvet is shed.
The extinct predatory fish Dunkleosteus had sharp edges of hard exposed bone along its jaws.
The proportion of cortical bone, which is 80% in the human skeleton, may be much lower in other animals, especially in marine mammals and marine turtles, or in various Mesozoic marine reptiles such as ichthyosaurs, among others. This proportion can change quickly in evolution; it often increases in the early stages of a return to an aquatic lifestyle, as seen in early whales and pinnipeds, among others. It subsequently decreases in pelagic taxa, which typically acquire spongy bone, but aquatic taxa that live in shallow water can retain very thick, pachyostotic, osteosclerotic, or pachyosteosclerotic bones, especially if they move slowly, like sea cows. In some cases, even marine taxa that had acquired spongy bone can revert to thicker, compact bones if they become adapted to living in shallow water or in hypersaline (denser) water.
Many animals, particularly herbivores, practice osteophagy, the eating of bones, presumably to replenish phosphate.
Many bone diseases that affect humans also affect other vertebrates—an example of one disorder is skeletal fluorosis.
Bones from slaughtered animals have a number of uses. In prehistoric times, they were used for making bone tools. They have further been used in bone carving, already important in prehistoric art, and in modern times as crafting materials for buttons, beads, handles, bobbins, calculation aids, head nuts, dice, poker chips, pick-up sticks, arrows, scrimshaw, ornaments, etc.
Bone glue can be made by prolonged boiling of ground or cracked bones, followed by filtering and evaporation to thicken the resulting fluid. Historically once important, bone glue and other animal glues today have only a few specialized uses, such as in antiques restoration. Essentially the same process, with further refinement, thickening and drying, is used to make gelatin.
Broth is made by simmering several ingredients for a long time, traditionally including bones.
Bone char, a porous, black, granular material primarily used for filtration and also as a black pigment, is produced by charring mammal bones.
Oracle bone script was a writing system used in Ancient China, based on inscriptions in bone. Its name originates from oracle bones, which were mainly ox scapulae and turtle plastrons. The Ancient Chinese (mainly in the Shang dynasty) would write their questions on the oracle bone, apply heat until the bone cracked, and read the pattern of cracks as the answer to their questions.
Pointing the bone at someone is considered bad luck in some cultures, such as among Australian Aborigines, for example by the Kurdaitcha.
The wishbones of fowl have been used for divination, and are still customarily used in a tradition to determine which one of two people pulling on either prong of the bone may make a wish.
Various cultures throughout history have adopted the custom of shaping an infant's head by the practice of artificial cranial deformation. A widely practised custom in China was that of foot binding to limit the normal growth of the foot.
"title": "Clinical significance"
},
{
"paragraph_id": 53,
"text": "Osteopathic medicine is a school of medical thought originally developed based on the idea of the link between the musculoskeletal system and overall health, but now very similar to mainstream medicine. As of 2012, over 77,000 physicians in the United States are trained in osteopathic medical schools.",
"title": "Clinical significance"
},
{
"paragraph_id": 54,
"text": "The study of bones and teeth is referred to as osteology. It is frequently used in anthropology, archeology and forensic science for a variety of tasks. This can include determining the nutritional, health, age or injury status of the individual the bones were taken from. Preparing fleshed bones for these types of studies can involve the process of maceration.",
"title": "Osteology"
},
{
"paragraph_id": 55,
"text": "Typically anthropologists and archeologists study bone tools made by Homo sapiens and Homo neanderthalensis. Bones can serve a number of uses such as projectile points or artistic pigments, and can also be made from external bones such as antlers.",
"title": "Osteology"
},
{
"paragraph_id": 56,
"text": "Bird skeletons are very lightweight. Their bones are smaller and thinner, to aid flight. Among mammals, bats come closest to birds in terms of bone density, suggesting that small dense bones are a flight adaptation. Many bird bones have little marrow due to them being hollow.",
"title": "Other animals"
},
{
"paragraph_id": 57,
"text": "A bird's beak is primarily made of bone as projections of the mandibles which are covered in keratin.",
"title": "Other animals"
},
{
"paragraph_id": 58,
"text": "Some bones, primarily formed separately in subcutaneous tissues, include headgears (such as bony core of horns, antlers, ossicones), osteoderm, and os penis/ os clitoris. A deer's antlers are composed of bone which is an unusual example of bone being outside the skin of the animal once the velvet is shed.",
"title": "Other animals"
},
{
"paragraph_id": 59,
"text": "The extinct predatory fish Dunkleosteus had sharp edges of hard exposed bone along its jaws.",
"title": "Other animals"
},
{
"paragraph_id": 60,
"text": "The proportion of cortical bone that is 80% in the human skeleton may be much lower in other animals, especially in marine mammals and marine turtles, or in various Mesozoic marine reptiles, such as ichthyosaurs, among others. This proportion can vary quickly in evolution; it often increases in early stages of returns to an aquatic lifestyle, as seen in early whales and pinnipeds, among others. It subsequently decreases in pelagic taxa, which typically acquire spongy bone, but aquatic taxa that live in shallow water can retain very thick, pachyostotic, osteosclerotic, or pachyosteosclerotic bones, especially if they move slowly, like sea cows. In some cases, even marine taxa that had acquired spongy bone can revert to thicker, compact bones if they become adapted to live in shallow water, or in hypersaline (denser) water.",
"title": "Other animals"
},
{
"paragraph_id": 61,
"text": "Many animals, particularly herbivores, practice osteophagy—the eating of bones. This is presumably carried out in order to replenish lacking phosphate.",
"title": "Other animals"
},
{
"paragraph_id": 62,
"text": "Many bone diseases that affect humans also affect other vertebrates—an example of one disorder is skeletal fluorosis.",
"title": "Other animals"
},
{
"paragraph_id": 63,
"text": "Bones from slaughtered animals have a number of uses. In prehistoric times, they have been used for making bone tools. They have further been used in bone carving, already important in prehistoric art, and also in modern time as crafting materials for buttons, beads, handles, bobbins, calculation aids, head nuts, dice, poker chips, pick-up sticks, arrows, scrimshaw, ornaments, etc.",
"title": "Society and culture"
},
{
"paragraph_id": 64,
"text": "Bone glue can be made by prolonged boiling of ground or cracked bones, followed by filtering and evaporation to thicken the resulting fluid. Historically once important, bone glue and other animal glues today have only a few specialized uses, such as in antiques restoration. Essentially the same process, with further refinement, thickening and drying, is used to make gelatin.",
"title": "Society and culture"
},
{
"paragraph_id": 65,
"text": "Broth is made by simmering several ingredients for a long time, traditionally including bones.",
"title": "Society and culture"
},
{
"paragraph_id": 66,
"text": "Bone char, a porous, black, granular material primarily used for filtration and also as a black pigment, is produced by charring mammal bones.",
"title": "Society and culture"
},
{
"paragraph_id": 67,
"text": "Oracle bone script was a writing system used in Ancient China based on inscriptions in bones. Its name originates from oracle bones, which were mainly ox clavicle. The Ancient Chinese (mainly in the Shang dynasty), would write their questions on the oracle bone, and burn the bone, and where the bone cracked would be the answer for the questions.",
"title": "Society and culture"
},
{
"paragraph_id": 68,
"text": "To point the bone at someone is considered bad luck in some cultures, such as Australian aborigines, such as by the Kurdaitcha.",
"title": "Society and culture"
},
{
"paragraph_id": 69,
"text": "The wishbones of fowl have been used for divination, and are still customarily used in a tradition to determine which one of two people pulling on either prong of the bone may make a wish.",
"title": "Society and culture"
},
{
"paragraph_id": 70,
"text": "Various cultures throughout history have adopted the custom of shaping an infant's head by the practice of artificial cranial deformation. A widely practised custom in China was that of foot binding to limit the normal growth of the foot.",
"title": "Society and culture"
}
] | A bone is a rigid organ that constitutes part of the skeleton in most vertebrate animals. Bones protect the various other organs of the body, produce red and white blood cells, store minerals, provide structure and support for the body, and enable mobility. Bones come in a variety of shapes and sizes and have complex internal and external structures. They are lightweight yet strong and hard and serve multiple functions. Bone tissue, which is also called bone in the uncountable sense of that word, is hard tissue, a type of specialised connective tissue. It has a honeycomb-like matrix internally, which helps to give the bone rigidity. Bone tissue is made up of different types of bone cells. Osteoblasts and osteocytes are involved in the formation and mineralisation of bone; osteoclasts are involved in the resorption of bone tissue. Modified (flattened) osteoblasts become the lining cells that form a protective layer on the bone surface. The mineralised matrix of bone tissue has an organic component of mainly collagen called ossein and an inorganic component of bone mineral made up of various salts. Bone tissue is mineralized tissue of two types, cortical bone and cancellous bone. Other types of tissue found in bones include bone marrow, endosteum, periosteum, nerves, blood vessels and cartilage. In the human body at birth, there are approximately 300 bones present; many of these fuse together during development, leaving a total of 206 separate bones in the adult, not counting numerous small sesamoid bones. The largest bone in the body is the femur or thigh-bone, and the smallest is the stapes in the middle ear. The Greek word for bone is ὀστέον ("osteon"), hence the many terms that use it as a prefix—such as osteopathy. In anatomical terminology, including the Terminologia Anatomica international standard, the word for a bone is os. | 2001-08-28T10:31:49Z | 2023-12-06T04:47:09Z | [
"Template:Anchor",
"Template:Cvt",
"Template:Further",
"Template:Reflist",
"Template:Wikiquote",
"Template:Short description",
"Template:Pp-move-indef",
"Template:See also",
"Template:Citation",
"Template:Fractures",
"Template:Sfn",
"Template:CC-notice",
"Template:Main",
"Template:Commons category",
"Template:Bone and cartilage",
"Template:Hatgrp",
"Template:Use dmy dates",
"Template:As of",
"Template:Doi",
"Template:Authority control",
"Template:Infobox anatomy",
"Template:Cite book",
"Template:Cite journal",
"Template:Webarchive",
"Template:ISBN",
"Template:Human bones",
"Template:Citation needed",
"Template:Clear",
"Template:Cite web",
"Template:Harvnb"
] | https://en.wikipedia.org/wiki/Bone |
4,100 | Bretwalda | Bretwalda (also brytenwalda and bretenanwealda, sometimes capitalised) is an Old English word. The first record comes from the late 9th-century Anglo-Saxon Chronicle. It is given to some of the rulers of Anglo-Saxon kingdoms from the 5th century onwards who had achieved overlordship of some or all of the other Anglo-Saxon kingdoms. It is unclear whether the word dates back to the 5th century and was used by the kings themselves or whether it is a later, 9th-century, invention. The term bretwalda also appears in a 10th-century charter of Æthelstan. The literal meaning of the word is disputed and may translate to either 'wide-ruler' or 'Britain-ruler'.
The rulers of Mercia were generally the most powerful of the Anglo-Saxon kings from the mid 7th century to the early 9th century but are not accorded the title of bretwalda by the Chronicle, which had an anti-Mercian bias. The Annals of Wales continued to recognise the kings of Northumbria as "Kings of the Saxons" until the death of Osred I of Northumbria in 716.
The first syllable of the term bretwalda may be related to Briton or Britain. The second element is taken to mean 'ruler' or 'sovereign', though is more literally 'wielder'. Thus, this interpretation would mean 'sovereign of Britain' or 'wielder of Britain'. The word may be a compound containing the Old English adjective brytten (from the verb breotan meaning 'to break' or 'to disperse'), an element also found in the terms bryten rice ('kingdom'), bryten-grund ('the wide expanse of the earth') and bryten cyning ('king whose authority was widely extended'). Though the origin is ambiguous, the draughtsman of the charter issued by Æthelstan used the term in a way that can only mean 'wide-ruler'.
The latter etymology was first suggested by John Mitchell Kemble, who pointed out that "of six manuscripts in which this passage occurs, one only reads Bretwalda: of the remaining five, four have Bryten-walda or -wealda, and one Breten-anweald, which is precisely synonymous with Brytenwealda"; that Æthelstan was called brytenwealda ealles ðyses ealondes, which Kemble translates as 'ruler of all these islands'; and that bryten- is a common prefix to words meaning 'wide or general dispersion' and that the similarity to the word bretwealh ('Briton') is "merely accidental".
The first recorded use of the term Bretwalda comes from a West Saxon chronicle of the late 9th century that applied the term to Ecgberht, who ruled Wessex from 802 to 839. The chronicler also wrote down the names of seven kings that Bede listed in his Historia ecclesiastica gentis Anglorum in 731. All subsequent manuscripts of the Chronicle use the term Brytenwalda, which may have represented the original term or derived from a common error.
There is no evidence that the term was a title that had any practical use, with implications of formal rights, powers and office, or even that it had any existence before the 9th century. Bede wrote in Latin and never used the term and his list of kings holding imperium should be treated with caution, not least in that he overlooks kings such as Penda of Mercia, who clearly held some kind of dominance during his reign. Similarly, in his list of bretwaldas, the West Saxon chronicler ignored such Mercian kings as Offa.
The use of the term Bretwalda was the attempt by a West Saxon chronicler to make some claim of West Saxon kings to the whole of Great Britain. The concept of the overlordship of the whole of Britain was at least recognised in the period, whatever was meant by the term. Quite possibly it was a survival of a Roman concept of "Britain": it is significant that, while the hyperbolic inscriptions on coins and titles in charters often included the title rex Britanniae, when England was unified the title used was rex Angulsaxonum, ('king of the Anglo-Saxons'.)
For some time, the existence of the word bretwalda in the Anglo-Saxon Chronicle, which was based in part on the list given by Bede in his Historia Ecclesiastica, led historians to think that there was perhaps a "title" held by Anglo-Saxon overlords. This was particularly attractive as it would lay the foundations for the establishment of an English monarchy. The 20th-century historian Frank Stenton said of the Anglo-Saxon chronicler that "his inaccuracy is more than compensated by his preservation of the English title applied to these outstanding kings". He argued that the term bretwalda "falls into line with the other evidence which points to the Germanic origin of the earliest English institutions".
Over the later 20th century, this assumption was increasingly challenged. Patrick Wormald interpreted it as "less an objectively realized office than a subjectively perceived status" and emphasised the partiality of its usage in favour of Southumbrian rulers. In 1991, Steven Fanning argued that "it is unlikely that the term ever existed as a title or was in common usage in Anglo-Saxon England". The fact that Bede never mentioned a special title for the kings in his list implies that he was unaware of one. In 1995, Simon Keynes observed that "if Bede's concept of the Southumbrian overlord, and the chronicler's concept of the 'Bretwalda', are to be regarded as artificial constructs, which have no validity outside the context of the literary works in which they appear, we are released from the assumptions about political development which they seem to involve... we might ask whether kings in the eighth and ninth centuries were quite so obsessed with the establishment of a pan-Southumbrian state".
Modern interpretations view the concept of bretwalda overlordship as complex and an important indicator of how a 9th-century chronicler interpreted history and attempted to insert the increasingly powerful Saxon kings into that history.
A complex array of dominance and subservience existed during the Anglo-Saxon period. A king who used charters to grant land in another kingdom indicated such a relationship. If the other kingdom were fairly large, as when the Mercians dominated the East Anglians, the relationship would have been more equal than in the case of the Mercian dominance of the Hwicce, which was a comparatively small kingdom. Mercia was arguably the most powerful Anglo-Saxon kingdom for much of the late 7th through 8th centuries, though Mercian kings are missing from the two main "lists". For Bede, Mercia was a traditional enemy of his native Northumbria and he regarded powerful kings such as the pagan Penda as standing in the way of the Christian conversion of the Anglo-Saxons. Bede omits them from his list, even though it is evident that Penda held a considerable degree of power. Similarly, powerful Mercian kings such as Offa are omitted from the West Saxon Anglo-Saxon Chronicle, which sought to demonstrate the legitimacy of its kings to rule over other Anglo-Saxon peoples.
{
"paragraph_id": 0,
"text": "Bretwalda (also brytenwalda and bretenanwealda, sometimes capitalised) is an Old English word. The first record comes from the late 9th-century Anglo-Saxon Chronicle. It is given to some of the rulers of Anglo-Saxon kingdoms from the 5th century onwards who had achieved overlordship of some or all of the other Anglo-Saxon kingdoms. It is unclear whether the word dates back to the 5th century and was used by the kings themselves or whether it is a later, 9th-century, invention. The term bretwalda also appears in a 10th-century charter of Æthelstan. The literal meaning of the word is disputed and may translate to either 'wide-ruler' or 'Britain-ruler'.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The rulers of Mercia were generally the most powerful of the Anglo-Saxon kings from the mid 7th century to the early 9th century but are not accorded the title of bretwalda by the Chronicle, which had an anti-Mercian bias. The Annals of Wales continued to recognise the kings of Northumbria as \"Kings of the Saxons\" until the death of Osred I of Northumbria in 716.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The first syllable of the term bretwalda may be related to Briton or Britain. The second element is taken to mean 'ruler' or 'sovereign', though is more literally 'wielder'. Thus, this interpretation would mean 'sovereign of Britain' or 'wielder of Britain'. The word may be a compound containing the Old English adjective brytten (from the verb breotan meaning 'to break' or 'to disperse'), an element also found in the terms bryten rice ('kingdom'), bryten-grund ('the wide expanse of the earth') and bryten cyning ('king whose authority was widely extended'). Though the origin is ambiguous, the draughtsman of the charter issued by Æthelstan used the term in a way that can only mean 'wide-ruler'.",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "The latter etymology was first suggested by John Mitchell Kemble who alluded that \"of six manuscripts in which this passage occurs, one only reads Bretwalda: of the remaining five, four have Bryten-walda or -wealda, and one Breten-anweald, which is precisely synonymous with Brytenwealda\"; that Æthelstan was called brytenwealda ealles ðyses ealondes, which Kemble translates as 'ruler of all these islands'; and that bryten- is a common prefix to words meaning 'wide or general dispersion' and that the similarity to the word bretwealh ('Briton') is \"merely accidental\".",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "The first recorded use of the term Bretwalda comes from a West Saxon chronicle of the late 9th century that applied the term to Ecgberht, who ruled Wessex from 802 to 839. The chronicler also wrote down the names of seven kings that Bede listed in his Historia ecclesiastica gentis Anglorum in 731. All subsequent manuscripts of the Chronicle use the term Brytenwalda, which may have represented the original term or derived from a common error.",
"title": "Contemporary use"
},
{
"paragraph_id": 5,
"text": "There is no evidence that the term was a title that had any practical use, with implications of formal rights, powers and office, or even that it had any existence before the 9th-century. Bede wrote in Latin and never used the term and his list of kings holding imperium should be treated with caution, not least in that he overlooks kings such as Penda of Mercia, who clearly held some kind of dominance during his reign. Similarly, in his list of bretwaldas, the West Saxon chronicler ignored such Mercian kings as Offa.",
"title": "Contemporary use"
},
{
"paragraph_id": 6,
"text": "The use of the term Bretwalda was the attempt by a West Saxon chronicler to make some claim of West Saxon kings to the whole of Great Britain. The concept of the overlordship of the whole of Britain was at least recognised in the period, whatever was meant by the term. Quite possibly it was a survival of a Roman concept of \"Britain\": it is significant that, while the hyperbolic inscriptions on coins and titles in charters often included the title rex Britanniae, when England was unified the title used was rex Angulsaxonum, ('king of the Anglo-Saxons'.)",
"title": "Contemporary use"
},
{
"paragraph_id": 7,
"text": "For some time, the existence of the word bretwalda in the Anglo-Saxon Chronicle, which was based in part on the list given by Bede in his Historia Ecclesiastica, led historians to think that there was perhaps a \"title\" held by Anglo-Saxon overlords. This was particularly attractive as it would lay the foundations for the establishment of an English monarchy. The 20th-century historian Frank Stenton said of the Anglo-Saxon chronicler that \"his inaccuracy is more than compensated by his preservation of the English title applied to these outstanding kings\". He argued that the term bretwalda \"falls into line with the other evidence which points to the Germanic origin of the earliest English institutions\".",
"title": "Modern interpretation by historians"
},
{
"paragraph_id": 8,
"text": "Over the later 20th century, this assumption was increasingly challenged. Patrick Wormald interpreted it as \"less an objectively realized office than a subjectively perceived status\" and emphasised the partiality of its usage in favour of Southumbrian rulers. In 1991, Steven Fanning argued that \"it is unlikely that the term ever existed as a title or was in common usage in Anglo-Saxon England\". The fact that Bede never mentioned a special title for the kings in his list implies that he was unaware of one. In 1995, Simon Keynes observed that \"if Bede's concept of the Southumbrian overlord, and the chronicler's concept of the 'Bretwalda', are to be regarded as artificial constructs, which have no validity outside the context of the literary works in which they appear, we are released from the assumptions about political development which they seem to involve... we might ask whether kings in the eighth and ninth centuries were quite so obsessed with the establishment of a pan-Southumbrian state\".",
"title": "Modern interpretation by historians"
},
{
"paragraph_id": 9,
"text": "Modern interpretations view the concept of bretwalda overlordship as complex and an important indicator of how a 9th-century chronicler interpreted history and attempted to insert the increasingly powerful Saxon kings into that history.",
"title": "Modern interpretation by historians"
},
{
"paragraph_id": 10,
"text": "A complex array of dominance and subservience existed during the Anglo-Saxon period. A king who used charters to grant land in another kingdom indicated such a relationship. If the other kingdom were fairly large, as when the Mercians dominated the East Anglians, the relationship would have been more equal than in the case of the Mercian dominance of the Hwicce, which was a comparatively small kingdom. Mercia was arguably the most powerful Anglo-Saxon kingdom for much of the late 7th though 8th centuries, though Mercian kings are missing from the two main \"lists\". For Bede, Mercia was a traditional enemy of his native Northumbria and he regarded powerful kings such as the pagan Penda as standing in the way of the Christian conversion of the Anglo-Saxons. Bede omits them from his list, even though it is evident that Penda held a considerable degree of power. Similarly powerful Mercia kings such as Offa are missed out of the West Saxon Anglo-Saxon Chronicle, which sought to demonstrate the legitimacy of their kings to rule over other Anglo-Saxon peoples.",
"title": "Overlordship"
}
] | Bretwalda is an Old English word. The first record comes from the late 9th-century Anglo-Saxon Chronicle. It is given to some of the rulers of Anglo-Saxon kingdoms from the 5th century onwards who had achieved overlordship of some or all of the other Anglo-Saxon kingdoms. It is unclear whether the word dates back to the 5th century and was used by the kings themselves or whether it is a later, 9th-century, invention. The term bretwalda also appears in a 10th-century charter of Æthelstan. The literal meaning of the word is disputed and may translate to either 'wide-ruler' or 'Britain-ruler'. The rulers of Mercia were generally the most powerful of the Anglo-Saxon kings from the mid 7th century to the early 9th century but are not accorded the title of bretwalda by the Chronicle, which had an anti-Mercian bias. The Annals of Wales continued to recognise the kings of Northumbria as "Kings of the Saxons" until the death of Osred I of Northumbria in 716. | 2001-10-16T10:49:23Z | 2023-10-04T22:01:48Z | [
"Template:Short description",
"Template:Circa",
"Template:Cite book",
"Template:Bretwalda",
"Template:Use dmy dates",
"Template:Use British English",
"Template:Rp",
"Template:Reflist",
"Template:Citation",
"Template:Wikisource1911Enc"
] | https://en.wikipedia.org/wiki/Bretwalda |
4,101 | Brouwer fixed-point theorem | Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function $f$ mapping a nonempty compact convex set to itself, there is a point $x_{0}$ such that $f(x_{0})=x_{0}$. The simplest forms of Brouwer's theorem are for continuous functions $f$ from a closed interval $I$ in the real numbers to itself or from a closed disk $D$ to itself. A more general form than the latter is for continuous functions from a nonempty convex compact subset $K$ of Euclidean space to itself.
Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics. In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem, the invariance of dimension and the Borsuk–Ulam theorem. This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry. It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu.
The theorem was first studied in view of work on differential equations by the French mathematicians around Henri Poincaré and Charles Émile Picard. Proving results such as the Poincaré–Bendixson theorem requires the use of topological methods. This work at the end of the 19th century opened into several successive versions of the theorem. The case of differentiable mappings of the n-dimensional closed ball was first proved in 1910 by Jacques Hadamard and the general case for continuous mappings by Brouwer in 1911.
The theorem has several formulations, depending on the context in which it is used and its degree of generalization. The simplest is sometimes given as follows: in the plane, every continuous function from a closed disk to itself has at least one fixed point.
This can be generalized to an arbitrary finite dimension: every continuous function from a closed ball of a Euclidean space into itself has a fixed point.
A slightly more general version is as follows: every continuous function from a nonempty convex compact subset $K$ of a Euclidean space to $K$ itself has a fixed point.
An even more general form is better known under a different name, the Schauder fixed-point theorem: every continuous function from a nonempty convex compact subset $K$ of a Banach space to $K$ itself has a fixed point.
The theorem holds only for functions that are endomorphisms (functions that have the same set as the domain and codomain) and for nonempty sets that are compact (thus, in particular, bounded and closed) and convex (or homeomorphic to convex). The following examples show why the pre-conditions are important.
Consider the function f(x) = x + 1 with domain [-1,1]. The range of the function is [0,2]. Thus, f is not an endomorphism.
Consider the function f(x) = x + 1, which is a continuous function from $\mathbb {R}$ to itself. As it shifts every point to the right, it cannot have a fixed point. The space $\mathbb {R}$ is convex and closed, but not bounded.
Consider the function f(x) = (x + 1)/2, which is a continuous function from the open interval (−1,1) to itself. Since x = 1 is not part of the interval, there is no point x in (−1,1) with f(x) = x. The space (−1,1) is convex and bounded, but not closed. On the other hand, the function f does have a fixed point for the closed interval [−1,1], namely f(1) = 1.
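A quick numerical check can make the last example concrete. The map f(x) = (x + 1)/2 used above is a reconstruction of the omitted formula that matches the stated properties; the snippet below, a sketch rather than a proof, iterates it and shows the orbit drifting toward the excluded boundary point 1:

```python
# Iterating f(x) = (x + 1) / 2 on the open interval (-1, 1): the only
# solution of f(x) = x is x = 1, which lies outside the open interval,
# so no point of (-1, 1) is fixed even though iterates approach 1.
f = lambda x: (x + 1) / 2
x = 0.0
for _ in range(60):
    x = f(x)
print(x)              # ~1.0, the excluded boundary point
print(f(0.5) == 0.5)  # False: 0.5 is not a fixed point
```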
Convexity is not strictly necessary for Brouwer's fixed-point theorem. Because the properties involved (continuity, being a fixed point) are invariant under homeomorphisms, Brouwer's fixed-point theorem is equivalent to forms in which the domain is required to be a closed unit ball $D^{n}$. For the same reason it holds for every set that is homeomorphic to a closed ball (and therefore also closed, bounded, connected, without holes, etc.).
The following example shows that Brouwer's fixed-point theorem does not work for domains with holes. Consider the function f(x) = −x, which is a continuous function from the unit circle to itself. Since −x ≠ x holds for every point of the unit circle, f has no fixed point. The analogous example works for the n-dimensional sphere (or any symmetric domain that does not contain the origin). The unit circle is closed and bounded, but it has a hole (and so it is not convex). The function f does have a fixed point for the unit disc, since it takes the origin to itself.
A formal generalization of Brouwer's fixed-point theorem for "hole-free" domains can be derived from the Lefschetz fixed-point theorem.
The continuous function in this theorem is not required to be bijective or surjective.
The theorem has several "real world" illustrations. Here are some examples.
The theorem is supposed to have originated from Brouwer's observation of a cup of gourmet coffee. If one stirs to dissolve a lump of sugar, it appears there is always a point without motion. He drew the conclusion that at any moment, there is a point on the surface that is not moving. The fixed point is not necessarily the point that seems to be motionless, since the centre of the turbulence moves a little bit. The result is not intuitive, since the original fixed point may become mobile when another fixed point appears.
Brouwer is said to have added: "I can formulate this splendid result different, I take a horizontal sheet, and another identical one which I crumple, flatten and place on the other. Then a point of the crumpled sheet is in the same place as on the other sheet." Brouwer "flattens" his sheet as with a flat iron, without removing the folds and wrinkles. Unlike the coffee cup example, the crumpled paper example also demonstrates that more than one fixed point may exist. This distinguishes Brouwer's result from other fixed-point theorems, such as Stefan Banach's, that guarantee uniqueness.
In one dimension, the result is intuitive and easy to prove. The continuous function f is defined on a closed interval [a, b] and takes values in the same interval. Saying that this function has a fixed point amounts to saying that its graph (dark green in the figure on the right) intersects that of the function defined on the same interval [a, b] which maps x to x (light green).
Intuitively, any continuous line from the left edge of the square to the right edge must necessarily intersect the green diagonal. To prove this, consider the function g which maps x to f(x) − x. It is ≥ 0 at a and ≤ 0 at b. By the intermediate value theorem, g has a zero in [a, b]; this zero is a fixed point.
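The one-dimensional argument is effective: the sign change of g(x) = f(x) − x can be bisected, exactly as in a root-finding routine. Below is a minimal sketch; the function name and the example map cos are illustrative choices, not part of the original argument:

```python
import math

def fixed_point(f, a, b, tol=1e-12):
    """Locate a fixed point of a continuous f: [a, b] -> [a, b] by
    bisection on g(x) = f(x) - x, which satisfies g(a) >= 0 >= g(b)."""
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid >= 0:
            lo = mid   # keep the endpoint where g is non-negative
        else:
            hi = mid
    return (lo + hi) / 2

# cos maps [0, 1] into itself; its fixed point is the Dottie number.
print(fixed_point(math.cos, 0.0, 1.0))  # ~0.7390851332
```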
Brouwer is said to have expressed this as follows: "Instead of examining a surface, we will prove the theorem about a piece of string. Let us begin with the string in an unfolded state, then refold it. Let us flatten the refolded string. Again a point of the string has not changed its position with respect to its original position on the unfolded string."
The Brouwer fixed point theorem was one of the early achievements of algebraic topology, and is the basis of more general fixed point theorems which are important in functional analysis. The case n = 3 was first proved by Piers Bohl in 1904 (published in Journal für die reine und angewandte Mathematik). It was later proved by L. E. J. Brouwer in 1909. Jacques Hadamard proved the general case in 1910, and Brouwer found a different proof in the same year. Since these early proofs were all non-constructive indirect proofs, they ran contrary to Brouwer's intuitionist ideals. Although the existence of a fixed point is not constructive in the sense of constructivism in mathematics, methods to approximate fixed points guaranteed by Brouwer's theorem are now known.
At the end of the 19th century, the old problem of the stability of the solar system returned into the focus of the mathematical community. Its solution required new methods. As noted by Henri Poincaré, who worked on the three-body problem, there is no hope to find an exact solution: "Nothing is more proper to give us an idea of the hardness of the three-body problem, and generally of all problems of Dynamics where there is no uniform integral and the Bohlin series diverge." He also noted that the search for an approximate solution is no more efficient: "the more we seek to obtain precise approximations, the more the result will diverge towards an increasing imprecision".
He studied a question analogous to that of the surface movement in a cup of coffee. What can we say, in general, about the trajectories on a surface animated by a constant flow? Poincaré discovered that the answer can be found in what we now call the topological properties in the area containing the trajectory. If this area is compact, i.e. both closed and bounded, then the trajectory either becomes stationary, or it approaches a limit cycle. Poincaré went further; if the area is of the same kind as a disk, as is the case for the cup of coffee, there must necessarily be a fixed point. This fixed point is invariant under all functions which associate to each point of the original surface its position after a short time interval t. If the area is a circular band, or if it is not closed, then this is not necessarily the case.
To understand differential equations better, a new branch of mathematics was born. Poincaré called it analysis situs. The French Encyclopædia Universalis defines it as the branch which "treats the properties of an object that are invariant if it is deformed in any continuous way, without tearing". In 1886, Poincaré proved a result that is equivalent to Brouwer's fixed-point theorem, although the connection with the subject of this article was not yet apparent. A little later, he developed one of the fundamental tools for better understanding the analysis situs, now known as the fundamental group or sometimes the Poincaré group. This method can be used for a very compact proof of the theorem under discussion.
Poincaré's method was analogous to that of Émile Picard, a contemporary mathematician who generalized the Cauchy–Lipschitz theorem. Picard's approach is based on a result that would later be formalised by another fixed-point theorem, named after Banach. Instead of the topological properties of the domain, this theorem uses the fact that the function in question is a contraction.
At the dawn of the 20th century, the interest in analysis situs did not stay unnoticed. However, the necessity of a theorem equivalent to the one discussed in this article was not yet evident. Piers Bohl, a Latvian mathematician, applied topological methods to the study of differential equations. In 1904 he proved the three-dimensional case of our theorem, but his publication was not noticed.
It was Brouwer, finally, who gave the theorem its first patent of nobility. His goals were different from those of Poincaré. This mathematician was inspired by the foundations of mathematics, especially mathematical logic and topology. His initial interest lay in an attempt to solve Hilbert's fifth problem. In 1909, during a voyage to Paris, he met Henri Poincaré, Jacques Hadamard, and Émile Borel. The ensuing discussions convinced Brouwer of the importance of a better understanding of Euclidean spaces, and were the origin of a fruitful exchange of letters with Hadamard. For the next four years, he concentrated on the proof of certain great theorems on this question. In 1912 he proved the hairy ball theorem for the two-dimensional sphere, as well as the fact that every continuous map from the two-dimensional ball to itself has a fixed point. These two results in themselves were not really new. As Hadamard observed, Poincaré had shown a theorem equivalent to the hairy ball theorem. The revolutionary aspect of Brouwer's approach was his systematic use of recently developed tools such as homotopy, the underlying concept of the Poincaré group. In the following year, Hadamard generalised the theorem under discussion to an arbitrary finite dimension, but he employed different methods. Hans Freudenthal comments on the respective roles as follows: "Compared to Brouwer's revolutionary methods, those of Hadamard were very traditional, but Hadamard's participation in the birth of Brouwer's ideas resembles that of a midwife more than that of a mere spectator."
Brouwer's approach yielded its fruits, and in 1910 he also found a proof that was valid for any finite dimension, as well as other key theorems such as the invariance of dimension. In the context of this work, Brouwer also generalized the Jordan curve theorem to arbitrary dimension and established the properties connected with the degree of a continuous mapping. This branch of mathematics, originally envisioned by Poincaré and developed by Brouwer, changed its name. In the 1930s, analysis situs became algebraic topology.
The theorem proved its worth in more than one way. During the 20th century numerous fixed-point theorems were developed, and even a branch of mathematics called fixed-point theory. Brouwer's theorem is probably the most important. It is also among the foundational theorems on the topology of topological manifolds and is often used to prove other important results such as the Jordan curve theorem.
Besides the fixed-point theorems for more or less contracting functions, there are many that have emerged directly or indirectly from the result under discussion. A continuous map from a closed ball of Euclidean space to its boundary cannot be the identity on the boundary. Similarly, the Borsuk–Ulam theorem says that a continuous map from the n-dimensional sphere to $\mathbb {R} ^{n}$ has a pair of antipodal points that are mapped to the same point. In the finite-dimensional case, the Lefschetz fixed-point theorem provided from 1926 a method for counting fixed points. In 1930, Brouwer's fixed-point theorem was generalized to Banach spaces. This generalization is known as Schauder's fixed-point theorem, a result generalized further by S. Kakutani to set-valued functions. One also meets the theorem and its variants outside topology. It can be used to prove the Hartman-Grobman theorem, which describes the qualitative behaviour of certain differential equations near certain equilibria. Similarly, Brouwer's theorem is used for the proof of the Central Limit Theorem. The theorem can also be found in existence proofs for the solutions of certain partial differential equations.
Other areas are also touched. In game theory, John Nash used the theorem to prove that in the game of Hex there is a winning strategy for white. In economics, P. Bich explains that certain generalizations of the theorem show that its use is helpful for certain classical problems in game theory and generally for equilibria (Hotelling's law), financial equilibria and incomplete markets.
Brouwer's celebrity is not exclusively due to his topological work. The proofs of his great topological theorems are not constructive, and Brouwer's dissatisfaction with this is partly what led him to articulate the idea of constructivity. He became the originator and zealous defender of a way of formalising mathematics that is known as intuitionism, which at the time made a stand against set theory. Brouwer disavowed his original proof of the fixed-point theorem.
Brouwer's original 1911 proof relied on the notion of the degree of a continuous mapping, stemming from ideas in differential topology. Several modern accounts of the proof can be found in the literature, notably Milnor (1965).
Let $K={\overline {B(0)}}$ denote the closed unit ball in $\mathbb {R} ^{n}$ centered at the origin. Suppose for simplicity that $f:K\to K$ is continuously differentiable. A regular value of $f$ is a point $p\in B(0)$ such that the Jacobian of $f$ is non-singular at every point of the preimage of $p$. In particular, by the inverse function theorem, every point of the preimage of $p$ lies in $B(0)$ (the interior of $K$). The degree of $f$ at a regular value $p\in B(0)$ is defined as the sum of the signs of the Jacobian determinant of $f$ over the preimages of $p$ under $f$: $\deg _{p}(f)=\sum _{x\in f^{-1}(p)}\operatorname {sign} \det(Df(x)).$
The degree is, roughly speaking, the number of "sheets" of the preimage of p lying over a small open set around p, with sheets counted oppositely if they are oppositely oriented. This is thus a generalization of winding number to higher dimensions.
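In one dimension the definition can be carried out numerically. The sketch below is illustrative only — the sample-and-scan preimage search and the example map are my own choices — and sums the signs of the derivative over the preimages of a regular value:

```python
import numpy as np

def degree_1d(f, fprime, p, a, b, samples=100_001):
    """Degree of a smooth map f: [a, b] -> R at a regular value p:
    the sum of the signs of f'(x) over the preimages x of p.
    Preimages are located by a crude sign-change scan."""
    xs = np.linspace(a, b, samples)
    vals = f(xs) - p
    deg = 0
    for i in range(samples - 1):
        if vals[i] == 0 or vals[i] * vals[i + 1] < 0:
            deg += int(np.sign(fprime(xs[i])))
    return deg

# f(x) = x^3 - x hits p = 0 three times, with derivative signs +, -, +,
# so the degree at 0 is 1.
print(degree_1d(lambda x: x**3 - x, lambda x: 3 * x**2 - 1, 0.0, -2.0, 2.0))
```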
The degree satisfies the property of homotopy invariance: let $f$ and $g$ be two continuously differentiable functions, and $H_{t}(x)=tf+(1-t)g$ for $0\leq t\leq 1$. Suppose that the point $p$ is a regular value of $H_{t}$ for all t. Then $\deg _{p}f=\deg _{p}g$.
If there is no fixed point of f on the boundary of $K$, then the function

$g(x)={\frac {x-f(x)}{\sup _{x\in K}\left|x-f(x)\right|}}$

is well-defined, and

$H(t,x)={\frac {x-tf(x)}{\sup _{x\in K}\left|x-tf(x)\right|}}$
defines a homotopy from the identity function to it. The identity function has degree one at every point. In particular, the identity function has degree one at the origin, so g {\displaystyle g} also has degree one at the origin. As a consequence, the preimage g − 1 ( 0 ) {\displaystyle g^{-1}(0)} is not empty. The elements of g − 1 ( 0 ) {\displaystyle g^{-1}(0)} are precisely the fixed points of the original function f.
This requires some work to make fully general. The definition of degree must be extended to singular values of f, and then to continuous functions. The more modern advent of homology theory simplifies the construction of the degree, and so has become a standard proof in the literature.
The hairy ball theorem states that on the unit sphere S in an odd-dimensional Euclidean space, there is no nowhere-vanishing continuous tangent vector field w on S. (The tangency condition means that w(x) ⋅ x = 0 for every unit vector x.) Sometimes the theorem is expressed by the statement that "there is always a place on the globe with no wind". An elementary proof of the hairy ball theorem can be found in Milnor (1978).
In fact, suppose first that w is continuously differentiable. By scaling, it can be assumed that w is a continuously differentiable unit tangent vector on S. It can be extended radially to a small spherical shell A of S. For t sufficiently small, a routine computation shows that the mapping $f_{t}(x)=x+t\,w(x)$ is a contraction mapping on A and that the volume of its image is a polynomial in t. On the other hand, as a contraction mapping, $f_{t}$ must restrict to a homeomorphism of S onto $(1+t^{2})^{1/2}S$ and A onto $(1+t^{2})^{1/2}A$. This gives a contradiction, because, if the dimension n of the Euclidean space is odd, $(1+t^{2})^{n/2}$ is not a polynomial.
If w is only a continuous unit tangent vector on S, by the Weierstrass approximation theorem, it can be uniformly approximated by a polynomial map u of A into Euclidean space. The orthogonal projection onto the tangent space is given by $v(x)=u(x)-(u(x)\cdot x)\,x$. Thus v is polynomial and nowhere vanishing on A; by construction v/||v|| is a smooth unit tangent vector field on S, a contradiction.
The continuous version of the hairy ball theorem can now be used to prove the Brouwer fixed point theorem. First suppose that n is even. If there were a fixed-point-free continuous self-mapping f of the closed unit ball B of the n-dimensional Euclidean space V, set $w(x)=x-f(x).$
Since f has no fixed points, it follows that, for x in the interior of B, the vector w(x) is non-zero; and for x in S, the scalar product x ⋅ w(x) = 1 − x ⋅ f(x) is strictly positive. From the original n-dimensional Euclidean space V, construct a new auxiliary (n + 1)-dimensional space W = V × R, with coordinates y = (x, t). Set $X(y)=X(x,t)=(t\,w(x),\,-x\cdot w(x)).$
By construction X is a continuous vector field on the unit sphere of W, satisfying the tangency condition y ⋅ X(y) = 0. Moreover, X(y) is nowhere vanishing (because, if x has norm 1, then x ⋅ w(x) is non-zero; while if x has norm strictly less than 1, then t and w(x) are both non-zero). This contradiction proves the fixed point theorem when n is even. For n odd, one can apply the fixed point theorem to the closed unit ball B in n + 1 dimensions and the mapping F(x,y) = (f(x),0). The advantage of this proof is that it uses only elementary techniques; more general results like the Borsuk-Ulam theorem require tools from algebraic topology.
The proof uses the observation that the boundary of the n-disk D is S, the (n − 1)-sphere.
Suppose, for contradiction, that a continuous function f : D → D has no fixed point. This means that, for every point x in D, the points x and f(x) are distinct. Because they are distinct, for every point x in D, we can construct a unique ray from f(x) to x and follow the ray until it intersects the boundary S (see illustration). By calling this intersection point F(x), we define a function F : D → S sending each point in the disk to its corresponding intersection point on the boundary. As a special case, whenever x itself is on the boundary, then the intersection point F(x) must be x.
Consequently, F is a special type of continuous function known as a retraction: every point of the codomain (in this case S) is a fixed point of F.
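The retraction F can be written down explicitly: the ray from f(x) through x meets the unit sphere where a quadratic in the ray parameter vanishes. A small sketch follows (the helper name and sample points are hypothetical, and a fixed-point-free f is assumed):

```python
import numpy as np

def retraction(x, fx):
    """Point where the ray from f(x) through x meets the unit sphere:
    solve ||fx + s*(x - fx)|| = 1 for the root s >= 1 of the quadratic.
    Assumes x != f(x), i.e. f has no fixed point at x."""
    d = x - fx
    a = d @ d
    b = 2.0 * (fx @ d)
    c = fx @ fx - 1.0
    s = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)  # larger root, s >= 1
    return fx + s * d

x = np.array([0.3, -0.2])
fx = np.array([-0.5, 0.1])      # a hypothetical image f(x) distinct from x
F = retraction(x, fx)
print(F, np.linalg.norm(F))     # the result lies on the unit circle
```

Note that if x already lies on the boundary, the larger root is s = 1 and the formula returns x itself, as a retraction requires.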
Intuitively it seems unlikely that there could be a retraction of D onto S, and in the case n = 1, the impossibility is more basic, because S (i.e., the endpoints of the closed interval D) is not even connected. The case n = 2 is less obvious, but can be proven by using basic arguments involving the fundamental groups of the respective spaces: the retraction would induce a surjective group homomorphism from the fundamental group of D to that of S, but the latter group is isomorphic to Z while the first group is trivial, so this is impossible. The case n = 2 can also be proven by contradiction based on a theorem about non-vanishing vector fields.
For n > 2, however, proving the impossibility of the retraction is more difficult. One way is to make use of homology groups: the homology $H_{n-1}(D)$ is trivial, while $H_{n-1}(S)$ is infinite cyclic. This shows that the retraction is impossible, because again the retraction would induce an injective group homomorphism from the latter to the former group.
The impossibility of a retraction can also be shown using the de Rham cohomology of open subsets of Euclidean space E. For n ≥ 2, the de Rham cohomology of U = E − {0} is one-dimensional in degrees 0 and n − 1, and vanishes otherwise. If a retraction existed, then U would have to be contractible and its de Rham cohomology in degree n − 1 would have to vanish, a contradiction.
As in the proof of Brouwer's fixed-point theorem for continuous maps using homology, it is reduced to proving that there is no continuous retraction F from the ball B onto its boundary ∂B. In that case it can be assumed that F is smooth, since it can be approximated using the Weierstrass approximation theorem or by convolving with non-negative smooth bump functions of sufficiently small support and integral one (i.e. mollifying). If ω is a volume form on the boundary then by Stokes' theorem,

$0<\int _{\partial B}\omega =\int _{\partial B}F^{*}(\omega )=\int _{B}dF^{*}(\omega )=\int _{B}F^{*}(d\omega )=\int _{B}F^{*}(0)=0,$

giving a contradiction.
More generally, this shows that there is no smooth retraction from any non-empty smooth oriented compact manifold M onto its boundary. The proof using Stokes' theorem is closely related to the proof using homology, because the form ω generates the de Rham cohomology group $H^{n-1}(\partial M)$ which is isomorphic to the homology group $H_{n-1}(\partial M)$ by de Rham's theorem.
The BFPT can be proved using Sperner's lemma. We now give an outline of the proof for the special case in which f is a function from the standard n-simplex, $\Delta ^{n}$, to itself, where $\Delta ^{n}=\left\{P\in \mathbb {R} ^{n+1}\mid \sum _{i=0}^{n}P_{i}=1{\text{ and }}P_{i}\geq 0{\text{ for all }}i\right\}.$
For every point $P\in \Delta ^{n}$, also $f(P)\in \Delta ^{n}$. Hence the sum of their coordinates is equal: $\sum _{i=0}^{n}P_{i}=1=\sum _{i=0}^{n}f(P)_{i}.$
Hence, by the pigeonhole principle, for every $P\in \Delta ^{n}$, there must be an index $j\in \{0,\ldots ,n\}$ such that the $j$th coordinate of $P$ is greater than or equal to the $j$th coordinate of its image under f: $f(P)_{j}\leq P_{j}.$
Moreover, if $P$ lies on a k-dimensional sub-face of $\Delta ^{n}$, then by the same argument, the index $j$ can be selected from among the k + 1 coordinates which are not zero on this sub-face.
We now use this fact to construct a Sperner coloring. For every triangulation of $\Delta ^{n}$, the color of every vertex $P$ is an index $j$ such that $f(P)_{j}\leq P_{j}$.
By construction, this is a Sperner coloring. Hence, by Sperner's lemma, there is an n-dimensional simplex whose vertices are colored with the entire set of n + 1 available colors.
Because f is continuous, this simplex can be made arbitrarily small by choosing an arbitrarily fine triangulation. Hence, there must be a point $P$ which satisfies the labeling condition in all coordinates: $f(P)_{j}\leq P_{j}$ for all $j$.
Because the sum of the coordinates of $P$ and $f(P)$ must be equal, all these inequalities must actually be equalities. But this means that: $f(P)_{j}=P_{j}$ for all $j$.
That is, $P$ is a fixed point of $f$.
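For n = 1 the whole Sperner argument fits in a few lines of code: colour each vertex of a fine subdivision of the 1-simplex, then return a cell carrying both colours. A minimal sketch, in which the subdivision size and the example swap map are illustrative assumptions:

```python
def sperner_fixed_point(f, n_pieces=100_000):
    """Approximate a fixed point of a continuous f: Delta^1 -> Delta^1,
    where Delta^1 = {(1 - t, t) : 0 <= t <= 1}. A vertex P is coloured
    j when f(P)[j] <= P[j]; a 1-cell whose endpoints carry both colours
    brackets an approximate fixed point (Sperner's lemma)."""
    def colour(t):
        P = (1.0 - t, t)
        return 0 if f(P)[0] <= P[0] else 1
    prev = colour(0.0)
    for k in range(1, n_pieces + 1):
        t = k / n_pieces
        c = colour(t)
        if c != prev:              # fully coloured cell found
            return (1.0 - t, t)
        prev = c
    return (0.0, 1.0)

# Swapping the barycentric coordinates is continuous on Delta^1;
# its unique fixed point is (1/2, 1/2).
print(sperner_fixed_point(lambda P: (P[1], P[0])))
```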
There is also a quick proof, by Morris Hirsch, based on the impossibility of a differentiable retraction. The indirect proof starts by noting that the map f can be approximated by a smooth map retaining the property of not fixing a point; this can be done by using the Weierstrass approximation theorem or by convolving with smooth bump functions. One then defines a retraction as above which must now be differentiable. Such a retraction must have a non-singular value, by Sard's theorem, which is also non-singular for the restriction to the boundary (which is just the identity). Thus the inverse image would be a 1-manifold with boundary. The boundary would have to contain at least two end points, both of which would have to lie on the boundary of the original ball—which is impossible in a retraction.
R. Bruce Kellogg, Tien-Yien Li, and James A. Yorke turned Hirsch's proof into a computable proof by observing that the retract is in fact defined everywhere except at the fixed points. For almost any point q on the boundary (assuming it is not a fixed point), the one-manifold with boundary mentioned above does exist, and the only possibility is that it leads from q to a fixed point. It is an easy numerical task to follow such a path from q to the fixed point, so the method is essentially computable. Later authors gave a conceptually similar path-following version of the homotopy proof which extends to a wide variety of related problems.
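Path-following can be imitated in a few lines with a naive homotopy continuation for x = t·f(x), sweeping t from 0 to 1 and re-solving from the previous solution. This is only a sketch under strong assumptions (the example map is a contraction into the unit disk), not the Kellogg–Li–Yorke algorithm itself:

```python
import numpy as np

def follow_homotopy(f, x0, steps=100, inner_iters=20):
    """Track the solution of x = t*f(x) from the trivial solution at
    t = 0 to t = 1, solving each stage by fixed-point iteration seeded
    with the previous stage's solution."""
    x = np.asarray(x0, dtype=float)
    for t in np.linspace(0.0, 1.0, steps + 1):
        for _ in range(inner_iters):
            x = t * f(x)       # t*f is a contraction for this example
    return x

# A hypothetical smooth self-map of the unit disk in R^2.
f = lambda x: 0.5 * np.array([np.cos(x[1]), np.sin(x[0])])
x_star = follow_homotopy(f, np.zeros(2))
print(x_star, np.linalg.norm(x_star - f(x_star)))   # residual ~ 0
```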
A variation of the preceding proof does not employ Sard's theorem, and goes as follows. If $r\colon B\to \partial B$ is a smooth retraction, one considers the smooth deformation $g^{t}(x):=tr(x)+(1-t)x,$ and the smooth function $\varphi (t):=\int _{B}\det Dg^{t}(x)\,dx.$
Differentiating under the sign of integral it is not difficult to check that φ′(t) = 0 for all t, so φ is a constant function, which is a contradiction because φ(0) is the n-dimensional volume of the ball, while φ(1) is zero. The geometric idea is that φ(t) is the oriented area of $g^{t}(B)$ (that is, the Lebesgue measure of the image of the ball via $g^{t}$, taking into account multiplicity and orientation), and should remain constant (as it is very clear in the one-dimensional case). On the other hand, as the parameter t passes from 0 to 1 the map $g^{t}$ transforms continuously from the identity map of the ball to the retraction r, which is a contradiction since the oriented area of the identity coincides with the volume of the ball, while the oriented area of r is necessarily 0, as its image is the boundary of the ball, a set of null measure.
A quite different proof given by David Gale is based on the game of Hex. The basic theorem regarding Hex, first proven by John Nash, is that no game of Hex can end in a draw; the first player always has a winning strategy (although this theorem is nonconstructive, and explicit strategies have not been fully developed for board sizes of dimensions 10 x 10 or greater). This turns out to be equivalent to the Brouwer fixed-point theorem for dimension 2. By considering n-dimensional versions of Hex, one can prove in general that Brouwer's theorem is equivalent to the determinacy theorem for Hex.
The Lefschetz fixed-point theorem says that if a continuous map f from a finite simplicial complex B to itself has only isolated fixed points, then the number of fixed points counted with multiplicities (which may be negative) is equal to the Lefschetz number
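In the usual notation (supplied here as a reconstruction, since the display does not appear above), the Lefschetz number is Λ f = ∑ k ≥ 0 ( − 1 ) k Tr ( f ∗ | H k ( B , Q ) ) , {\displaystyle \Lambda _{f}=\sum _{k\geq 0}(-1)^{k}\operatorname {Tr} (f_{*}|H_{k}(B,\mathbb {Q} )),} the alternating sum of the traces of the maps induced by f on the rational homology groups.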
and in particular if the Lefschetz number is nonzero then f must have a fixed point. If B is a ball (or more generally is contractible) then the Lefschetz number is one because the only non-zero simplicial homology group is: H 0 ( B ) {\displaystyle H_{0}(B)} and f acts as the identity on this group, so f has a fixed point.
In reverse mathematics, Brouwer's theorem can be proved in the system WKL0, and conversely over the base system RCA0 Brouwer's theorem for a square implies the weak Kőnig's lemma, so this gives a precise description of the strength of Brouwer's theorem.
The Brouwer fixed-point theorem forms the starting point of a number of more general fixed-point theorems.
The straightforward generalization to infinite dimensions, i.e. using the unit ball of an arbitrary Hilbert space instead of Euclidean space, is not true. The main problem here is that the unit balls of infinite-dimensional Hilbert spaces are not compact. For example, in the Hilbert space ℓ^2 of square-summable real (or complex) sequences, consider the map f : ℓ^2 → ℓ^2 which sends a sequence (xn) from the closed unit ball of ℓ^2 to the sequence (yn) defined by
It is not difficult to check that this map is continuous, has its image in the unit sphere of ℓ^2, but does not have a fixed point.
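A standard choice for the omitted formula (assumed here, as the display is not shown) is the weighted shift y 1 = 1 − ‖ x ‖ 2 , {\displaystyle y_{1}={\sqrt {1-\|x\|^{2}}},} y n + 1 = x n {\displaystyle y_{n+1}=x_{n}} for n ≥ 1. Then ‖ y ‖ 2 = ( 1 − ‖ x ‖ 2 ) + ‖ x ‖ 2 = 1 , {\displaystyle \|y\|^{2}=(1-\|x\|^{2})+\|x\|^{2}=1,} so the image lies on the unit sphere; a fixed point would satisfy x n + 1 = x n {\displaystyle x_{n+1}=x_{n}} for all n, so all coordinates would be equal, hence all zero by square-summability, contradicting x 1 = 1 − 0 = 1 {\displaystyle x_{1}={\sqrt {1-0}}=1} .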
The generalizations of the Brouwer fixed-point theorem to infinite dimensional spaces therefore all include a compactness assumption of some sort, and also often an assumption of convexity. See fixed-point theorems in infinite-dimensional spaces for a discussion of these theorems.
There is also finite-dimensional generalization to a larger class of spaces: If X {\displaystyle X} is a product of finitely many chainable continua, then every continuous function f : X → X {\displaystyle f:X\rightarrow X} has a fixed point, where a chainable continuum is a (usually but in this case not necessarily metric) compact Hausdorff space of which every open cover has a finite open refinement { U 1 , … , U m } {\displaystyle \{U_{1},\ldots ,U_{m}\}} , such that U i ∩ U j ≠ ∅ {\displaystyle U_{i}\cap U_{j}\neq \emptyset } if and only if | i − j | ≤ 1 {\displaystyle |i-j|\leq 1} . Examples of chainable continua include compact connected linearly ordered spaces and in particular closed intervals of real numbers.
The Kakutani fixed point theorem generalizes the Brouwer fixed-point theorem in a different direction: it stays in R^n, but considers upper hemi-continuous set-valued functions (functions that assign to each point of the set a subset of the set). It also requires compactness and convexity of the set.
The Lefschetz fixed-point theorem applies to (almost) arbitrary compact topological spaces, and gives a condition in terms of singular homology that guarantees the existence of fixed points; this condition is trivially satisfied for any map in the case of D^n.
There are several fixed-point theorems which come in three equivalent variants: an algebraic topology variant, a combinatorial variant and a set-covering variant. Each variant can be proved separately using totally different arguments, but each variant can also be reduced to the other variants in its row. Additionally, each result in the top row can be deduced from the one below it in the same column. | [
{
"paragraph_id": 0,
"text": "Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function f {\\displaystyle f} mapping a nonempty compact convex set to itself, there is a point x 0 {\\displaystyle x_{0}} such that f ( x 0 ) = x 0 {\\displaystyle f(x_{0})=x_{0}} . The simplest forms of Brouwer's theorem are for continuous functions f {\\displaystyle f} from a closed interval I {\\displaystyle I} in the real numbers to itself or from a closed disk D {\\displaystyle D} to itself. A more general form than the latter is for continuous functions from a nonempty convex compact subset K {\\displaystyle K} of Euclidean space to itself.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics. In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem, the invariance of dimension and the Borsuk–Ulam theorem. This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry. It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The theorem was first studied in view of work on differential equations by the French mathematicians around Henri Poincaré and Charles Émile Picard. Proving results such as the Poincaré–Bendixson theorem requires the use of topological methods. This work at the end of the 19th century opened into several successive versions of the theorem. The case of differentiable mappings of the n-dimensional closed ball was first proved in 1910 by Jacques Hadamard and the general case for continuous mappings by Brouwer in 1911.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The theorem has several formulations, depending on the context in which it is used and its degree of generalization. The simplest is sometimes given as follows:",
"title": "Statement"
},
{
"paragraph_id": 4,
"text": "This can be generalized to an arbitrary finite dimension:",
"title": "Statement"
},
{
"paragraph_id": 5,
"text": "A slightly more general version is as follows:",
"title": "Statement"
},
{
"paragraph_id": 6,
"text": "An even more general form is better known under a different name:",
"title": "Statement"
},
{
"paragraph_id": 7,
"text": "The theorem holds only for functions that are endomorphisms (functions that have the same set as the domain and codomain) and for nonempty sets that are compact (thus, in particular, bounded and closed) and convex (or homeomorphic to convex). The following examples show why the pre-conditions are important.",
"title": "Importance of the pre-conditions"
},
{
"paragraph_id": 8,
"text": "Consider the function",
"title": "Importance of the pre-conditions"
},
{
"paragraph_id": 9,
"text": "with domain [-1,1]. The range of the function is [0,2]. Thus, f is not an endomorphism.",
"title": "Importance of the pre-conditions"
},
{
"paragraph_id": 10,
"text": "Consider the function",
"title": "Importance of the pre-conditions"
},
{
"paragraph_id": 11,
"text": "which is a continuous function from R {\\displaystyle \\mathbb {R} } to itself. As it shifts every point to the right, it cannot have a fixed point. The space R {\\displaystyle \\mathbb {R} } is convex and closed, but not bounded.",
"title": "Importance of the pre-conditions"
},
{
"paragraph_id": 12,
"text": "Consider the function",
"title": "Importance of the pre-conditions"
},
{
"paragraph_id": 13,
"text": "which is a continuous function from the open interval (−1,1) to itself. Since x = 1 is not part of the interval, there is not a fixed point of f(x) = x. The space (−1,1) is convex and bounded, but not closed. On the other hand, the function f does have a fixed point for the closed interval [−1,1], namely f(1) = 1.",
"title": "Importance of the pre-conditions"
},
{
"paragraph_id": 14,
"text": "Convexity is not strictly necessary for Brouwer's fixed-point theorem. Because the properties involved (continuity, being a fixed point) are invariant under homeomorphisms, Brouwer's fixed-point theorem is equivalent to forms in which the domain is required to be a closed unit ball D n {\\displaystyle D^{n}} . For the same reason it holds for every set that is homeomorphic to a closed ball (and therefore also closed, bounded, connected, without holes, etc.).",
"title": "Importance of the pre-conditions"
},
{
"paragraph_id": 15,
"text": "The following example shows that Brouwer's fixed-point theorem does not work for domains with holes. Consider the function f ( x ) = − x {\\displaystyle f(x)=-x} , which is a continuous function from the unit circle to itself. Since -x≠x holds for any point of the unit circle, f has no fixed point. The analogous example works for the n-dimensional sphere (or any symmetric domain that does not contain the origin). The unit circle is closed and bounded, but it has a hole (and so it is not convex) . The function f does have a fixed point for the unit disc, since it takes the origin to itself.",
"title": "Importance of the pre-conditions"
},
{
"paragraph_id": 16,
"text": "A formal generalization of Brouwer's fixed-point theorem for \"hole-free\" domains can be derived from the Lefschetz fixed-point theorem.",
"title": "Importance of the pre-conditions"
},
{
"paragraph_id": 17,
"text": "The continuous function in this theorem is not required to be bijective or surjective.",
"title": "Importance of the pre-conditions"
},
{
"paragraph_id": 18,
"text": "The theorem has several \"real world\" illustrations. Here are some examples.",
"title": "Illustrations"
},
{
"paragraph_id": 19,
"text": "The theorem is supposed to have originated from Brouwer's observation of a cup of gourmet coffee. If one stirs to dissolve a lump of sugar, it appears there is always a point without motion. He drew the conclusion that at any moment, there is a point on the surface that is not moving. The fixed point is not necessarily the point that seems to be motionless, since the centre of the turbulence moves a little bit. The result is not intuitive, since the original fixed point may become mobile when another fixed point appears.",
"title": "Intuitive approach"
},
{
"paragraph_id": 20,
"text": "Brouwer is said to have added: \"I can formulate this splendid result different, I take a horizontal sheet, and another identical one which I crumple, flatten and place on the other. Then a point of the crumpled sheet is in the same place as on the other sheet.\" Brouwer \"flattens\" his sheet as with a flat iron, without removing the folds and wrinkles. Unlike the coffee cup example, the crumpled paper example also demonstrates that more than one fixed point may exist. This distinguishes Brouwer's result from other fixed-point theorems, such as Stefan Banach's, that guarantee uniqueness.",
"title": "Intuitive approach"
},
{
"paragraph_id": 21,
"text": "In one dimension, the result is intuitive and easy to prove. The continuous function f is defined on a closed interval [a, b] and takes values in the same interval. Saying that this function has a fixed point amounts to saying that its graph (dark green in the figure on the right) intersects that of the function defined on the same interval [a, b] which maps x to x (light green).",
"title": "Intuitive approach"
},
{
"paragraph_id": 22,
"text": "Intuitively, any continuous line from the left edge of the square to the right edge must necessarily intersect the green diagonal. To prove this, consider the function g which maps x to f(x) − x. It is ≥ 0 on a and ≤ 0 on b. By the intermediate value theorem, g has a zero in [a, b]; this zero is a fixed point.",
"title": "Intuitive approach"
},
{
"paragraph_id": 23,
"text": "Brouwer is said to have expressed this as follows: \"Instead of examining a surface, we will prove the theorem about a piece of string. Let us begin with the string in an unfolded state, then refold it. Let us flatten the refolded string. Again a point of the string has not changed its position with respect to its original position on the unfolded string.\"",
"title": "Intuitive approach"
},
{
"paragraph_id": 24,
"text": "The Brouwer fixed point theorem was one of the early achievements of algebraic topology, and is the basis of more general fixed point theorems which are important in functional analysis. The case n = 3 first was proved by Piers Bohl in 1904 (published in Journal für die reine und angewandte Mathematik). It was later proved by L. E. J. Brouwer in 1909. Jacques Hadamard proved the general case in 1910, and Brouwer found a different proof in the same year. Since these early proofs were all non-constructive indirect proofs, they ran contrary to Brouwer's intuitionist ideals. Although the existence of a fixed point is not constructive in the sense of constructivism in mathematics, methods to approximate fixed points guaranteed by Brouwer's theorem are now known.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "At the end of the 19th century, the old problem of the stability of the solar system returned into the focus of the mathematical community. Its solution required new methods. As noted by Henri Poincaré, who worked on the three-body problem, there is no hope to find an exact solution: \"Nothing is more proper to give us an idea of the hardness of the three-body problem, and generally of all problems of Dynamics where there is no uniform integral and the Bohlin series diverge.\" He also noted that the search for an approximate solution is no more efficient: \"the more we seek to obtain precise approximations, the more the result will diverge towards an increasing imprecision\".",
"title": "History"
},
{
"paragraph_id": 26,
"text": "He studied a question analogous to that of the surface movement in a cup of coffee. What can we say, in general, about the trajectories on a surface animated by a constant flow? Poincaré discovered that the answer can be found in what we now call the topological properties in the area containing the trajectory. If this area is compact, i.e. both closed and bounded, then the trajectory either becomes stationary, or it approaches a limit cycle. Poincaré went further; if the area is of the same kind as a disk, as is the case for the cup of coffee, there must necessarily be a fixed point. This fixed point is invariant under all functions which associate to each point of the original surface its position after a short time interval t. If the area is a circular band, or if it is not closed, then this is not necessarily the case.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "To understand differential equations better, a new branch of mathematics was born. Poincaré called it analysis situs. The French Encyclopædia Universalis defines it as the branch which \"treats the properties of an object that are invariant if it is deformed in any continuous way, without tearing\". In 1886, Poincaré proved a result that is equivalent to Brouwer's fixed-point theorem, although the connection with the subject of this article was not yet apparent. A little later, he developed one of the fundamental tools for better understanding the analysis situs, now known as the fundamental group or sometimes the Poincaré group. This method can be used for a very compact proof of the theorem under discussion.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Poincaré's method was analogous to that of Émile Picard, a contemporary mathematician who generalized the Cauchy–Lipschitz theorem. Picard's approach is based on a result that would later be formalised by another fixed-point theorem, named after Banach. Instead of the topological properties of the domain, this theorem uses the fact that the function in question is a contraction.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "At the dawn of the 20th century, the interest in analysis situs did not stay unnoticed. However, the necessity of a theorem equivalent to the one discussed in this article was not yet evident. Piers Bohl, a Latvian mathematician, applied topological methods to the study of differential equations. In 1904 he proved the three-dimensional case of our theorem, but his publication was not noticed.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "It was Brouwer, finally, who gave the theorem its first patent of nobility. His goals were different from those of Poincaré. This mathematician was inspired by the foundations of mathematics, especially mathematical logic and topology. His initial interest lay in an attempt to solve Hilbert's fifth problem. In 1909, during a voyage to Paris, he met Henri Poincaré, Jacques Hadamard, and Émile Borel. The ensuing discussions convinced Brouwer of the importance of a better understanding of Euclidean spaces, and were the origin of a fruitful exchange of letters with Hadamard. For the next four years, he concentrated on the proof of certain great theorems on this question. In 1912 he proved the hairy ball theorem for the two-dimensional sphere, as well as the fact that every continuous map from the two-dimensional ball to itself has a fixed point. These two results in themselves were not really new. As Hadamard observed, Poincaré had shown a theorem equivalent to the hairy ball theorem. The revolutionary aspect of Brouwer's approach was his systematic use of recently developed tools such as homotopy, the underlying concept of the Poincaré group. In the following year, Hadamard generalised the theorem under discussion to an arbitrary finite dimension, but he employed different methods. Hans Freudenthal comments on the respective roles as follows: \"Compared to Brouwer's revolutionary methods, those of Hadamard were very traditional, but Hadamard's participation in the birth of Brouwer's ideas resembles that of a midwife more than that of a mere spectator.\"",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Brouwer's approach yielded its fruits, and in 1910 he also found a proof that was valid for any finite dimension, as well as other key theorems such as the invariance of dimension. In the context of this work, Brouwer also generalized the Jordan curve theorem to arbitrary dimension and established the properties connected with the degree of a continuous mapping. This branch of mathematics, originally envisioned by Poincaré and developed by Brouwer, changed its name. In the 1930s, analysis situs became algebraic topology.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "The theorem proved its worth in more than one way. During the 20th century numerous fixed-point theorems were developed, and even a branch of mathematics called fixed-point theory. Brouwer's theorem is probably the most important. It is also among the foundational theorems on the topology of topological manifolds and is often used to prove other important results such as the Jordan curve theorem.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "Besides the fixed-point theorems for more or less contracting functions, there are many that have emerged directly or indirectly from the result under discussion. A continuous map from a closed ball of Euclidean space to its boundary cannot be the identity on the boundary. Similarly, the Borsuk–Ulam theorem says that a continuous map from the n-dimensional sphere to R has a pair of antipodal points that are mapped to the same point. In the finite-dimensional case, the Lefschetz fixed-point theorem provided from 1926 a method for counting fixed points. In 1930, Brouwer's fixed-point theorem was generalized to Banach spaces. This generalization is known as Schauder's fixed-point theorem, a result generalized further by S. Kakutani to set-valued functions. One also meets the theorem and its variants outside topology. It can be used to prove the Hartman-Grobman theorem, which describes the qualitative behaviour of certain differential equations near certain equilibria. Similarly, Brouwer's theorem is used for the proof of the Central Limit Theorem. The theorem can also be found in existence proofs for the solutions of certain partial differential equations.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "Other areas are also touched. In game theory, John Nash used the theorem to prove that in the game of Hex there is a winning strategy for white. In economics, P. Bich explains that certain generalizations of the theorem show that its use is helpful for certain classical problems in game theory and generally for equilibria (Hotelling's law), financial equilibria and incomplete markets.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "Brouwer's celebrity is not exclusively due to his topological work. The proofs of his great topological theorems are not constructive, and Brouwer's dissatisfaction with this is partly what led him to articulate the idea of constructivity. He became the originator and zealous defender of a way of formalising mathematics that is known as intuitionism, which at the time made a stand against set theory. Brouwer disavowed his original proof of the fixed-point theorem.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "Brouwer's original 1911 proof relied on the notion of the degree of a continuous mapping, stemming from ideas in differential topology. Several modern accounts of the proof can be found in the literature, notably Milnor (1965).",
"title": "Proof outlines"
},
{
"paragraph_id": 37,
"text": "Let K = B ( 0 ) ¯ {\\displaystyle K={\\overline {B(0)}}} denote the closed unit ball in R n {\\displaystyle \\mathbb {R} ^{n}} centered at the origin. Suppose for simplicity that f : K → K {\\displaystyle f:K\\to K} is continuously differentiable. A regular value of f {\\displaystyle f} is a point p ∈ B ( 0 ) {\\displaystyle p\\in B(0)} such that the Jacobian of f {\\displaystyle f} is non-singular at every point of the preimage of p {\\displaystyle p} . In particular, by the inverse function theorem, every point of the preimage of f {\\displaystyle f} lies in B ( 0 ) {\\displaystyle B(0)} (the interior of K {\\displaystyle K} ). The degree of f {\\displaystyle f} at a regular value p ∈ B ( 0 ) {\\displaystyle p\\in B(0)} is defined as the sum of the signs of the Jacobian determinant of f {\\displaystyle f} over the preimages of p {\\displaystyle p} under f {\\displaystyle f} :",
"title": "Proof outlines"
},
{
"paragraph_id": 38,
"text": "The degree is, roughly speaking, the number of \"sheets\" of the preimage f lying over a small open set around p, with sheets counted oppositely if they are oppositely oriented. This is thus a generalization of winding number to higher dimensions.",
"title": "Proof outlines"
},
{
"paragraph_id": 39,
"text": "The degree satisfies the property of homotopy invariance: let f {\\displaystyle f} and g {\\displaystyle g} be two continuously differentiable functions, and H t ( x ) = t f + ( 1 − t ) g {\\displaystyle H_{t}(x)=tf+(1-t)g} for 0 ≤ t ≤ 1 {\\displaystyle 0\\leq t\\leq 1} . Suppose that the point p {\\displaystyle p} is a regular value of H t {\\displaystyle H_{t}} for all t. Then deg p f = deg p g {\\displaystyle \\deg _{p}f=\\deg _{p}g} .",
"title": "Proof outlines"
},
{
"paragraph_id": 40,
"text": "If there is no fixed point of the boundary of K {\\displaystyle K} , then the function",
"title": "Proof outlines"
},
{
"paragraph_id": 41,
"text": "is well-defined, and",
"title": "Proof outlines"
},
{
"paragraph_id": 42,
"text": "H ( t , x ) = x − t f ( x ) sup x ∈ K | x − t f ( x ) | {\\displaystyle H(t,x)={\\frac {x-tf(x)}{\\sup _{x\\in K}\\left|x-tf(x)\\right|}}}",
"title": "Proof outlines"
},
{
"paragraph_id": 43,
"text": "defines a homotopy from the identity function to it. The identity function has degree one at every point. In particular, the identity function has degree one at the origin, so g {\\displaystyle g} also has degree one at the origin. As a consequence, the preimage g − 1 ( 0 ) {\\displaystyle g^{-1}(0)} is not empty. The elements of g − 1 ( 0 ) {\\displaystyle g^{-1}(0)} are precisely the fixed points of the original function f.",
"title": "Proof outlines"
},
{
"paragraph_id": 44,
"text": "This requires some work to make fully general. The definition of degree must be extended to singular values of f, and then to continuous functions. The more modern advent of homology theory simplifies the construction of the degree, and so has become a standard proof in the literature.",
"title": "Proof outlines"
},
{
"paragraph_id": 45,
"text": "The hairy ball theorem states that on the unit sphere S in an odd-dimensional Euclidean space, there is no nowhere-vanishing continuous tangent vector field w on S. (The tangency condition means that w(x) ⋅ x = 0 for every unit vector x.) Sometimes the theorem is expressed by the statement that \"there is always a place on the globe with no wind\". An elementary proof of the hairy ball theorem can be found in Milnor (1978).",
"title": "Proof outlines"
},
{
"paragraph_id": 46,
"text": "In fact, suppose first that w is continuously differentiable. By scaling, it can be assumed that w is a continuously differentiable unit tangent vector on S. It can be extended radially to a small spherical shell A of S. For t sufficiently small, a routine computation shows that the mapping ft(x) = x + t w(x) is a contraction mapping on A and that the volume of its image is a polynomial in t. On the other hand, as a contraction mapping, ft must restrict to a homeomorphism of S onto (1 + t) S and A onto (1 + t) A. This gives a contradiction, because, if the dimension n of the Euclidean space is odd, (1 + t) is not a polynomial.",
"title": "Proof outlines"
},
{
"paragraph_id": 47,
"text": "If w is only a continuous unit tangent vector on S, by the Weierstrass approximation theorem, it can be uniformly approximated by a polynomial map u of A into Euclidean space. The orthogonal projection on to the tangent space is given by v(x) = u(x) - u(x) ⋅ x. Thus v is polynomial and nowhere vanishing on A; by construction v/||v|| is a smooth unit tangent vector field on S, a contradiction.",
"title": "Proof outlines"
},
{
"paragraph_id": 48,
"text": "The continuous version of the hairy ball theorem can now be used to prove the Brouwer fixed point theorem. First suppose that n is even. If there were a fixed-point-free continuous self-mapping f of the closed unit ball B of the n-dimensional Euclidean space V, set",
"title": "Proof outlines"
},
{
"paragraph_id": 49,
"text": "Since f has no fixed points, it follows that, for x in the interior of B, the vector w(x) is non-zero; and for x in S, the scalar product x ⋅ w(x) = 1 – x ⋅ f(x) is strictly positive. From the original n-dimensional space Euclidean space V, construct a new auxiliary (n + 1)-dimensional space W = V x R, with coordinates y = (x, t). Set",
"title": "Proof outlines"
},
{
"paragraph_id": 50,
"text": "By construction X is a continuous vector field on the unit sphere of W, satisfying the tangency condition y ⋅ X(y) = 0. Moreover, X(y) is nowhere vanishing (because, if x has norm 1, then x ⋅ w(x) is non-zero; while if x has norm strictly less than 1, then t and w(x) are both non-zero). This contradiction proves the fixed point theorem when n is even. For n odd, one can apply the fixed point theorem to the closed unit ball B in n + 1 dimensions and the mapping F(x,y) = (f(x),0). The advantage of this proof is that it uses only elementary techniques; more general results like the Borsuk-Ulam theorem require tools from algebraic topology.",
"title": "Proof outlines"
},
{
"paragraph_id": 51,
"text": "The proof uses the observation that the boundary of the n-disk D is S, the (n − 1)-sphere.",
"title": "Proof outlines"
},
{
"paragraph_id": 52,
"text": "Suppose, for contradiction, that a continuous function f : D → D has no fixed point. This means that, for every point x in D, the points x and f(x) are distinct. Because they are distinct, for every point x in D, we can construct a unique ray from f(x) to x and follow the ray until it intersects the boundary S (see illustration). By calling this intersection point F(x), we define a function F : D → S sending each point in the disk to its corresponding intersection point on the boundary. As a special case, whenever x itself is on the boundary, then the intersection point F(x) must be x.",
"title": "Proof outlines"
},
{
"paragraph_id": 53,
"text": "Consequently, F is a special type of continuous function known as a retraction: every point of the codomain (in this case S) is a fixed point of F.",
"title": "Proof outlines"
},
{
"paragraph_id": 54,
"text": "Intuitively it seems unlikely that there could be a retraction of D onto S, and in the case n = 1, the impossibility is more basic, because S (i.e., the endpoints of the closed interval D) is not even connected. The case n = 2 is less obvious, but can be proven by using basic arguments involving the fundamental groups of the respective spaces: the retraction would induce a surjective group homomorphism from the fundamental group of D to that of S, but the latter group is isomorphic to Z while the first group is trivial, so this is impossible. The case n = 2 can also be proven by contradiction based on a theorem about non-vanishing vector fields.",
"title": "Proof outlines"
},
{
"paragraph_id": 55,
"text": "For n > 2, however, proving the impossibility of the retraction is more difficult. One way is to make use of homology groups: the homology Hn−1(D) is trivial, while Hn−1(S) is infinite cyclic. This shows that the retraction is impossible, because again the retraction would induce an injective group homomorphism from the latter to the former group.",
"title": "Proof outlines"
},
{
"paragraph_id": 56,
"text": "The impossibility of a retraction can also be shown using the de Rham cohomology of open subsets of Euclidean space E. For n ≥ 2, the de Rham cohomology of U = E – (0) is one-dimensional in degree 0 and n - 1, and vanishes otherwise. If a retraction existed, then U would have to be contractible and its de Rham cohomology in degree n - 1 would have to vanish, a contradiction.",
"title": "Proof outlines"
},
{
"paragraph_id": 57,
"text": "As in the proof of Brouwer's fixed-point theorem for continuous maps using homology, it is reduced to proving that there is no continuous retraction F from the ball B onto its boundary ∂B. In that case it can be assumed that F is smooth, since it can be approximated using the Weierstrass approximation theorem or by convolving with non-negative smooth bump functions of sufficiently small support and integral one (i.e. mollifying). If ω is a volume form on the boundary then by Stokes' theorem,",
"title": "Proof outlines"
},
{
"paragraph_id": 58,
"text": "giving a contradiction.",
"title": "Proof outlines"
},
{
"paragraph_id": 59,
"text": "More generally, this shows that there is no smooth retraction from any non-empty smooth oriented compact manifold M onto its boundary. The proof using Stokes' theorem is closely related to the proof using homology, because the form ω generates the de Rham cohomology group H(∂M) which is isomorphic to the homology group Hn-1(∂M) by de Rham's theorem.",
"title": "Proof outlines"
},
{
"paragraph_id": 60,
"text": "The BFPT can be proved using Sperner's lemma. We now give an outline of the proof for the special case in which f is a function from the standard n-simplex, Δ n , {\\displaystyle \\Delta ^{n},} to itself, where",
"title": "Proof outlines"
},
{
"paragraph_id": 61,
"text": "For every point P ∈ Δ n , {\\displaystyle P\\in \\Delta ^{n},} also f ( P ) ∈ Δ n . {\\displaystyle f(P)\\in \\Delta ^{n}.} Hence the sum of their coordinates is equal:",
"title": "Proof outlines"
},
{
"paragraph_id": 62,
"text": "Hence, by the pigeonhole principle, for every P ∈ Δ n , {\\displaystyle P\\in \\Delta ^{n},} there must be an index j ∈ { 0 , … , n } {\\displaystyle j\\in \\{0,\\ldots ,n\\}} such that the j {\\displaystyle j} th coordinate of P {\\displaystyle P} is greater than or equal to the j {\\displaystyle j} th coordinate of its image under f:",
"title": "Proof outlines"
},
{
"paragraph_id": 63,
"text": "Moreover, if P {\\displaystyle P} lies on a k-dimensional sub-face of Δ n , {\\displaystyle \\Delta ^{n},} then by the same argument, the index j {\\displaystyle j} can be selected from among the k + 1 coordinates which are not zero on this sub-face.",
"title": "Proof outlines"
},
{
"paragraph_id": 64,
"text": "We now use this fact to construct a Sperner coloring. For every triangulation of Δ n , {\\displaystyle \\Delta ^{n},} the color of every vertex P {\\displaystyle P} is an index j {\\displaystyle j} such that f ( P ) j ≤ P j . {\\displaystyle f(P)_{j}\\leq P_{j}.}",
"title": "Proof outlines"
},
{
"paragraph_id": 65,
"text": "By construction, this is a Sperner coloring. Hence, by Sperner's lemma, there is an n-dimensional simplex whose vertices are colored with the entire set of n + 1 available colors.",
"title": "Proof outlines"
},
{
"paragraph_id": 66,
"text": "Because f is continuous, this simplex can be made arbitrarily small by choosing an arbitrarily fine triangulation. Hence, there must be a point P {\\displaystyle P} which satisfies the labeling condition in all coordinates: f ( P ) j ≤ P j {\\displaystyle f(P)_{j}\\leq P_{j}} for all j . {\\displaystyle j.}",
"title": "Proof outlines"
},
{
"paragraph_id": 67,
"text": "Because the sum of the coordinates of P {\\displaystyle P} and f ( P ) {\\displaystyle f(P)} must be equal, all these inequalities must actually be equalities. But this means that:",
"title": "Proof outlines"
},
{
"paragraph_id": 68,
"text": "That is, P {\\displaystyle P} is a fixed point of f . {\\displaystyle f.}",
"title": "Proof outlines"
},
{
"paragraph_id": 69,
"text": "There is also a quick proof, by Morris Hirsch, based on the impossibility of a differentiable retraction. The indirect proof starts by noting that the map f can be approximated by a smooth map retaining the property of not fixing a point; this can be done by using the Weierstrass approximation theorem or by convolving with smooth bump functions. One then defines a retraction as above which must now be differentiable. Such a retraction must have a non-singular value, by Sard's theorem, which is also non-singular for the restriction to the boundary (which is just the identity). Thus the inverse image would be a 1-manifold with boundary. The boundary would have to contain at least two end points, both of which would have to lie on the boundary of the original ball—which is impossible in a retraction.",
"title": "Proof outlines"
},
{
"paragraph_id": 70,
"text": "R. Bruce Kellogg, Tien-Yien Li, and James A. Yorke turned Hirsch's proof into a computable proof by observing that the retract is in fact defined everywhere except at the fixed points. For almost any point, q, on the boundary, (assuming it is not a fixed point) the one manifold with boundary mentioned above does exist and the only possibility is that it leads from q to a fixed point. It is an easy numerical task to follow such a path from q to the fixed point so the method is essentially computable. gave a conceptually similar path-following version of the homotopy proof which extends to a wide variety of related problems.",
"title": "Proof outlines"
},
{
"paragraph_id": 71,
"text": "A variation of the preceding proof does not employ the Sard's theorem, and goes as follows. If r : B → ∂ B {\\displaystyle r\\colon B\\to \\partial B} is a smooth retraction, one considers the smooth deformation g t ( x ) := t r ( x ) + ( 1 − t ) x , {\\displaystyle g^{t}(x):=tr(x)+(1-t)x,} and the smooth function",
"title": "Proof outlines"
},
{
"paragraph_id": 72,
"text": "Differentiating under the sign of integral it is not difficult to check that φ′(t) = 0 for all t, so φ is a constant function, which is a contradiction because φ(0) is the n-dimensional volume of the ball, while φ(1) is zero. The geometric idea is that φ(t) is the oriented area of g(B) (that is, the Lebesgue measure of the image of the ball via g, taking into account multiplicity and orientation), and should remain constant (as it is very clear in the one-dimensional case). On the other hand, as the parameter t passes form 0 to 1 the map g transforms continuously from the identity map of the ball, to the retraction r, which is a contradiction since the oriented area of the identity coincides with the volume of the ball, while the oriented area of r is necessarily 0, as its image is the boundary of the ball, a set of null measure.",
"title": "Proof outlines"
},
{
"paragraph_id": 73,
"text": "A quite different proof given by David Gale is based on the game of Hex. The basic theorem regarding Hex, first proven by John Nash, is that no game of Hex can end in a draw; the first player always has a winning strategy (although this theorem is nonconstructive, and explicit strategies have not been fully developed for board sizes of dimensions 10 x 10 or greater). This turns out to be equivalent to the Brouwer fixed-point theorem for dimension 2. By considering n-dimensional versions of Hex, one can prove in general that Brouwer's theorem is equivalent to the determinacy theorem for Hex.",
"title": "Proof outlines"
},
{
"paragraph_id": 74,
"text": "The Lefschetz fixed-point theorem says that if a continuous map f from a finite simplicial complex B to itself has only isolated fixed points, then the number of fixed points counted with multiplicities (which may be negative) is equal to the Lefschetz number",
"title": "Proof outlines"
},
{
"paragraph_id": 75,
"text": "and in particular if the Lefschetz number is nonzero then f must have a fixed point. If B is a ball (or more generally is contractible) then the Lefschetz number is one because the only non-zero simplicial homology group is: H 0 ( B ) {\\displaystyle H_{0}(B)} and f acts as the identity on this group, so f has a fixed point.",
"title": "Proof outlines"
},
{
"paragraph_id": 76,
"text": "In reverse mathematics, Brouwer's theorem can be proved in the system WKL0, and conversely over the base system RCA0 Brouwer's theorem for a square implies the weak Kőnig's lemma, so this gives a precise description of the strength of Brouwer's theorem.",
"title": "Proof outlines"
},
{
"paragraph_id": 77,
"text": "The Brouwer fixed-point theorem forms the starting point of a number of more general fixed-point theorems.",
"title": "Generalizations"
},
{
"paragraph_id": 78,
"text": "The straightforward generalization to infinite dimensions, i.e. using the unit ball of an arbitrary Hilbert space instead of Euclidean space, is not true. The main problem here is that the unit balls of infinite-dimensional Hilbert spaces are not compact. For example, in the Hilbert space ℓ of square-summable real (or complex) sequences, consider the map f : ℓ → ℓ which sends a sequence (xn) from the closed unit ball of ℓ to the sequence (yn) defined by",
"title": "Generalizations"
},
{
"paragraph_id": 79,
"text": "It is not difficult to check that this map is continuous, has its image in the unit sphere of ℓ, but does not have a fixed point.",
"title": "Generalizations"
},
{
"paragraph_id": 80,
"text": "The generalizations of the Brouwer fixed-point theorem to infinite dimensional spaces therefore all include a compactness assumption of some sort, and also often an assumption of convexity. See fixed-point theorems in infinite-dimensional spaces for a discussion of these theorems.",
"title": "Generalizations"
},
{
"paragraph_id": 81,
"text": "There is also finite-dimensional generalization to a larger class of spaces: If X {\\displaystyle X} is a product of finitely many chainable continua, then every continuous function f : X → X {\\displaystyle f:X\\rightarrow X} has a fixed point, where a chainable continuum is a (usually but in this case not necessarily metric) compact Hausdorff space of which every open cover has a finite open refinement { U 1 , … , U m } {\\displaystyle \\{U_{1},\\ldots ,U_{m}\\}} , such that U i ∩ U j ≠ ∅ {\\displaystyle U_{i}\\cap U_{j}\\neq \\emptyset } if and only if | i − j | ≤ 1 {\\displaystyle |i-j|\\leq 1} . Examples of chainable continua include compact connected linearly ordered spaces and in particular closed intervals of real numbers.",
"title": "Generalizations"
},
{
"paragraph_id": 82,
"text": "The Kakutani fixed point theorem generalizes the Brouwer fixed-point theorem in a different direction: it stays in R, but considers upper hemi-continuous set-valued functions (functions that assign to each point of the set a subset of the set). It also requires compactness and convexity of the set.",
"title": "Generalizations"
},
{
"paragraph_id": 83,
"text": "The Lefschetz fixed-point theorem applies to (almost) arbitrary compact topological spaces, and gives a condition in terms of singular homology that guarantees the existence of fixed points; this condition is trivially satisfied for any map in the case of D.",
"title": "Generalizations"
},
{
"paragraph_id": 84,
"text": "There are several fixed-point theorems which come in three equivalent variants: an algebraic topology variant, a combinatorial variant and a set-covering variant. Each variant can be proved separately using totally different arguments, but each variant can also be reduced to the other variants in its row. Additionally, each result in the top row can be deduced from the one below it in the same column.",
"title": "Equivalent results"
}
] | Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function f mapping a nonempty compact convex set to itself, there is a point x 0 such that f = x 0 . The simplest forms of Brouwer's theorem are for continuous functions f from a closed interval I in the real numbers to itself or from a closed disk D to itself. A more general form than the latter is for continuous functions from a nonempty convex compact subset K of Euclidean space to itself. Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics. In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem, the invariance of dimension and the Borsuk–Ulam theorem. This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry. It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu. The theorem was first studied in view of work on differential equations by the French mathematicians around Henri Poincaré and Charles Émile Picard. Proving results such as the Poincaré–Bendixson theorem requires the use of topological methods. This work at the end of the 19th century opened into several successive versions of the theorem. The case of differentiable mappings of the n-dimensional closed ball was first proved in 1910 by Jacques Hadamard and the general case for continuous mappings by Brouwer in 1911. | 2001-09-05T22:59:45Z | 2023-12-22T13:41:18Z | [
"Template:SpringerEOM",
"Template:Harvnb",
"Template:Springer",
"Template:Citation needed",
"Template:Sfn",
"Template:Analogous fixed-point theorems",
"Template:ISBN",
"Template:Short description",
"Template:Nowrap",
"Template:Cite journal",
"Template:Cite book",
"Template:Cite web",
"Template:Sfrac",
"Template:Isbn",
"Template:Em",
"Template:Harvtxt",
"Template:Var",
"Template:Webarchive",
"Template:Mvar",
"Template:Prime",
"Template:Reflist",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Brouwer_fixed-point_theorem |
4,106 | Benzoic acid | Benzoic acid /bɛnˈzoʊ.ɪk/ is a white (or colorless) solid organic compound with the formula C6H5COOH, whose structure consists of a benzene ring (C6H6) with a carboxyl (−C(=O)OH) substituent. The benzoyl group is often abbreviated "Bz" (not to be confused with "Bn" which is used for benzyl), thus benzoic acid is also denoted as BzOH, since the benzoyl group has the formula –C6H5CO. It is the simplest aromatic carboxylic acid. The name is derived from gum benzoin, which was for a long time its only source.
Benzoic acid occurs naturally in many plants and serves as an intermediate in the biosynthesis of many secondary metabolites. Salts of benzoic acid are used as food preservatives. Benzoic acid is an important precursor for the industrial synthesis of many other organic substances. The salts and esters of benzoic acid are known as benzoates /ˈbɛnzoʊ.eɪt/.
Benzoic acid was discovered in the sixteenth century. The dry distillation of gum benzoin was first described by Nostradamus (1556), and then by Alexius Pedemontanus (1560) and Blaise de Vigenère (1596).
Justus von Liebig and Friedrich Wöhler determined the composition of benzoic acid. They also investigated how hippuric acid is related to benzoic acid.
In 1875 Salkowski discovered the antifungal properties of benzoic acid, which was used for a long time in the preservation of benzoate-containing cloudberry fruits.
Benzoic acid is produced commercially by partial oxidation of toluene with oxygen. The process is catalyzed by cobalt or manganese naphthenates. The process uses abundant materials, and proceeds in high yield.
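In outline, the overall stoichiometry of this autoxidation is the standard textbook one (shown here as a summary, not quoted from the article):

2 C6H5CH3 + 3 O2 → 2 C6H5COOH + 2 H2O

with the cobalt or manganese naphthenate serving as a soluble radical-chain catalyst.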
The first industrial process involved the reaction of benzotrichloride (trichloromethyl benzene) with calcium hydroxide in water, using iron or iron salts as catalyst. The resulting calcium benzoate is converted to benzoic acid with hydrochloric acid. The product contains significant amounts of chlorinated benzoic acid derivatives. For this reason, benzoic acid for human consumption was obtained by dry distillation of gum benzoin. Food-grade benzoic acid is now produced synthetically.
Benzoic acid is cheap and readily available, so the laboratory synthesis of benzoic acid is mainly practiced for its pedagogical value. It is a common undergraduate preparation.
Benzoic acid can be purified by recrystallization from water because of its high solubility in hot water and poor solubility in cold water. The avoidance of organic solvents for the recrystallization makes this experiment particularly safe. This process usually gives a yield of around 65%.
Like other nitriles and amides, benzonitrile and benzamide can be hydrolyzed to benzoic acid or its conjugate base in acid or basic conditions.
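In outline, and under the usual textbook stoichiometry (an illustration rather than a quotation), the two hydrolyses are:

C6H5CN + 2 H2O → C6H5COOH + NH3 (via the amide)
C6H5CONH2 + H2O → C6H5COOH + NH3

Under basic conditions the product is instead the benzoate salt, from which the acid is liberated on acidification.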
Bromobenzene can be converted to benzoic acid by "carboxylation" of the intermediate phenylmagnesium bromide. This synthesis offers a convenient exercise for students to carry out a Grignard reaction, an important class of carbon–carbon bond forming reaction in organic chemistry.
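The sequence runs, in the standard outline (reagents and conditions are the usual textbook ones, given here for illustration):

C6H5Br + Mg → C6H5MgBr (in dry diethyl ether)
C6H5MgBr + CO2 → C6H5CO2MgBr (e.g., poured onto dry ice)
C6H5CO2MgBr + H3O+ → C6H5COOH + Mg salts (aqueous acid work-up)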
Benzyl alcohol and benzyl chloride and virtually all benzyl derivatives are readily oxidized to benzoic acid.
Benzoic acid is mainly consumed in the production of phenol by oxidative decarboxylation at 300–400 °C:
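The reaction intended after the colon is presumably the standard one (a reconstruction, since the display is not shown):

2 C6H5COOH + O2 → 2 C6H5OH + 2 CO2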
The temperature required can be lowered to 200 °C by the addition of catalytic amounts of copper(II) salts. The phenol can be converted to cyclohexanol, which is a starting material for nylon synthesis.
Benzoate plasticizers, such as the glycol-, diethyleneglycol-, and triethyleneglycol esters, are obtained by transesterification of methyl benzoate with the corresponding diol. These plasticizers, which are used similarly to those derived from terephthalic acid ester, represent alternatives to phthalates.
Benzoic acid and its salts are used as food preservatives, represented by the E numbers E210, E211, E212, and E213. Benzoic acid inhibits the growth of mold, yeast and some bacteria. It is either added directly or created from reactions with its sodium, potassium, or calcium salt. The mechanism starts with the absorption of benzoic acid into the cell. If the intracellular pH changes to 5 or lower, the anaerobic fermentation of glucose through phosphofructokinase is decreased by 95%. The efficacy of benzoic acid and benzoate is thus dependent on the pH of the food. Benzoic acid, benzoates and their derivatives are used as preservatives for acidic foods and beverages such as citrus fruit juices (citric acid), sparkling drinks (carbon dioxide), soft drinks (phosphoric acid), pickles (vinegar) and other acidified foods.
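The pH dependence can be made quantitative with the Henderson–Hasselbalch equation, assuming the commonly tabulated pKa of benzoic acid of about 4.2 (a worked illustration, not a figure from the article):

fraction undissociated = 1 / (1 + 10^(pH − pKa))

At pH 3 about 94% of the acid is in the undissociated, membrane-permeant form; at pH 4.2 exactly half; at pH 6 under 2%. This is why benzoate preservation is effective only in acidic foods.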
Typical concentrations of benzoic acid as a preservative in food are between 0.05 and 0.1%. Foods in which benzoic acid may be used and maximum levels for its application are controlled by local food laws.
Concern has been expressed that benzoic acid and its salts may react with ascorbic acid (vitamin C) in some soft drinks, forming small quantities of carcinogenic benzene.
Benzoic acid is a constituent of Whitfield's ointment which is used for the treatment of fungal skin diseases such as ringworm and athlete's foot. As the principal component of gum benzoin, benzoic acid is also a major ingredient in both tincture of benzoin and Friar's balsam. Such products have a long history of use as topical antiseptics and inhalant decongestants.
Benzoic acid was used as an expectorant, analgesic, and antiseptic in the early 20th century.
In teaching laboratories, benzoic acid is a common standard for calibrating a bomb calorimeter.
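The calibration rests on a simple energy balance; assuming the commonly tabulated specific energy of combustion of benzoic acid, about 26.4 kJ/g (an illustrative value, not quoted from the article):

C_cal = m × q / ΔT

so burning m = 1.000 g of benzoic acid and observing a temperature rise ΔT = 2.64 K gives an effective calorimeter heat capacity C_cal ≈ (1.000 g × 26.4 kJ/g) / 2.64 K = 10.0 kJ/K.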
Benzoic acid occurs naturally as do its esters in many plant and animal species. Appreciable amounts are found in most berries (around 0.05%). Ripe fruits of several Vaccinium species (e.g., cranberry, V. macrocarpon; bilberry, V. myrtillus) contain as much as 0.03–0.13% free benzoic acid. Benzoic acid is also formed in apples after infection with the fungus Nectria galligena. Among animals, benzoic acid has been identified primarily in omnivorous or phytophageous species, e.g., in viscera and muscles of the rock ptarmigan (Lagopus muta) as well as in gland secretions of male muskoxen (Ovibos moschatus) or Asian bull elephants (Elephas maximus). Gum benzoin contains up to 20% of benzoic acid and 40% benzoic acid esters.
In terms of its biosynthesis, benzoate is produced in plants from cinnamic acid. A pathway has been identified from phenol via 4-hydroxybenzoate.
Reactions of benzoic acid can occur at either the aromatic ring or at the carboxyl group.
Electrophilic aromatic substitution reaction will take place mainly in 3-position due to the electron-withdrawing carboxylic group; i.e. benzoic acid is meta directing.
Reactions typical for carboxylic acids apply also to benzoic acid.
It is excreted as hippuric acid. Benzoic acid is metabolized by butyrate-CoA ligase into an intermediate product, benzoyl-CoA, which is then metabolized by glycine N-acyltransferase into hippuric acid. In humans, toluene is likewise metabolized to benzoic acid and excreted as hippuric acid.
For humans, the World Health Organization's International Programme on Chemical Safety (IPCS) suggests a provisional tolerable intake would be 5 mg/kg body weight per day. Cats have a significantly lower tolerance against benzoic acid and its salts than rats and mice. Lethal dose for cats can be as low as 300 mg/kg body weight. The oral LD50 for rats is 3040 mg/kg, for mice it is 1940–2263 mg/kg.
In Taipei, Taiwan, a city health survey in 2010 found that 30% of dried and pickled food products had benzoic acid. | [
{
"paragraph_id": 0,
"text": "Benzoic acid /bɛnˈzoʊ.ɪk/ is a white (or colorless) solid organic compound with the formula C6H5COOH, whose structure consists of a benzene ring (C6H6) with a carboxyl (−C(=O)OH) substituent. The benzoyl group is often abbreviated \"Bz\" (not to be confused with \"Bn\" which is used for benzyl), thus benzoic acid is also denoted as BzOH, since the benzoyl group has the formula –C6H5CO. It is the simplest aromatic carboxylic acid. The name is derived from gum benzoin, which was for a long time its only source.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Benzoic acid occurs naturally in many plants and serves as an intermediate in the biosynthesis of many secondary metabolites. Salts of benzoic acid are used as food preservatives. Benzoic acid is an important precursor for the industrial synthesis of many other organic substances. The salts and esters of benzoic acid are known as benzoates /ˈbɛnzoʊ.eɪt/.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Benzoic acid was discovered in the sixteenth century. The dry distillation of gum benzoin was first described by Nostradamus (1556), and then by Alexius Pedemontanus (1560) and Blaise de Vigenère (1596).",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Justus von Liebig and Friedrich Wöhler determined the composition of benzoic acid. These latter also investigated how hippuric acid is related to benzoic acid.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "In 1875 Salkowski discovered the antifungal properties of benzoic acid, which was used for a long time in the preservation of benzoate-containing cloudberry fruits.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Benzoic acid is produced commercially by partial oxidation of toluene with oxygen. The process is catalyzed by cobalt or manganese naphthenates. The process uses abundant materials, and proceeds in high yield.",
"title": "Production"
},
{
"paragraph_id": 6,
"text": "The first industrial process involved the reaction of benzotrichloride (trichloromethyl benzene) with calcium hydroxide in water, using iron or iron salts as catalyst. The resulting calcium benzoate is converted to benzoic acid with hydrochloric acid. The product contains significant amounts of chlorinated benzoic acid derivatives. For this reason, benzoic acid for human consumption was obtained by dry distillation of gum benzoin. Food-grade benzoic acid is now produced synthetically.",
"title": "Production"
},
{
"paragraph_id": 7,
"text": "Benzoic acid is cheap and readily available, so the laboratory synthesis of benzoic acid is mainly practiced for its pedagogical value. It is a common undergraduate preparation.",
"title": "Production"
},
{
"paragraph_id": 8,
"text": "Benzoic acid can be purified by recrystallization from water because of its high solubility in hot water and poor solubility in cold water. The avoidance of organic solvents for the recrystallization makes this experiment particularly safe. This process usually gives a yield of around 65%.",
"title": "Production"
},
{
"paragraph_id": 9,
"text": "Like other nitriles and amides, benzonitrile and benzamide can be hydrolyzed to benzoic acid or its conjugate base in acid or basic conditions.",
"title": "Production"
},
{
"paragraph_id": 10,
"text": "Bromobenzene can be converted to benzoic acid by \"carboxylation\" of the intermediate phenylmagnesium bromide. This synthesis offers a convenient exercise for students to carry out a Grignard reaction, an important class of carbon–carbon bond forming reaction in organic chemistry.",
"title": "Production"
},
{
"paragraph_id": 11,
"text": "Benzyl alcohol and benzyl chloride and virtually all benzyl derivatives are readily oxidized to benzoic acid.",
"title": "Production"
},
{
"paragraph_id": 12,
"text": "Benzoic acid is mainly consumed in the production of phenol by oxidative decarboxylation at 300−400 °C:",
"title": "Uses"
},
{
"paragraph_id": 13,
"text": "The temperature required can be lowered to 200 °C by the addition of catalytic amounts of copper(II) salts. The phenol can be converted to cyclohexanol, which is a starting material for nylon synthesis.",
"title": "Uses"
},
{
"paragraph_id": 14,
"text": "Benzoate plasticizers, such as the glycol-, diethyleneglycol-, and triethyleneglycol esters, are obtained by transesterification of methyl benzoate with the corresponding diol. These plasticizers, which are used similarly to those derived from terephthalic acid ester, represent alternatives to phthalates.",
"title": "Uses"
},
{
"paragraph_id": 15,
"text": "Benzoic acid and its salts are used as food preservatives, represented by the E numbers E210, E211, E212, and E213. Benzoic acid inhibits the growth of mold, yeast and some bacteria. It is either added directly or created from reactions with its sodium, potassium, or calcium salt. The mechanism starts with the absorption of benzoic acid into the cell. If the intracellular pH changes to 5 or lower, the anaerobic fermentation of glucose through phosphofructokinase is decreased by 95%. The efficacy of benzoic acid and benzoate is thus dependent on the pH of the food. Benzoic acid, benzoates and their derivatives are used as preservatives for acidic foods and beverages such as citrus fruit juices (citric acid), sparkling drinks (carbon dioxide), soft drinks (phosphoric acid), pickles (vinegar) and other acidified foods.",
"title": "Uses"
},
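The pH dependence described above can be made quantitative with the Henderson–Hasselbalch equation, since it is chiefly the undissociated acid that crosses the cell membrane. A minimal sketch, assuming the commonly quoted pKa of about 4.2 for benzoic acid:

```python
PKA = 4.2  # commonly quoted pKa of benzoic acid (assumed here)

def undissociated_fraction(ph: float) -> float:
    """Henderson-Hasselbalch: fraction of benzoic acid in the neutral, membrane-permeant form."""
    return 1.0 / (1.0 + 10 ** (ph - PKA))

for ph in (3.0, 4.2, 6.0):
    print(f"pH {ph}: {undissociated_fraction(ph):.1%} undissociated")
# pH 3.0 -> ~94%, pH 6.0 -> ~1.6%: hence the restriction to acidic foods
```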
{
"paragraph_id": 16,
"text": "Typical concentrations of benzoic acid as a preservative in food are between 0.05 and 0.1%. Foods in which benzoic acid may be used and maximum levels for its application are controlled by local food laws.",
"title": "Uses"
},
{
"paragraph_id": 17,
"text": "Concern has been expressed that benzoic acid and its salts may react with ascorbic acid (vitamin C) in some soft drinks, forming small quantities of carcinogenic benzene.",
"title": "Uses"
},
{
"paragraph_id": 18,
"text": "Benzoic acid is a constituent of Whitfield's ointment which is used for the treatment of fungal skin diseases such as ringworm and athlete's foot. As the principal component of gum benzoin, benzoic acid is also a major ingredient in both tincture of benzoin and Friar's balsam. Such products have a long history of use as topical antiseptics and inhalant decongestants.",
"title": "Uses"
},
{
"paragraph_id": 19,
"text": "Benzoic acid was used as an expectorant, analgesic, and antiseptic in the early 20th century.",
"title": "Uses"
},
{
"paragraph_id": 20,
"text": "In teaching laboratories, benzoic acid is a common standard for calibrating a bomb calorimeter.",
"title": "Uses"
},
{
"paragraph_id": 21,
"text": "Benzoic acid occurs naturally, as do its esters, in many plant and animal species. Appreciable amounts are found in most berries (around 0.05%). Ripe fruits of several Vaccinium species (e.g., cranberry, V. macrocarpon; bilberry, V. myrtillus) contain as much as 0.03–0.13% free benzoic acid. Benzoic acid is also formed in apples after infection with the fungus Nectria galligena. Among animals, benzoic acid has been identified primarily in omnivorous or phytophagous species, e.g., in viscera and muscles of the rock ptarmigan (Lagopus muta) as well as in gland secretions of male muskoxen (Ovibos moschatus) or Asian bull elephants (Elephas maximus). Gum benzoin contains up to 20% of benzoic acid and 40% benzoic acid esters.",
"title": "Biology and health effects"
},
{
"paragraph_id": 22,
"text": "In terms of its biosynthesis, benzoate is produced in plants from cinnamic acid. A pathway has been identified from phenol via 4-hydroxybenzoate.",
"title": "Biology and health effects"
},
{
"paragraph_id": 23,
"text": "Reactions of benzoic acid can occur at either the aromatic ring or at the carboxyl group.",
"title": "Reactions"
},
{
"paragraph_id": 24,
"text": "Electrophilic aromatic substitution takes place mainly at the 3-position because of the electron-withdrawing carboxyl group; i.e., benzoic acid is meta directing.",
"title": "Reactions"
},
{
"paragraph_id": 25,
"text": "Reactions typical of carboxylic acids also apply to benzoic acid.",
"title": "Reactions"
},
{
"paragraph_id": 26,
"text": "",
"title": "Reactions"
},
{
"paragraph_id": 27,
"text": "Benzoic acid is excreted as hippuric acid. It is metabolized by butyrate-CoA ligase into an intermediate product, benzoyl-CoA, which is then metabolized by glycine N-acyltransferase into hippuric acid. Humans also metabolize toluene, which is likewise excreted as hippuric acid.",
"title": "Safety and mammalian metabolism"
},
{
"paragraph_id": 28,
"text": "For humans, the World Health Organization's International Programme on Chemical Safety (IPCS) suggests a provisional tolerable intake of 5 mg/kg body weight per day. Cats have a significantly lower tolerance of benzoic acid and its salts than rats and mice; the lethal dose for cats can be as low as 300 mg/kg body weight. The oral LD50 for rats is 3040 mg/kg; for mice it is 1940–2263 mg/kg.",
"title": "Safety and mammalian metabolism"
},
{
"paragraph_id": 29,
"text": "In Taipei, Taiwan, a city health survey in 2010 found that 30% of dried and pickled food products contained benzoic acid.",
"title": "Safety and mammalian metabolism"
}
] | Benzoic acid is a white solid organic compound with the formula C6H5COOH, whose structure consists of a benzene ring (C6H6) with a carboxyl (−COOH) substituent. The benzoyl group is often abbreviated "Bz", thus benzoic acid is also denoted as BzOH, since the benzoyl group has the formula C6H5CO–. It is the simplest aromatic carboxylic acid. The name is derived from gum benzoin, which was for a long time its only source. Benzoic acid occurs naturally in many plants and serves as an intermediate in the biosynthesis of many secondary metabolites. Salts of benzoic acid are used as food preservatives. Benzoic acid is an important precursor for the industrial synthesis of many other organic substances. The salts and esters of benzoic acid are known as benzoates. | 2002-02-21T08:27:18Z | 2023-12-17T10:33:27Z | [
"Template:ICSC",
"Template:SIDS",
"Template:Consumer Food Safety",
"Template:Short description",
"Template:LD50",
"Template:Reflist",
"Template:Ullmann",
"Template:OrgSynth",
"Template:Local anesthetics",
"Template:Chem2",
"Template:Anchor",
"Template:Cite web",
"Template:See also",
"Template:Cite book",
"Template:Cite journal",
"Template:Webarchive",
"Template:Cite news",
"Template:Use dmy dates",
"Template:IPAc-en",
"Template:Disputed inline",
"Template:Commons category",
"Template:Authority control",
"Template:Chembox",
"Template:Annotated link",
"Template:Anti-arthropod medications"
] | https://en.wikipedia.org/wiki/Benzoic_acid |
4,107 | Boltzmann distribution | In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form:
where pi is the probability of the system being in state i, exp is the exponential function, εi is the energy of that state, and a constant kT of the distribution is the product of the Boltzmann constant k and thermodynamic temperature T. The symbol ∝ denotes proportionality (see § The distribution for the proportionality constant).
The term system here has a wide meaning; it can range from a collection of a 'sufficient number' of atoms or a single atom to a macroscopic system such as a natural gas storage tank. Therefore, the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied.
The ratio of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference:
The Boltzmann distribution is named after Ludwig Boltzmann who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium". The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902.
The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell–Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy, while the Maxwell–Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas, however, does follow the Boltzmann distribution.
The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and temperature of the system to which the distribution is applied. It is given as
where:
Using Lagrange multipliers, one can prove that the Boltzmann distribution is the distribution that maximizes the entropy
subject to the normalization constraint that ∑ pi = 1 and the constraint that ∑ piεi equals a particular mean energy value.
The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms the partition function values can be found in the NIST Atomic Spectra Database.
The distribution shows that states with lower energy will always have a higher probability of being occupied than the states with higher energy. It can also give us the quantitative relationship between the probabilities of the two states being occupied. The ratio of probabilities for states i and j is given as
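As a concrete illustration, the two preceding statements translate directly into a few lines of code. The sketch below uses three illustrative state energies (assumed values, in units of kT), normalizes by the partition function Z, and checks the Boltzmann factor for one pair of states:

```python
import math

def boltzmann_probabilities(energies_kT):
    """p_i = exp(-e_i/kT) / Z for state energies given in units of kT."""
    weights = [math.exp(-e) for e in energies_kT]
    z = sum(weights)  # partition function
    return [w / z for w in weights]

energies = [0.0, 1.0, 2.0]           # illustrative state energies, in kT
p = boltzmann_probabilities(energies)
print(p)                             # lower energy -> higher probability
print(p[1] / p[0], math.exp(-1.0))   # Boltzmann factor exp(-(e_1 - e_0)/kT)
```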
where:
The corresponding ratio of populations of energy levels must also take their degeneracies into account.
The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over bound states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state i is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state i. This probability is equal to the number of particles in state i divided by the total number of particles in the system, that is the fraction of particles that occupy state i.
where Ni is the number of particles in state i and N is the total number of particles in the system. We may use the Boltzmann distribution to find this probability, which is, as we have seen, equal to the fraction of particles that are in state i. So the equation giving the fraction of particles in state i as a function of the energy of that state is
This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another. In order for this to be possible, there must be some particles in the first state to undergo the transition. We may find that this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state. This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition.
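The spectroscopic argument can be checked with a back-of-the-envelope population ratio for a two-level system; the energy gaps, degeneracies, and temperature in the sketch below are illustrative assumptions:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def population_ratio(delta_e_ev, g_upper=1, g_lower=1, temperature=300.0):
    """N_upper / N_lower for a two-level system, including degeneracies."""
    return (g_upper / g_lower) * math.exp(-delta_e_ev / (K_B * temperature))

print(population_ratio(1.0))    # ~1.6e-17: a 1 eV gap is essentially unpopulated at 300 K
print(population_ratio(0.001))  # ~0.96: a 1 meV gap is almost equally populated at 300 K
```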
The softmax function commonly used in machine learning is related to the Boltzmann distribution:
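The connection can be made explicit: applying softmax to scores si = −εi/(kT) reproduces the Boltzmann probabilities exactly. A minimal sketch in plain Python, with no ML framework assumed:

```python
import math

def softmax(scores):
    m = max(scores)                           # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

energies = [0.0, 1.0, 2.0]               # in units of kT, as above
print(softmax([-e for e in energies]))   # identical to the Boltzmann probabilities above
```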
Distribution of the form
is called generalized Boltzmann distribution by some authors.
The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe the canonical ensemble, grand canonical ensemble, and isothermal–isobaric ensemble. The generalized Boltzmann distribution is usually derived from the principle of maximum entropy, but there are other derivations.
The generalized Boltzmann distribution has the following properties:
The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble. Some special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects:
Although these cases have strong similarities, it is helpful to distinguish them as they generalize in different ways when the crucial assumptions are changed:
In more general mathematical settings, the Boltzmann distribution is also known as the Gibbs measure. In statistics and machine learning, it is called a log-linear model. In deep learning, the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine, restricted Boltzmann machine, energy-based models and deep Boltzmann machine. In deep learning, the Boltzmann machine is considered to be one of the unsupervised learning models. In the design of Boltzmann machines in deep learning, as the number of nodes increases, real-time implementation becomes increasingly difficult, so a different architecture, the restricted Boltzmann machine, was introduced.
The Boltzmann distribution can be introduced to allocate permits in emissions trading. The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries.
The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization. | [
{
"paragraph_id": 0,
"text": "In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form:",
"title": ""
},
{
"paragraph_id": 1,
"text": "where pi is the probability of the system being in state i, exp is the exponential function, εi is the energy of that state, and a constant kT of the distribution is the product of the Boltzmann constant k and thermodynamic temperature T. The symbol ∝ denotes proportionality (see § The distribution for the proportionality constant).",
"title": ""
},
{
"paragraph_id": 2,
"text": "The term system here has a wide meaning; it can range from a collection of a 'sufficient number' of atoms or a single atom to a macroscopic system such as a natural gas storage tank. Therefore, the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The ratio of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference:",
"title": ""
},
{
"paragraph_id": 4,
"text": "The Boltzmann distribution is named after Ludwig Boltzmann who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper \"On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium\". The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell–Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy, while the Maxwell–Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas, however, does follow the Boltzmann distribution.",
"title": ""
},
{
"paragraph_id": 6,
"text": "The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and temperature of the system to which the distribution is applied. It is given as",
"title": "The distribution"
},
{
"paragraph_id": 7,
"text": "where:",
"title": "The distribution"
},
{
"paragraph_id": 8,
"text": "Using Lagrange multipliers, one can prove that the Boltzmann distribution is the distribution that maximizes the entropy",
"title": "The distribution"
},
{
"paragraph_id": 9,
"text": "subject to the normalization constraint that ∑ pi = 1 and the constraint that ∑ piεi equals a particular mean energy value.",
"title": "The distribution"
},
{
"paragraph_id": 10,
"text": "The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms the partition function values can be found in the NIST Atomic Spectra Database.",
"title": "The distribution"
},
{
"paragraph_id": 11,
"text": "The distribution shows that states with lower energy will always have a higher probability of being occupied than the states with higher energy. It can also give us the quantitative relationship between the probabilities of the two states being occupied. The ratio of probabilities for states i and j is given as",
"title": "The distribution"
},
{
"paragraph_id": 12,
"text": "where:",
"title": "The distribution"
},
{
"paragraph_id": 13,
"text": "The corresponding ratio of populations of energy levels must also take their degeneracies into account.",
"title": "The distribution"
},
{
"paragraph_id": 14,
"text": "The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over bound states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state i is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state i. This probability is equal to the number of particles in state i divided by the total number of particles in the system, that is the fraction of particles that occupy state i.",
"title": "The distribution"
},
{
"paragraph_id": 15,
"text": "where Ni is the number of particles in state i and N is the total number of particles in the system. We may use the Boltzmann distribution to find this probability, which is, as we have seen, equal to the fraction of particles that are in state i. So the equation giving the fraction of particles in state i as a function of the energy of that state is",
"title": "The distribution"
},
{
"paragraph_id": 16,
"text": "This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another. In order for this to be possible, there must be some particles in the first state to undergo the transition. We may find that this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state. This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition.",
"title": "The distribution"
},
{
"paragraph_id": 17,
"text": "The softmax function commonly used in machine learning is related to the Boltzmann distribution:",
"title": "The distribution"
},
{
"paragraph_id": 18,
"text": "Distribution of the form",
"title": "Generalized Boltzmann distribution"
},
{
"paragraph_id": 19,
"text": "is called generalized Boltzmann distribution by some authors.",
"title": "Generalized Boltzmann distribution"
},
{
"paragraph_id": 20,
"text": "The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe the canonical ensemble, grand canonical ensemble, and isothermal–isobaric ensemble. The generalized Boltzmann distribution is usually derived from the principle of maximum entropy, but there are other derivations.",
"title": "Generalized Boltzmann distribution"
},
{
"paragraph_id": 21,
"text": "The generalized Boltzmann distribution has the following properties:",
"title": "Generalized Boltzmann distribution"
},
{
"paragraph_id": 22,
"text": "The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble. Some special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects:",
"title": "In statistical mechanics"
},
{
"paragraph_id": 23,
"text": "Although these cases have strong similarities, it is helpful to distinguish them as they generalize in different ways when the crucial assumptions are changed:",
"title": "In statistical mechanics"
},
{
"paragraph_id": 24,
"text": "In more general mathematical settings, the Boltzmann distribution is also known as the Gibbs measure. In statistics and machine learning, it is called a log-linear model. In deep learning, the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine, restricted Boltzmann machine, energy-based models and deep Boltzmann machine. In deep learning, the Boltzmann machine is considered to be one of the unsupervised learning models. In the design of Boltzmann machines in deep learning, as the number of nodes increases, real-time implementation becomes increasingly difficult, so a different architecture, the restricted Boltzmann machine, was introduced.",
"title": "In mathematics"
},
{
"paragraph_id": 25,
"text": "The Boltzmann distribution can be introduced to allocate permits in emissions trading. The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries.",
"title": "In economics"
},
{
"paragraph_id": 26,
"text": "The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization.",
"title": "In economics"
}
] | In statistical mechanics and mathematics, a Boltzmann distribution is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form: where pi is the probability of the system being in state i, exp is the exponential function, εi is the energy of that state, and a constant kT of the distribution is the product of the Boltzmann constant k and thermodynamic temperature T. The symbol ∝ denotes proportionality. The term system here has a wide meaning; it can range from a collection of 'sufficient number' of atoms or a single atom to a macroscopic system such as a natural gas storage tank. Therefore the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied. The ratio of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference: The Boltzmann distribution is named after Ludwig Boltzmann who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper “On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium" The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902. The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell-Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy, while the Maxwell-Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas however, does follow the Boltzmann distribution. | 2001-08-31T06:53:27Z | 2023-09-24T10:26:18Z | [
"Template:Short description",
"Template:About",
"Template:Tmath",
"Template:Cite journal",
"Template:Mvar",
"Template:Main",
"Template:Section link",
"Template:Cite book",
"Template:Cite web",
"Template:Use American English",
"Template:Math",
"Template:R",
"Template:Reflist",
"Template:Probability distributions"
] | https://en.wikipedia.org/wiki/Boltzmann_distribution |
4,110 | Leg theory | Leg theory is a bowling tactic in the sport of cricket. The term leg theory is somewhat archaic, but the basic tactic remains in play in modern cricket.
Simply put, leg theory involves concentrating the bowling attack at or near the line of leg stump. This may or may not be accompanied by a concentration of fielders on the leg side. The line of attack aims to cramp the batsman, making him play the ball with the bat close to the body. This makes it difficult to hit the ball freely and score runs, especially on the off side. Since a leg theory attack means the batsman is more likely to hit the ball on the leg side, additional fielders on that side of the field can be effective in preventing runs and taking catches.
Stifling the batsman in this manner can lead to impatience and frustration, resulting in rash play by the batsman which in turn can lead to a quick dismissal. Concentrating attack on the leg stump is considered by many cricket fans and commentators to lead to boring play, as it stifles run scoring and encourages batsmen to play conservatively.
Leg theory can be a moderately successful tactic when used with both fast bowling and spin bowling, particularly leg spin to right-handed batsmen or off spin to left-handed batsmen. However, because it relies on lack of concentration or discipline by the batsman, it can be risky against patient and skilled players, especially batsmen who are strong on the leg side. The English opening bowlers Sydney Barnes and Frank Foster used leg theory with some success in Australia in 1911–12. In England, at around the same time Fred Root was one of the main proponents of the same tactic.
In 1930, England captain Douglas Jardine, together with Nottinghamshire's captain Arthur Carr and his bowlers Harold Larwood and Bill Voce, developed a variant of leg theory in which the bowlers bowled fast, short-pitched balls that would rise into the batsman's body, together with a heavily stacked ring of close fielders on the leg side. The idea was that when the batsman defended against the ball, he would be likely to deflect the ball into the air for a catch.
Jardine called this modified form of the tactic fast leg theory. On the 1932–33 English tour of Australia, Larwood and Voce bowled fast leg theory at the Australian batsmen. It turned out to be extremely dangerous, and most Australian players sustained injuries from being hit by the ball. Wicket-keeper Bert Oldfield's skull was fractured by a ball hitting his head (although the ball had first glanced off the bat and Larwood had an orthodox field), almost precipitating a riot by the Australian crowd.
The Australian press dubbed the tactic Bodyline, and claimed it was a deliberate attempt by the English team to intimidate and injure the Australian players. Reports of the controversy reaching England at the time described the bowling as fast leg theory, which sounded to many people to be a harmless and well-established tactic. This led to a serious misunderstanding amongst the English public and the Marylebone Cricket Club – the administrators of English cricket – of the dangers posed by Bodyline. The English press and cricket authorities declared the Australian protests to be a case of sore losing and "squealing".
It was only with the return of the English team, and the subsequent use of Bodyline against English players in England by the touring West Indian cricket team in 1933, that the dangers it posed were demonstrated to the country. The MCC subsequently revised the Laws of Cricket to prevent the use of "fast leg theory" tactics in future, also limiting the traditional tactic.
{
"paragraph_id": 0,
"text": "Leg theory is a bowling tactic in the sport of cricket. The term leg theory is somewhat archaic, but the basic tactic remains in play in modern cricket.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Simply put, leg theory involves concentrating the bowling attack at or near the line of leg stump. This may or may not be accompanied by a concentration of fielders on the leg side. The line of attack aims to cramp the batsman, making him play the ball with the bat close to the body. This makes it difficult to hit the ball freely and score runs, especially on the off side. Since a leg theory attack means the batsman is more likely to hit the ball on the leg side, additional fielders on that side of the field can be effective in preventing runs and taking catches.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Stifling the batsman in this manner can lead to impatience and frustration, resulting in rash play by the batsman which in turn can lead to a quick dismissal. Concentrating attack on the leg stump is considered by many cricket fans and commentators to lead to boring play, as it stifles run scoring and encourages batsmen to play conservatively.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Leg theory can be a moderately successful tactic when used with both fast bowling and spin bowling, particularly leg spin to right-handed batsmen or off spin to left-handed batsmen. However, because it relies on lack of concentration or discipline by the batsman, it can be risky against patient and skilled players, especially batsmen who are strong on the leg side. The English opening bowlers Sydney Barnes and Frank Foster used leg theory with some success in Australia in 1911–12. In England, at around the same time Fred Root was one of the main proponents of the same tactic.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In 1930, England captain Douglas Jardine, together with Nottinghamshire's captain Arthur Carr and his bowlers Harold Larwood and Bill Voce, developed a variant of leg theory in which the bowlers bowled fast, short-pitched balls that would rise into the batsman's body, together with a heavily stacked ring of close fielders on the leg side. The idea was that when the batsman defended against the ball, he would be likely to deflect the ball into the air for a catch.",
"title": "Fast leg theory"
},
{
"paragraph_id": 5,
"text": "Jardine called this modified form of the tactic fast leg theory. On the 1932–33 English tour of Australia, Larwood and Voce bowled fast leg theory at the Australian batsmen. It turned out to be extremely dangerous, and most Australian players sustained injuries from being hit by the ball. Wicket-keeper Bert Oldfield's skull was fractured by a ball hitting his head (although the ball had first glanced off the bat and Larwood had an orthodox field), almost precipitating a riot by the Australian crowd.",
"title": "Fast leg theory"
},
{
"paragraph_id": 6,
"text": "The Australian press dubbed the tactic Bodyline, and claimed it was a deliberate attempt by the English team to intimidate and injure the Australian players. Reports of the controversy reaching England at the time described the bowling as fast leg theory, which sounded to many people to be a harmless and well-established tactic. This led to a serious misunderstanding amongst the English public and the Marylebone Cricket Club – the administrators of English cricket – of the dangers posed by Bodyline. The English press and cricket authorities declared the Australian protests to be a case of sore losing and \"squealing\".",
"title": "Fast leg theory"
},
{
"paragraph_id": 7,
"text": "It was only with the return of the English team, and the subsequent use of Bodyline against English players in England by the touring West Indian cricket team in 1933, that the dangers it posed were demonstrated to the country. The MCC subsequently revised the Laws of Cricket to prevent the use of \"fast leg theory\" tactics in future, also limiting the traditional tactic.",
"title": "Fast leg theory"
}
] | Leg theory is a bowling tactic in the sport of cricket. The term leg theory is somewhat archaic, but the basic tactic remains in play in modern cricket. Simply put, leg theory involves concentrating the bowling attack at or near the line of leg stump. This may or may not be accompanied by a concentration of fielders on the leg side. The line of attack aims to cramp the batsman, making him play the ball with the bat close to the body. This makes it difficult to hit the ball freely and score runs, especially on the off side. Since a leg theory attack means the batsman is more likely to hit the ball on the leg side, additional fielders on that side of the field can be effective in preventing runs and taking catches. Stifling the batsman in this manner can lead to impatience and frustration, resulting in rash play by the batsman which in turn can lead to a quick dismissal. Concentrating attack on the leg stump is considered by many cricket fans and commentators to lead to boring play, as it stifles run scoring and encourages batsmen to play conservatively. Leg theory can be a moderately successful tactic when used with both fast bowling and spin bowling, particularly leg spin to right-handed batsmen or off spin to left-handed batsmen. However, because it relies on lack of concentration or discipline by the batsman, it can be risky against patient and skilled players, especially batsmen who are strong on the leg side. The English opening bowlers Sydney Barnes and Frank Foster used leg theory with some success in Australia in 1911–12. In England, at around the same time Fred Root was one of the main proponents of the same tactic. | 2002-02-25T15:43:11Z | 2023-12-16T17:24:26Z | [
"Template:Main",
"Template:Reflist",
"Template:Cite web"
] | https://en.wikipedia.org/wiki/Leg_theory |
4,110 | Blythe Danner | Blythe Katherine Danner (born February 3, 1943) is an American actress. Accolades she has received include two Primetime Emmy Awards for Best Supporting Actress in a Drama Series for her role as Izzy Huffstodt on Huff (2004–2006), and a Tony Award for Best Featured Actress for her performance in Butterflies Are Free on Broadway (1969–1972). Danner was twice nominated for the Primetime Emmy for Outstanding Guest Actress in a Comedy Series for portraying Marilyn Truman on Will & Grace (2001–06; 2018–20), and the Primetime Emmy for Outstanding Lead Actress in a Miniseries or Movie for her roles in We Were the Mulvaneys (2002) and Back When We Were Grownups (2004). For the latter, she also received a Golden Globe Award nomination.
Danner played Dina Byrnes in Meet the Parents (2000) and its sequels Meet the Fockers (2004) and Little Fockers (2010). She has collaborated on several occasions with Woody Allen, appearing in three of his films: Another Woman (1988), Alice (1990), and Husbands and Wives (1992). Her other notable film credits include 1776 (1972), Hearts of the West (1975), The Great Santini (1979), Mr. and Mrs. Bridge (1990), The Prince of Tides (1991), To Wong Foo, Thanks for Everything! Julie Newmar (1995), The Myth of Fingerprints (1997), The X-Files (1998), Forces of Nature (1999), The Love Letter (1999), The Last Kiss (2006), Paul (2011), Hello I Must Be Going (2012), I'll See You in My Dreams (2015), and What They Had (2018).
Danner is the sister of Harry Danner and the widow of Bruce Paltrow. She is the mother of actress Gwyneth Paltrow and director Jake Paltrow.
Danner was born in Philadelphia, Pennsylvania, the daughter of Katharine (née Kile) and Harry Earl Danner, a bank executive. She has a brother, opera singer and actor Harry Danner; a sister, performer-turned-director Dorothy "Dottie" Danner; and a maternal half-brother, violin maker William Moennig III. Danner has Pennsylvania Dutch (German), and some English and Irish ancestry; her maternal grandmother was a German immigrant, and one of her paternal great-grandmothers was born in Barbados (to a family of European descent).
Danner graduated from George School, a Quaker high school located near Newtown, Bucks County, Pennsylvania, in 1960.
A graduate of Bard College, Danner had her first roles in the 1967 musical Mata Hari and the 1968 Off-Broadway production of Summertree. Her early Broadway appearances included Cyrano de Bergerac (1968) and her Theatre World Award-winning performance in The Miser (1969). She won the Tony Award for Best Featured Actress in a Play for portraying a free-spirited divorcée in Butterflies Are Free (1970).
In 1972, Danner portrayed Martha Jefferson in the film version of 1776. That same year, she played the unknowing wife of a husband who committed murder, opposite Peter Falk and John Cassavetes, in the Columbo episode "Etude in Black".
Her earliest starring film role was opposite Alan Alda in To Kill a Clown (1972). Danner appeared in the episode of M*A*S*H entitled "The More I See You", playing the love interest of Alda's character Hawkeye Pierce. She played lawyer Amanda Bonner in television's Adam's Rib, opposite Ken Howard as Adam Bonner. She played Zelda Fitzgerald in F. Scott Fitzgerald and 'The Last of the Belles' (1974). She was the eponymous heroine in the film Lovin' Molly (1974) (directed by Sidney Lumet). She appeared in Futureworld, playing Tracy Ballard with co-star Peter Fonda (1976). In the 1982 TV movie Inside the Third Reich, she played the wife of Albert Speer. In the film version of Neil Simon's semi-autobiographical play Brighton Beach Memoirs (1986), she portrayed a middle-aged Jewish mother. She has appeared in two films based on the novels of Pat Conroy, The Great Santini (1979) and The Prince of Tides (1991), as well as two television movies adapted from books by Anne Tyler, Saint Maybe and Back When We Were Grownups, both for the Hallmark Hall of Fame.
Danner appeared opposite Robert De Niro in the 2000 comedy hit Meet the Parents, and its sequels, Meet the Fockers (2004) and Little Fockers (2010).
From 2001 to 2006, she regularly appeared on NBC's sitcom Will & Grace as Will Truman's mother Marilyn. From 2004 to 2006, she starred in the main cast of the comedy-drama series Huff. In 2005, she was nominated for three Primetime Emmy Awards for her work on Will & Grace, Huff, and the television film Back When We Were Grownups, winning for her role in Huff. The following year, she won a second consecutive Emmy Award for Huff. For 25 years, she has been a regular performer at the Williamstown Summer Theater Festival, where she also serves on the board of directors.
In 2006, Danner was awarded an inaugural Katharine Hepburn Medal by Bryn Mawr College's Katharine Houghton Hepburn Center. In 2015, Danner was inducted into the American Theater Hall of Fame.
Danner has been involved in environmental issues such as recycling and conservation for over 30 years. She has been active with INFORM, Inc., is on the Board of Environmental Advocates of New York and the board of directors of the Environmental Media Association, and won the 2002 EMA Board of Directors Ongoing Commitment Award. In 2011, Danner joined Moms Clean Air Force, to help call on parents to join in the fight against toxic air pollution.
After the death of her husband Bruce Paltrow from oral cancer, she became involved with the nonprofit Oral Cancer Foundation. In 2005, she filmed a public service announcement to raise public awareness of the disease and the need for early detection. She has since appeared on morning talk shows and given interviews in such magazines as People. The Bruce Paltrow Oral Cancer Fund, administered by the Oral Cancer Foundation, raises funding for oral cancer research and treatment, with a particular focus on those communities in which healthcare disparities exist.
She has also appeared in commercials for Prolia, a brand of denosumab used in the treatment of osteoporosis.
Danner was married to producer and director Bruce Paltrow, who died of oral cancer in 2002. She and Paltrow had two children together, actress Gwyneth Paltrow and director Jake Paltrow.
Danner's niece is the actress Katherine Moennig, the daughter of her maternal half-brother William.
Danner co-starred with her daughter in the 1992 television film Cruel Doubt and again in the 2003 film Sylvia, in which she portrayed Aurelia Plath, mother to Gwyneth's title role of Sylvia Plath.
Danner is a practitioner of transcendental meditation, which she has described as "very helpful and comforting". | [
{
"paragraph_id": 0,
"text": "Blythe Katherine Danner (born February 3, 1943) is an American actress. Accolades she has received include two Primetime Emmy Awards for Best Supporting Actress in a Drama Series for her role as Izzy Huffstodt on Huff (2004–2006), and a Tony Award for Best Featured Actress for her performance in Butterflies Are Free on Broadway (1969–1972). Danner was twice nominated for the Primetime Emmy for Outstanding Guest Actress in a Comedy Series for portraying Marilyn Truman on Will & Grace (2001–06; 2018–20), and the Primetime Emmy for Outstanding Lead Actress in a Miniseries or Movie for her roles in We Were the Mulvaneys (2002) and Back When We Were Grownups (2004). For the latter, she also received a Golden Globe Award nomination.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Danner played Dina Byrnes in Meet the Parents (2000) and its sequels Meet the Fockers (2004) and Little Fockers (2010). She has collaborated on several occasions with Woody Allen, appearing in three of his films: Another Woman (1988), Alice (1990), and Husbands and Wives (1992). Her other notable film credits include 1776 (1972), Hearts of the West (1975), The Great Santini (1979), Mr. and Mrs. Bridge (1990), The Prince of Tides (1991), To Wong Foo, Thanks for Everything! Julie Newmar (1995), The Myth of Fingerprints (1997), The X-Files (1998), Forces of Nature (1999), The Love Letter (1999), The Last Kiss (2006), Paul (2011), Hello I Must Be Going (2012), I'll See You in My Dreams (2015), and What They Had (2018).",
"title": ""
},
{
"paragraph_id": 2,
"text": "Danner is the sister of Harry Danner and the widow of Bruce Paltrow. She is the mother of actress Gwyneth Paltrow and director Jake Paltrow.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Danner was born in Philadelphia, Pennsylvania, the daughter of Katharine (née Kile) and Harry Earl Danner, a bank executive. She has a brother, opera singer and actor Harry Danner; a sister, performer-turned-director Dorothy \"Dottie\" Danner; and a maternal half-brother, violin maker William Moennig III. Danner has Pennsylvania Dutch (German), and some English and Irish ancestry; her maternal grandmother was a German immigrant, and one of her paternal great-grandmothers was born in Barbados (to a family of European descent).",
"title": "Early life"
},
{
"paragraph_id": 4,
"text": "Danner graduated from George School, a Quaker high school located near Newtown, Bucks County, Pennsylvania, in 1960.",
"title": "Early life"
},
{
"paragraph_id": 5,
"text": "A graduate of Bard College, Danner had her first roles in the 1967 musical Mata Hari and the 1968 Off-Broadway production of Summertree. Her early Broadway appearances included Cyrano de Bergerac (1968) and her Theatre World Award-winning performance in The Miser (1969). She won the Tony Award for Best Featured Actress in a Play for portraying a free-spirited divorcée in Butterflies Are Free (1970).",
"title": "Career"
},
{
"paragraph_id": 6,
"text": "In 1972, Danner portrayed Martha Jefferson in the film version of 1776. That same year, she played the unknowing wife of a husband who committed murder, opposite Peter Falk and John Cassavetes, in the Columbo episode \"Etude in Black\".",
"title": "Career"
},
{
"paragraph_id": 7,
"text": "Her earliest starring film role was opposite Alan Alda in To Kill a Clown (1972). Danner appeared in the episode of M*A*S*H entitled \"The More I See You\", playing the love interest of Alda's character Hawkeye Pierce. She played lawyer Amanda Bonner in television's Adam's Rib, opposite Ken Howard as Adam Bonner. She played Zelda Fitzgerald in F. Scott Fitzgerald and 'The Last of the Belles' (1974). She was the eponymous heroine in the film Lovin' Molly (1974) (directed by Sidney Lumet). She appeared in Futureworld, playing Tracy Ballard with co-star Peter Fonda (1976). In the 1982 TV movie Inside the Third Reich, she played the wife of Albert Speer. In the film version of Neil Simon's semi-autobiographical play Brighton Beach Memoirs (1986), she portrayed a middle-aged Jewish mother. She has appeared in two films based on the novels of Pat Conroy, The Great Santini (1979) and The Prince of Tides (1991), as well as two television movies adapted from books by Anne Tyler, Saint Maybe and Back When We Were Grownups, both for the Hallmark Hall of Fame.",
"title": "Career"
},
{
"paragraph_id": 8,
"text": "Danner appeared opposite Robert De Niro in the 2000 comedy hit Meet the Parents, and its sequels, Meet the Fockers (2004) and Little Fockers (2010).",
"title": "Career"
},
{
"paragraph_id": 9,
"text": "From 2001 to 2006, she regularly appeared on NBC's sitcom Will & Grace as Will Truman's mother Marilyn. From 2004 to 2006, she starred in the main cast of the comedy-drama series Huff. In 2005, she was nominated for three Primetime Emmy Awards for her work on Will & Grace, Huff, and the television film Back When We Were Grownups, winning for her role in Huff. The following year, she won a second consecutive Emmy Award for Huff. For 25 years, she has been a regular performer at the Williamstown Summer Theater Festival, where she also serves on the board of directors.",
"title": "Career"
},
{
"paragraph_id": 10,
"text": "In 2006, Danner was awarded an inaugural Katharine Hepburn Medal by Bryn Mawr College's Katharine Houghton Hepburn Center. In 2015, Danner was inducted into the American Theater Hall of Fame.",
"title": "Career"
},
{
"paragraph_id": 11,
"text": "Danner has been involved in environmental issues such as recycling and conservation for over 30 years. She has been active with INFORM, Inc., is on the Board of Environmental Advocates of New York and the board of directors of the Environmental Media Association, and won the 2002 EMA Board of Directors Ongoing Commitment Award. In 2011, Danner joined Moms Clean Air Force, to help call on parents to join in the fight against toxic air pollution.",
"title": "Environmental activism"
},
{
"paragraph_id": 12,
"text": "After the death of her husband Bruce Paltrow from oral cancer, she became involved with the nonprofit Oral Cancer Foundation. In 2005, she filmed a public service announcement to raise public awareness of the disease and the need for early detection. She has since appeared on morning talk shows and given interviews in such magazines as People. The Bruce Paltrow Oral Cancer Fund, administered by the Oral Cancer Foundation, raises funding for oral cancer research and treatment, with a particular focus on those communities in which healthcare disparities exist.",
"title": "Health care activism"
},
{
"paragraph_id": 13,
"text": "She has also appeared in commercials for Prolia, a brand of denosumab used in the treatment of osteoporosis.",
"title": "Health care activism"
},
{
"paragraph_id": 14,
"text": "Danner was married to producer and director Bruce Paltrow, who died of oral cancer in 2002. She and Paltrow had two children together, actress Gwyneth Paltrow and director Jake Paltrow.",
"title": "Personal life"
},
{
"paragraph_id": 15,
"text": "Danner's niece is the actress Katherine Moennig, the daughter of her maternal half-brother William.",
"title": "Personal life"
},
{
"paragraph_id": 16,
"text": "Danner co-starred with her daughter in the 1992 television film Cruel Doubt and again in the 2003 film Sylvia, in which she portrayed Aurelia Plath, mother to Gwyneth's title role of Sylvia Plath.",
"title": "Personal life"
},
{
"paragraph_id": 17,
"text": "Danner is a practitioner of transcendental meditation, which she has described as \"very helpful and comforting\".",
"title": "Personal life"
}
] | Blythe Katherine Danner is an American actress. Accolades she has received include two Primetime Emmy Awards for Best Supporting Actress in a Drama Series for her role as Izzy Huffstodt on Huff (2004–2006), and a Tony Award for Best Featured Actress for her performance in Butterflies Are Free on Broadway (1969–1972). Danner was twice nominated for the Primetime Emmy for Outstanding Guest Actress in a Comedy Series for portraying Marilyn Truman on Will & Grace, and the Primetime Emmy for Outstanding Lead Actress in a Miniseries or Movie for her roles in We Were the Mulvaneys (2002) and Back When We Were Grownups (2004). For the latter, she also received a Golden Globe Award nomination. Danner played Dina Byrnes in Meet the Parents (2000) and its sequels Meet the Fockers (2004) and Little Fockers (2010). She has collaborated on several occasions with Woody Allen, appearing in three of his films: Another Woman (1988), Alice (1990), and Husbands and Wives (1992). Her other notable film credits include 1776 (1972), Hearts of the West (1975), The Great Santini (1979), Mr. and Mrs. Bridge (1990), The Prince of Tides (1991), To Wong Foo, Thanks for Everything! Julie Newmar (1995), The Myth of Fingerprints (1997), The X-Files (1998), Forces of Nature (1999), The Love Letter (1999), The Last Kiss (2006), Paul (2011), Hello I Must Be Going (2012), I'll See You in My Dreams (2015), and What They Had (2018). Danner is the sister of Harry Danner and the widow of Bruce Paltrow. She is the mother of actress Gwyneth Paltrow and director Jake Paltrow. | 2001-09-01T03:34:01Z | 2023-12-31T08:13:17Z | [
"Template:Iobdb name",
"Template:Authority control",
"Template:Short description",
"Template:Use American English",
"Template:Infobox person",
"Template:Cite web",
"Template:Dead link",
"Template:Citation needed",
"Template:Nom",
"Template:Commons category",
"Template:IBDB name",
"Template:Use mdy dates",
"Template:Tooltip",
"Template:Won",
"Template:Reflist",
"Template:Cite book",
"Template:Webarchive",
"Template:IMDb name",
"Template:Navboxes"
] | https://en.wikipedia.org/wiki/Blythe_Danner |
4,111 | Bioleaching | Bioleaching is the extraction or liberation of metals from their ores through the use of living organisms. Bioleaching is one of several applications within biohydrometallurgy and several methods are used to treat ores or concentrates containing copper, zinc, lead, arsenic, antimony, nickel, molybdenum, gold, silver, and cobalt.
Bioleaching falls into two broad categories. The first is the use of microorganisms to oxidize refractory minerals to release valuable metals such as gold and silver. Most commonly, the minerals that are the target of oxidation are pyrite and arsenopyrite.
The second category is leaching of sulfide minerals to release the associated metal, for example, leaching of pentlandite to release nickel, or the leaching of chalcocite, covellite or chalcopyrite to release copper.
Bioleaching can involve numerous ferrous iron and sulfur oxidizing bacteria, including Acidithiobacillus ferrooxidans (formerly known as Thiobacillus ferrooxidans) and Acidithiobacillus thiooxidans (formerly known as Thiobacillus thiooxidans). As a general principle, in one proposed method of bacterial leaching known as Indirect Leaching, Fe3+ ions are used to oxidize the ore. This step is entirely independent of microbes. The role of the bacteria is further oxidation of the ore, but also the regeneration of the chemical oxidant Fe3+ from Fe2+. For example, bacteria catalyse the breakdown of the mineral pyrite (FeS2) by oxidising the sulfur and metal (in this case ferrous iron, Fe2+) using oxygen. This yields soluble products that can be further purified and refined to yield the desired metal.
Pyrite leaching (FeS2): In the first step, disulfide is spontaneously oxidized to thiosulfate by ferric ion (Fe3+), which in turn is reduced to give ferrous ion (Fe2+):
The ferrous ion is then oxidized by bacteria using oxygen:
Thiosulfate is also oxidized by bacteria to give sulfate:
The ferric ion produced in reaction (2) oxidizes more sulfide as in reaction (1), closing the cycle and giving the net reaction:
The net products of the reaction are soluble ferrous sulfate and sulfuric acid.
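Combining reactions (1)–(3) as written yields the conventional overall equation 2 FeS2 + 7 O2 + 2 H2O → 2 FeSO4 + 2 H2SO4. The sketch below uses standard molar masses; the one-tonne basis is an assumption chosen for illustration:

```python
# Net reaction: 2 FeS2 + 7 O2 + 2 H2O -> 2 FeSO4 + 2 H2SO4
M_FES2 = 119.98   # g/mol, pyrite
M_H2SO4 = 98.08   # g/mol, sulfuric acid
M_FESO4 = 151.91  # g/mol, ferrous sulfate

def products_per_tonne_pyrite(tonnes: float = 1.0):
    mol_fes2 = tonnes * 1e6 / M_FES2       # grams -> moles
    acid_t = mol_fes2 * M_H2SO4 / 1e6      # 1 mol H2SO4 per mol FeS2
    ferrous_t = mol_fes2 * M_FESO4 / 1e6   # 1 mol FeSO4 per mol FeS2
    return acid_t, ferrous_t

acid, ferrous = products_per_tonne_pyrite()
print(f"{acid:.2f} t H2SO4 and {ferrous:.2f} t FeSO4 per tonne of FeS2")
# ~0.82 t of sulfuric acid per tonne of pyrite: why acid drainage control matters
```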
The microbial oxidation process occurs at the cell membrane of the bacteria. The electrons pass into the cells and are used in biochemical processes to produce energy for the bacteria while reducing oxygen to water. The critical reaction is the oxidation of sulfide by ferric iron. The main role of the bacterial step is the regeneration of this reactant.
The process for copper is very similar, but the efficiency and kinetics depend on the copper mineralogy. The most efficient minerals are supergene minerals such as chalcocite, Cu2S and covellite, CuS. The main copper mineral chalcopyrite (CuFeS2) is not leached very efficiently, which is why the dominant copper-producing technology remains flotation, followed by smelting and refining. The leaching of CuFeS2 follows the two stages of being dissolved and then further oxidised, with Cu2+ ions being left in solution.
Chalcopyrite leaching:
net reaction:
In general, sulfides are first oxidized to elemental sulfur, whereas disulfides are oxidized to give thiosulfate, and the processes above can be applied to other sulfidic ores. Bioleaching of non-sulfidic ores such as pitchblende also uses ferric iron as an oxidant (e.g., UO2 + 2 Fe3+ ==> UO2^2+ + 2 Fe2+). In this case, the sole purpose of the bacterial step is the regeneration of Fe3+. Sulfidic iron ores can be added to speed up the process and provide a source of iron. Bioleaching of non-sulfidic ores by layering of waste sulfides and elemental sulfur, colonized by Acidithiobacillus spp., has been accomplished, which provides a strategy for accelerated leaching of materials that do not contain sulfide minerals.
The dissolved copper (Cu2+) ions are removed from the solution by ligand exchange solvent extraction, which leaves other ions in the solution. The copper is removed by bonding to a ligand, which is a large molecule consisting of a number of smaller groups, each possessing a lone electron pair. The ligand-copper complex is extracted from the solution using an organic solvent such as kerosene:
The ligand donates electrons to the copper, producing a complex - a central metal atom (copper) bonded to the ligand. Because this complex has no charge, it is no longer attracted to polar water molecules and dissolves in the kerosene, which is then easily separated from the solution. Because the initial reaction is reversible, it is determined by pH. Adding concentrated acid reverses the equation, and the copper ions go back into an aqueous solution.
Then the copper is passed through an electro-winning process to increase its purity: An electric current is passed through the resulting solution of copper ions. Because copper ions have a 2+ charge, they are attracted to the negative cathodes and collect there.
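Because each Cu2+ ion takes up two electrons at the cathode, Faraday's law fixes the plating rate. A minimal sketch; the cell current, run time, and the 100% current-efficiency figure are illustrative assumptions:

```python
FARADAY = 96485.0  # C/mol
M_CU = 63.55       # g/mol
N_ELECTRONS = 2    # Cu2+ + 2e- -> Cu

def copper_deposited_kg(current_a: float, hours: float) -> float:
    """Mass of copper plated onto the cathode, assuming 100% current efficiency."""
    charge = current_a * hours * 3600  # coulombs
    moles = charge / (N_ELECTRONS * FARADAY)
    return moles * M_CU / 1000

# e.g., an illustrative 30 kA cell running for 24 h:
print(f"{copper_deposited_kg(30_000, 24):.0f} kg Cu")  # ~850 kg
```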
The copper can also be concentrated and separated by displacing the copper with Fe from scrap iron:
The electrons lost by the iron are taken up by the copper. Copper is the oxidising agent (it accepts electrons), and iron is the reducing agent (it loses electrons).
Traces of precious metals such as gold may be left in the original solution. Treating the mixture with sodium cyanide in the presence of free oxygen dissolves the gold. The gold is removed from the solution by adsorbing (taking it up on the surface) to charcoal.
Several species of fungi can be used for bioleaching. Fungi can be grown on many different substrates, such as electronic scrap, catalytic converters, and fly ash from municipal waste incineration. Experiments have shown that two fungal strains (Aspergillus niger, Penicillium simplicissimum) were able to mobilize Cu and Sn by 65%, and Al, Ni, Pb, and Zn by more than 95%. Aspergillus niger can produce some organic acids such as citric acid. This form of leaching does not rely on microbial oxidation of metal but rather uses microbial metabolism as source of acids that directly dissolve the metal.
Bioleaching is in general simpler and, therefore, cheaper to operate and maintain than traditional processes, since fewer specialists are needed to operate complex chemical plants. Low concentrations are not a problem for bacteria, because they simply ignore the waste that surrounds the metals, attaining extraction yields of over 90% in some cases. These microorganisms actually gain energy by breaking down minerals into their constituent elements. The company simply collects the ions out of the solution after the bacteria have finished.
Bioleaching can be used to extract metals from low-grade ores, such as some gold ores, that are too poor for other technologies. It can also partially replace the extensive crushing and grinding that translate into prohibitive cost and energy consumption in a conventional process, because the lower cost of bacterial leaching outweighs the time it takes to extract the metal.
High-grade ores, such as many copper ores, are more economical to smelt than to bioleach, because the bacterial leaching process is slow compared to smelting. The slow speed of bioleaching also introduces a significant delay in cash flow for new mines. Nonetheless, at the world's largest copper mine, Escondida in Chile, the process seems to be favorable.
Bioleaching operations can also be very expensive, and many companies that start them cannot keep up with demand and end up in debt.
In 2020, scientists showed, in an experiment testing different gravity environments aboard the ISS, that microorganisms could be employed to mine useful elements from basaltic rocks via bioleaching in space.
The process is more environmentally friendly than traditional extraction methods. For the company, this can translate into profit, since the otherwise necessary limiting of sulfur dioxide emissions during smelting is expensive. Less landscape damage occurs, since the bacteria involved grow naturally, and the mine and surrounding area can be left relatively untouched. And because the bacteria breed in the conditions of the mine, they are easily cultivated and recycled.
Toxic chemicals are sometimes produced in the process. Sulfuric acid and H+ ions that have been formed can leak into the ground and surface water, turning it acidic and causing environmental damage. Heavy metal ions such as iron, zinc, and arsenic leak during acid mine drainage. When the pH of this solution rises, as a result of dilution by fresh water, these ions precipitate, forming "Yellow Boy" pollution. For these reasons, a bioleaching setup must be carefully planned, since the process can lead to a biosafety failure. Unlike other methods, once started, bioheap leaching cannot be quickly stopped, because leaching would still continue with rainwater and natural bacteria. Projects like the Finnish Talvivaara mine proved to be environmentally and economically disastrous.
4,113 | Bouldering | Bouldering is a form of free climbing that is performed on small rock formations or artificial rock walls without the use of ropes or harnesses. While bouldering can be done without any equipment, most climbers use climbing shoes to help secure footholds, chalk to keep their hands dry and to provide a firmer grip, and bouldering mats to prevent injuries from falls. Unlike free solo climbing, which is also performed without ropes, bouldering problems (the sequence of moves that a climber performs to complete the climb) are usually less than six metres (20 ft) tall. Traverses, which are a form of boulder problem, require the climber to climb horizontally from one end to another. Artificial climbing walls allow boulderers to climb indoors in areas without natural boulders. In addition, bouldering competitions take place in both indoor and outdoor settings.
The sport was originally a method of training for roped climbs and mountaineering, so climbers could practice specific moves at a safe distance from the ground. Additionally, the sport served to build stamina and increase finger strength. Throughout the 20th century, bouldering evolved into a separate discipline. Individual problems are assigned ratings based on difficulty. Although there have been various rating systems used throughout the history of bouldering, modern problems usually use either the V-scale or the Fontainebleau scale.
The growing popularity of bouldering has caused several environmental concerns, including soil erosion and trampled vegetation, as climbers often hike off-trail to reach bouldering sites. This has caused some landowners to restrict access or prohibit bouldering altogether.
The characteristics of boulder problems depend largely on the type of rock being climbed. For example, granite often features long cracks and slabs while sandstone rocks are known for their steep overhangs and frequent horizontal breaks. Limestone and volcanic rock are also used for bouldering.
There are many prominent bouldering areas throughout the United States, including Hueco Tanks in Texas, Mount Blue Sky in Colorado, the Appalachian Mountains in the eastern United States, and the Buttermilks in Bishop, California. Squamish, British Columbia is one of the most popular bouldering areas in Canada. Europe is also home to a number of bouldering sites, such as Fontainebleau in France, Meschia in Italy, Albarracín in Spain, and various mountains throughout Switzerland. Africa's most prominent bouldering areas include the well-established Rocklands, South Africa, the newer Oukaïmeden in Morocco, and more recently opened areas like Chimanimani in Zimbabwe.
Artificial climbing walls are used to simulate boulder problems in an indoor environment, usually at climbing gyms. These walls are constructed with wooden panels, polymer cement panels, concrete shells, or precast molds of actual rock walls. Holds, usually made of plastic, are then bolted onto the wall to create problems. Some problems use steep overhanging surfaces which force the climber to support much of their weight using their upper body strength. Other problems are set on flat walls; instead of requiring upper body strength, these problems create difficulty by requiring the climber to execute a series of predetermined movements to complete the route. The IFSC Climbing World Championships have noticeably included more of such problems in recent competitions.
Climbing gyms often feature multiple problems within the same section of wall. In the US, the most common method route-setters use to designate the intended problem is by placing colored tape next to each hold. For example, red tape would indicate one bouldering problem while green tape would be used to set a different problem in the same area. Across much of the rest of the world, problems and grades are usually designated using a set color of plastic hold to indicate problems and their difficulty levels. Using colored holds to set has certain advantages, the most notable of which are that it makes it more obvious where the holds for a problem are, and that there is no chance of tape being accidentally kicked off footholds. Smaller, resource-poor climbing gyms may prefer taped problems because large, expensive holds can be used in multiple routes by marking them with more than one color of tape. Tape can also indicate the hold(s) with which the athlete should start.
Indoor bouldering requires very little equipment: at minimum, climbing shoes; at most, climbing shoes plus a chalk bag, chalk, and a brush.
Bouldering problems are assigned numerical difficulty ratings by route-setters and climbers. The two most widely used rating systems are the V-scale and the Fontainebleau system.
The V-scale, which originated in the United States, is an open-ended rating system with higher numbers indicating a higher degree of difficulty. The V1 rating indicates that a problem can be completed by a novice climber in good physical condition after several attempts. The scale begins at V0; the highest V rating yet assigned to a bouldering problem is V17. Some climbing gyms also use a VB grade to indicate beginner problems.
The Fontainebleau scale follows a similar system, with each numerical grade divided into three ratings with the letters a, b, and c. For example, Fontainebleau 7A roughly corresponds with V6, while Fontainebleau 7C+ is equivalent to V10. In both systems, grades are further differentiated by appending "+" to indicate a small increase in difficulty. Despite this level of specificity, ratings of individual problems are often controversial, as ability level is not the only factor that affects how difficult a problem may be for a particular climber. Height, arm length, flexibility, and other body characteristics can also be relevant to perceived difficulty.
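Since the two scales are compared point-by-point above, a lookup table makes the rough correspondence concrete. The mapping in the sketch below is a commonly cited approximation rather than an official standard; guidebooks differ by roughly a grade either way at some points.

```python
# Approximate V-scale to Fontainebleau correspondences (indicative only;
# conversions vary slightly between guidebooks).
V_TO_FONT = {
    "V0": "4", "V1": "5", "V2": "5+", "V3": "6A", "V4": "6B",
    "V5": "6C", "V6": "7A", "V7": "7A+", "V8": "7B", "V9": "7C",
    "V10": "7C+", "V11": "8A", "V12": "8A+", "V13": "8B",
    "V14": "8B+", "V15": "8C", "V16": "8C+", "V17": "9A",
}

def font_grade(v_grade: str) -> str:
    """Look up the rough Fontainebleau equivalent of a V grade."""
    return V_TO_FONT.get(v_grade.upper(), "unknown")

print(font_grade("V6"))   # 7A, matching the example in the text
print(font_grade("V10"))  # 7C+
```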
Highball bouldering is the climbing of tall, difficult boulders, using the same protection as standard bouldering. This form of bouldering adds a requirement of mental focus to the existing test of physical strength and skill. What counts as a highball, like much in climbing, is open to interpretation. Most climbers say anything above 4.5 m (15 ft) is a highball; highballs can range up to 10.5–12 m (35–40 ft) in height, beyond which highball bouldering turns into free soloing.
Highball bouldering may have begun in 1961 when John Gill, without top-rope rehearsal, bouldered a steep face on an 11.5 m (37 ft) granite spire called "The Thimble". The difficulty level of this ascent (V4/5 or 5.12a) was extraordinary for that time. Gill's achievement initiated a wave of climbers making ascents of large boulders. Later, with the introduction and evolution of crash pads, climbers were able to push the limits of highball bouldering ever higher.
In 2002 Jason Kehl completed the first highball at double-digit V-difficulty, called Evilution, a 17 m (55 ft) boulder in the Buttermilks of California, earning the grade of V12. This climb marked the beginning of a new generation of highball climbing that pushed not only height but great difficulty. It is not unusual for climbers to rehearse such risky problems on top-rope, although this practice is not a settled issue.
Important milestone ascents in this style include:
Traditionally, competition in bouldering was informal, with climbers working out problems near the limits of their abilities, then challenging their peers to repeat these accomplishments. However, modern climbing gyms allow for a more formal competitive structure.
The International Federation of Sport Climbing (IFSC) employs an indoor format (although competitions can also take place in an outdoor setting) that breaks the competition into three rounds: qualifications, semi-finals, and finals. The rounds feature different sets of four to six boulder problems, and each competitor has a fixed amount of time to attempt each problem. At the end of each round, competitors are ranked by the number of completed problems with ties settled by the total number of attempts taken to solve the problems.
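The ranking rule described above (most problems completed, ties broken by fewest total attempts) translates directly into a sort key. The sketch below illustrates only that rule; it is a simplified assumption of the format, since actual IFSC scoring also counts intermediate "zone" holds, which are omitted here.

```python
from dataclasses import dataclass

@dataclass
class Competitor:
    name: str
    tops: int      # problems completed in the round
    attempts: int  # total attempts taken on the completed problems

def rank_round(field: list[Competitor]) -> list[Competitor]:
    # Most tops first; ties broken by fewest total attempts.
    # (Real IFSC rules also score "zone" holds, omitted in this sketch.)
    return sorted(field, key=lambda c: (-c.tops, c.attempts))

field = [
    Competitor("A", tops=3, attempts=7),
    Competitor("B", tops=4, attempts=9),
    Competitor("C", tops=3, attempts=5),
]
for place, climber in enumerate(rank_round(field), start=1):
    print(place, climber.name)  # B first, then C (fewer attempts), then A
```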
Some competitions only permit climbers a fixed number of attempts at each problem with a timed rest period in between. In an open-format competition, all climbers compete simultaneously, and are given a fixed amount of time to complete as many problems as possible. More points are awarded for more difficult problems, while points are deducted for multiple attempts on the same problem.
In 2012, the IFSC submitted a proposal to the International Olympic Committee (IOC) to include lead climbing in the 2020 Summer Olympics. The proposal was later revised to an "overall" competition, which would feature bouldering, lead climbing, and speed climbing. In May 2013, the IOC announced that climbing would not be added to the 2020 Olympic program.
In 2016, the IOC officially approved climbing as an Olympic sport "in order to appeal to younger audiences." The Olympics feature the earlier proposed overall competition: medalists compete in all three disciplines for the best overall score, which is calculated by multiplying the placements the climber attains in each discipline.
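Because the combined score is the product of a climber's placements, a lower product is better. A minimal sketch of that calculation, using made-up placements:

```python
from math import prod

def combined_score(placements: dict[str, int]) -> int:
    """Multiply a climber's placements across the three disciplines;
    the lowest product wins the combined event."""
    return prod(placements.values())

# Hypothetical placements: 5th in speed, 1st in bouldering, 2nd in lead.
score = combined_score({"speed": 5, "bouldering": 1, "lead": 2})
print(score)  # 10, which beats e.g. 3rd/3rd/3rd = 27
```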
Rock climbing first appeared as a sport in the late 1800s. Early records describe climbers engaging in what is now referred to as bouldering, not as a separate discipline, but as a playful form of training for larger ascents. It was during this time that the words "bouldering" and "problem" first appeared in British climbing literature. Oscar Eckenstein was an early proponent of the activity in the British Isles. In the early 20th century, the Fontainebleau area of France established itself as a prominent climbing area, where some of the first dedicated bleausards (or "boulderers") emerged. One of those athletes, Pierre Allain, invented the specialized shoe used for rock climbing.
In the late 1950s through the 1960s, American mathematician John Gill pushed the sport further and contributed several important innovations, distinguishing bouldering as a separate discipline in the process. Gill had previously pursued gymnastics, a sport which had an established scale of difficulty for movements and body positions, and he shifted the focus of bouldering from reaching the summit to navigating a set of holds. Gill developed a rating system that was closed-ended: B1 problems were as difficult as the most challenging roped routes of the time, B2 problems were more difficult, and B3 problems had been completed only once.
Gill introduced chalk as a method of keeping the climber's hands dry, promoted a dynamic climbing style, and emphasized the importance of strength training to complement skill. As Gill improved in ability and influence, his ideas became the norm.
In the 1980s, two important training tools emerged. The first was the bouldering mat, also referred to as a "crash pad", which protected against injuries from falling and enabled boulderers to climb in areas that would have been too dangerous otherwise. The second was the indoor climbing wall, which helped spread the sport to areas without outdoor climbing and allowed serious climbers to train year-round.
As the sport grew in popularity, new bouldering areas were developed throughout Europe and the United States, and more athletes began participating in bouldering competitions. The visibility of the sport greatly increased in the early 2000s, as YouTube videos and climbing blogs helped boulderers around the world to quickly learn techniques, find hard problems, and announce newly completed projects.
Notable boulder climbs are chronicled by the climbing media to track progress in boulder climbing standards and levels of technical difficulty; in contrast, the hardest traditional climbing routes tend to be of lower technical difficulty due to the additional burden of having to place protection during the course of the climb, and due to the lack of any possibility of using natural protection on the most extreme climbs.
As of November 2022, the world's hardest bouldering routes are Burden of Dreams by Nalle Hukkataival and Return of the Sleepwalker by Daniel Woods, both at proposed grades of 9A (V17). There are a number of routes with a confirmed climbing grade of 8C+ (V16), the first of which was Gioia by Christian Core in 2008 (and confirmed by Adam Ondra in 2011).
As of December 2021, female climbers Josune Bereziartu, Ashima Shiraishi, and Kaddi Lehmann had repeated boulder problems at the 8C (V15) grade. On July 28, 2023, Katie Lamb repeated Box Therapy at Rocky Mountain National Park, which at the time was graded 8C+ (V16), making her the first female climber of 8C+. However, after Brooke Raboutou repeated the climb in October 2023, and with the consensus of first ascensionist Daniel Woods and second ascensionist Drew Ruana, the boulder was downgraded to 8C (V15). This made Katie Lamb the fourth female climber of 8C (V15) and Brooke Raboutou the fifth.
Unlike other climbing sports, bouldering can be performed safely and effectively with very little equipment, an aspect which makes the discipline highly appealing, although opinions on the necessary minimum differ. While bouldering pioneer John Sherman asserted that "The only gear really needed to go bouldering is boulders," others suggest climbing shoes and a chalkbag – a small pouch where ground-up chalk is kept – as the bare minimum, and more experienced boulderers typically bring multiple pairs of climbing shoes, chalk, brushes, crash pads, and a skincare kit.
Climbing shoes have the most direct impact on performance. Besides protecting the climber's feet from rough surfaces, climbing shoes are designed to help the climber secure footholds. Climbing shoes typically fit much tighter than other athletic footwear and often curl the toes downwards to enable precise footwork. They are manufactured in a variety of styles to perform in different situations. For example, high-top shoes provide better protection for the ankle, while low-top shoes provide greater flexibility and freedom of movement. Stiffer shoes excel at securing small edges, whereas softer shoes provide greater sensitivity. The front of the shoe, called the "toe box", can be asymmetric, which performs well on overhanging rocks, or symmetric, which is better suited for vertical problems and slabs.
To absorb sweat, most boulderers use gymnastics chalk on their hands, stored in a chalkbag, which can be tied around the waist (also called sport climbing chalkbags), allowing the climber to reapply chalk during the climb. There are also versions of floor chalkbags (also called bouldering chalkbags), which are usually bigger than sport climbing chalkbags and are meant to be kept on the floor while climbing; this is because boulders do not usually have so many movements as to require chalking up more than once. Different sizes of brushes are used to remove excess chalk and debris from boulders in between climbs; they are often attached to the end of a long straight object in order to reach higher holds. Crash pads, also referred to as bouldering mats, are foam cushions placed on the ground to protect climbers from injury after falling.
Boulder problems are generally shorter than 20 feet (6.1 m) from ground to top. This makes the sport significantly safer than free solo climbing, which is also performed without ropes, but with no upper limit on the height of the climb. However, minor injuries are common in bouldering, particularly sprained ankles and wrists. Two factors contribute to the frequency of injuries in bouldering: first, boulder problems typically feature more difficult moves than other climbing disciplines, making falls more common. Second, without ropes to arrest the climber's descent, every fall will cause the climber to hit the ground.
To prevent injuries, boulderers position crash pads near the boulder to provide a softer landing, and use one or more spotters (people who watch the climber and, in the event of a fall, help redirect them towards the pads). Upon landing, boulderers employ falling techniques similar to those used in gymnastics: spreading the impact across the entire body to avoid bone fractures, and positioning limbs to allow joints to move freely throughout the impact.
Although every type of rock climbing requires a high level of strength and technique, bouldering is the most dynamic form of the sport, requiring the highest level of power and placing considerable strain on the body. Training routines that strengthen fingers and forearms are useful in preventing injuries such as tendonitis and ruptured ligaments.
However, as with other forms of climbing, bouldering technique begins with proper footwork. Leg muscles are significantly stronger than arm muscles; thus, proficient boulderers use their arms to maintain balance and body positioning as much as possible, relying on their legs to push them up the rock. Boulderers also keep their arms straight with their shoulders engaged whenever feasible, allowing their bones to support their body weight rather than their muscles.
Bouldering movements are described as either "static" or "dynamic". Static movements are those that are performed slowly, with the climber's position controlled by maintaining contact on the boulder with the other three limbs. Dynamic movements use the climber's momentum to reach holds that would be difficult or impossible to secure statically, with an increased risk of falling if the movement is not performed accurately.
Bouldering can damage vegetation that grows on rocks, such as moss and lichens. This can occur as a result of the climber intentionally cleaning the boulder, or unintentionally from repeated use of handholds and footholds. Vegetation on the ground surrounding the boulder can also be damaged from overuse, particularly by climbers laying down crash pads. Soil erosion can occur when boulderers trample vegetation while hiking off of established trails, or when they unearth small rocks near the boulder in an effort to make the landing zone safer in case of a fall. The repeated use of white climbing chalk can damage the rock surface of boulders and cliffs, particularly sandstone and other porous rock types, and the scrubbing of rocks to remove chalk can also degrade the rock surface. In order to prevent chalk from damaging the surface of the rock, it is important to remove it gently with a brush after a rock climbing session. Other environmental concerns include littering, improperly disposed feces, and graffiti. These issues have caused some land managers to prohibit bouldering, as was the case in Tea Garden, a popular bouldering area in Rocklands, South Africa.
4,115 | Boiling point | The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor.
The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum, i.e., under a lower pressure, has a lower boiling point than when that liquid is at atmospheric pressure. Because of this, water boils at 99.97 °C (211.95 °F) under standard pressure at sea level, but at 93.4 °C (200.1 °F) at 1,905 metres (6,250 ft) altitude. For a given pressure, different liquids will boil at different temperatures.
The normal boiling point (also called the atmospheric boiling point or the atmospheric pressure boiling point) of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, one atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of one bar.
The heat of vaporization is the energy required to transform a given quantity (a mol, kg, pound, etc.) of a substance from a liquid into a gas at a given pressure (often atmospheric pressure).
Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid.
A saturated liquid contains as much thermal energy as it can without boiling (or conversely a saturated vapor contains as little thermal energy as it can without condensing).
Saturation temperature means boiling point. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy. Any addition of thermal energy results in a phase transition.
If the pressure in a system remains constant (isobaric), a vapor at saturation temperature will begin to condense into its liquid phase as thermal energy (heat) is removed. Similarly, a liquid at saturation temperature and pressure will boil into its vapor phase as additional thermal energy is applied.
The boiling point corresponds to the temperature at which the vapor pressure of the liquid equals the surrounding environmental pressure. Thus, the boiling point is dependent on the pressure. Boiling points may be published with respect to the NIST, USA standard pressure of 101.325 kPa (or 1 atm), or the IUPAC standard pressure of 100.000 kPa. At higher elevations, where the atmospheric pressure is much lower, the boiling point is also lower. The boiling point increases with increased pressure up to the critical point, where the gas and liquid properties become identical. The boiling point cannot be increased beyond the critical point. Likewise, the boiling point decreases with decreasing pressure until the triple point is reached. The boiling point cannot be reduced below the triple point.
If the heat of vaporization and the vapor pressure of a liquid at a certain temperature are known, the boiling point can be calculated by using the Clausius–Clapeyron equation, thus: $T_\text{B} = \left(\frac{1}{T_{0}} - \frac{R\,\ln(P/P_{0})}{\Delta H_\text{vap}}\right)^{-1}$
where: $T_\text{B}$ is the boiling point at the pressure of interest $P$, $R$ is the ideal gas constant, $P_{0}$ is a reference pressure at which the boiling temperature $T_{0}$ is known (often 1 atm), and $\Delta H_\text{vap}$ is the heat of vaporization of the liquid.
Saturation pressure is the pressure for a corresponding saturation temperature at which a liquid boils into its vapor phase. Saturation pressure and saturation temperature have a direct relationship: as saturation pressure is increased, so is saturation temperature.
If the temperature in a system remains constant (an isothermal system), vapor at saturation pressure and temperature will begin to condense into its liquid phase as the system pressure is increased. Similarly, a liquid at saturation pressure and temperature will tend to flash into its vapor phase as system pressure is decreased.
There are two conventions regarding the standard boiling point of water: The normal boiling point is 99.97 °C (211.9 °F) at a pressure of 1 atm (i.e., 101.325 kPa). The IUPAC-recommended standard boiling point of water at a standard pressure of 100 kPa (1 bar) is 99.61 °C (211.3 °F). For comparison, on top of Mount Everest, at 8,848 m (29,029 ft) elevation, the pressure is about 34 kPa (255 Torr) and the boiling point of water is 71 °C (160 °F). The Celsius temperature scale was defined until 1954 by two points: 0 °C being defined by the water freezing point and 100 °C being defined by the water boiling point at standard atmospheric pressure.
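As an illustrative check of the Clausius–Clapeyron expression above (a rough sketch assuming water reference values of $T_{0} = 373.15$ K at $P_{0} = 101.325$ kPa and $\Delta H_\text{vap} \approx 40.66$ kJ/mol, and treating $\Delta H_\text{vap}$ as constant), the boiling point at the quoted Everest pressure of about 34 kPa works out to $T_\text{B} = \left(\frac{1}{373.15} - \frac{8.314\,\ln(34/101.325)}{40660}\right)^{-1} \approx 344\ \text{K} \approx 71\ ^{\circ}\text{C}$, consistent with the figure given above.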
The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point (i.e., the boiling point at atmospheric pressure) of the liquid.
The vapor pressure chart to the right has graphs of the vapor pressures versus temperatures for a variety of liquids. As can be seen in the chart, the liquids with the highest vapor pressures have the lowest normal boiling points.
For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point (−24.2 °C), which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure.
The critical point of a liquid is the highest temperature (and pressure) it will actually boil at.
See also Vapour pressure of water.
The element with the lowest boiling point is helium. Both the boiling points of rhenium and tungsten exceed 5000 K at standard pressure; because it is difficult to measure extreme temperatures precisely without bias, both have been cited in the literature as having the higher boiling point.
As can be seen from the above plot of the logarithm of the vapor pressure vs. the temperature for any given pure chemical compound, its normal boiling point can serve as an indication of that compound's overall volatility. A given pure compound has only one normal boiling point, if any, and a compound's normal boiling point and melting point can serve as characteristic physical properties for that compound, listed in reference books. The higher a compound's normal boiling point, the less volatile that compound is overall, and conversely, the lower a compound's normal boiling point, the more volatile that compound is overall. Some compounds decompose at higher temperatures before reaching their normal boiling point, or sometimes even their melting point. For a stable compound, the boiling point ranges from its triple point to its critical point, depending on the external pressure. Beyond its triple point, a compound's normal boiling point, if any, is higher than its melting point. Beyond the critical point, a compound's liquid and vapor phases merge into one phase, which may be called a superheated gas. At any given temperature, if a compound's normal boiling point is lower, then that compound will generally exist as a gas at atmospheric external pressure. If the compound's normal boiling point is higher, then that compound can exist as a liquid or solid at that given temperature at atmospheric external pressure, and will so exist in equilibrium with its vapor (if volatile) if its vapors are contained. If a compound's vapors are not contained, then some volatile compounds can eventually evaporate away in spite of their higher boiling points.
In general, compounds with ionic bonds have high normal boiling points, if they do not decompose before reaching such high temperatures. Many metals have high boiling points, but not all. Very generally—with other factors being equal—in compounds with covalently bonded molecules, as the size of the molecule (or molecular mass) increases, the normal boiling point increases. When the molecular size becomes that of a macromolecule, polymer, or otherwise very large, the compound often decomposes at high temperature before the boiling point is reached. Another factor that affects the normal boiling point of a compound is the polarity of its molecules. As the polarity of a compound's molecules increases, its normal boiling point increases, other factors being equal. Closely related is the ability of a molecule to form hydrogen bonds (in the liquid state), which makes it harder for molecules to leave the liquid state and thus increases the normal boiling point of the compound. Simple carboxylic acids dimerize by forming hydrogen bonds between molecules. A minor factor affecting boiling points is the shape of a molecule. Making the shape of a molecule more compact tends to lower the normal boiling point slightly compared to an equivalent molecule with more surface area.
Most volatile compounds (anywhere near ambient temperatures) go through an intermediate liquid phase while warming up from a solid phase to eventually transform to a vapor phase. By comparison to boiling, a sublimation is a physical transformation in which a solid turns directly into vapor, which happens in a few select cases such as with carbon dioxide at atmospheric pressure. For such compounds, a sublimation point is a temperature at which a solid turning directly into vapor has a vapor pressure equal to the external pressure.
In the preceding section, boiling points of pure compounds were covered. Vapor pressures and boiling points of substances can be affected by the presence of dissolved impurities (solutes) or other miscible compounds, the degree of effect depending on the concentration of the impurities or other compounds. The presence of non-volatile impurities such as salts or compounds of a volatility far lower than the main component compound decreases its mole fraction and the solution's volatility, and thus raises the normal boiling point in proportion to the concentration of the solutes. This effect is called boiling point elevation. As a common example, salt water boils at a higher temperature than pure water.
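The size of this effect can be sketched with the dilute-solution relation $\Delta T_\text{b} = i\,K_\text{b}\,m$ (an idealized estimate, assuming complete dissociation and ideal behavior), where $K_\text{b} \approx 0.512\ \text{K·kg/mol}$ for water, $m$ is the solute molality, and $i$ is the van 't Hoff factor. For a seawater-like solution of roughly 0.6 mol NaCl per kilogram of water ($i \approx 2$), this gives $\Delta T_\text{b} \approx 2 \times 0.512 \times 0.6 \approx 0.6$ K, so such salt water boils only about half a kelvin above pure water.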
In other mixtures of miscible compounds (components), there may be two or more components of varying volatility, each having its own pure component boiling point at any given pressure. The presence of other volatile components in a mixture affects the vapor pressures and thus boiling points and dew points of all the components in the mixture. The dew point is a temperature at which a vapor condenses into a liquid. Furthermore, at any given temperature, the composition of the vapor is different from the composition of the liquid in most such cases. In order to illustrate these effects between the volatile components in a mixture, a boiling point diagram is commonly used. Distillation is a process of boiling and [usually] condensation which takes advantage of these differences in composition between liquid and vapor phases. | [
{
"paragraph_id": 0,
"text": "The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum, i.e., under a lower pressure, has a lower boiling point than when that liquid is at atmospheric pressure. Because of this, water boils at 99.97 °C (211.95 °F) under standard pressure at sea level, but at 93.4 °C (200.1 °F) at 1,905 metres (6,250 ft) altitude. For a given pressure, different liquids will boil at different temperatures.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The normal boiling point (also called the atmospheric boiling point or the atmospheric pressure boiling point) of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, one atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of one bar.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The heat of vaporization is the energy required to transform a given quantity (a mol, kg, pound, etc.) of a substance from a liquid into a gas at a given pressure (often atmospheric pressure).",
"title": ""
},
{
"paragraph_id": 4,
"text": "Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid.",
"title": ""
},
{
"paragraph_id": 5,
"text": "A saturated liquid contains as much thermal energy as it can without boiling (or conversely a saturated vapor contains as little thermal energy as it can without condensing).",
"title": "Saturation temperature and pressure"
},
{
"paragraph_id": 6,
"text": "Saturation temperature means boiling point. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy. Any addition of thermal energy results in a phase transition.",
"title": "Saturation temperature and pressure"
},
{
"paragraph_id": 7,
"text": "If the pressure in a system remains constant (isobaric), a vapor at saturation temperature will begin to condense into its liquid phase as thermal energy (heat) is removed. Similarly, a liquid at saturation temperature and pressure will boil into its vapor phase as additional thermal energy is applied.",
"title": "Saturation temperature and pressure"
},
{
"paragraph_id": 8,
"text": "The boiling point corresponds to the temperature at which the vapor pressure of the liquid equals the surrounding environmental pressure. Thus, the boiling point is dependent on the pressure. Boiling points may be published with respect to the NIST, USA standard pressure of 101.325 kPa (or 1 atm), or the IUPAC standard pressure of 100.000 kPa. At higher elevations, where the atmospheric pressure is much lower, the boiling point is also lower. The boiling point increases with increased pressure up to the critical point, where the gas and liquid properties become identical. The boiling point cannot be increased beyond the critical point. Likewise, the boiling point decreases with decreasing pressure until the triple point is reached. The boiling point cannot be reduced below the triple point.",
"title": "Saturation temperature and pressure"
},
{
"paragraph_id": 9,
"text": "If the heat of vaporization and the vapor pressure of a liquid at a certain temperature are known, the boiling point can be calculated by using the Clausius–Clapeyron equation, thus:",
"title": "Saturation temperature and pressure"
},
{
"paragraph_id": 10,
"text": "where:",
"title": "Saturation temperature and pressure"
},
{
"paragraph_id": 11,
"text": "Saturation pressure is the pressure for a corresponding saturation temperature at which a liquid boils into its vapor phase. Saturation pressure and saturation temperature have a direct relationship: as saturation pressure is increased, so is saturation temperature.",
"title": "Saturation temperature and pressure"
},
{
"paragraph_id": 12,
"text": "If the temperature in a system remains constant (an isothermal system), vapor at saturation pressure and temperature will begin to condense into its liquid phase as the system pressure is increased. Similarly, a liquid at saturation pressure and temperature will tend to flash into its vapor phase as system pressure is decreased.",
"title": "Saturation temperature and pressure"
},
{
"paragraph_id": 13,
"text": "There are two conventions regarding the standard boiling point of water: The normal boiling point is 99.97 °C (211.9 °F) at a pressure of 1 atm (i.e., 101.325 kPa). The IUPAC-recommended standard boiling point of water at a standard pressure of 100 kPa (1 bar) is 99.61 °C (211.3 °F). For comparison, on top of Mount Everest, at 8,848 m (29,029 ft) elevation, the pressure is about 34 kPa (255 Torr) and the boiling point of water is 71 °C (160 °F). The Celsius temperature scale was defined until 1954 by two points: 0 °C being defined by the water freezing point and 100 °C being defined by the water boiling point at standard atmospheric pressure.",
"title": "Saturation temperature and pressure"
},
{
"paragraph_id": 14,
"text": "The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point (i.e., the boiling point at atmospheric pressure) of the liquid.",
"title": "Relation between the normal boiling point and the vapor pressure of liquids"
},
{
"paragraph_id": 15,
"text": "The vapor pressure chart to the right has graphs of the vapor pressures versus temperatures for a variety of liquids. As can be seen in the chart, the liquids with the highest vapor pressures have the lowest normal boiling points.",
"title": "Relation between the normal boiling point and the vapor pressure of liquids"
},
{
"paragraph_id": 16,
"text": "For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point (−24.2 °C), which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure.",
"title": "Relation between the normal boiling point and the vapor pressure of liquids"
},
{
"paragraph_id": 17,
"text": "The critical point of a liquid is the highest temperature (and pressure) it will actually boil at.",
"title": "Relation between the normal boiling point and the vapor pressure of liquids"
},
{
"paragraph_id": 18,
"text": "See also Vapour pressure of water.",
"title": "Relation between the normal boiling point and the vapor pressure of liquids"
},
{
"paragraph_id": 19,
"text": "The element with the lowest boiling point is helium. Both the boiling points of rhenium and tungsten exceed 5000 K at standard pressure; because it is difficult to measure extreme temperatures precisely without bias, both have been cited in the literature as having the higher boiling point.",
"title": "Boiling point of chemical elements"
},
{
"paragraph_id": 20,
"text": "As can be seen from the above plot of the logarithm of the vapor pressure vs. the temperature for any given pure chemical compound, its normal boiling point can serve as an indication of that compound's overall volatility. A given pure compound has only one normal boiling point, if any, and a compound's normal boiling point and melting point can serve as characteristic physical properties for that compound, listed in reference books. The higher a compound's normal boiling point, the less volatile that compound is overall, and conversely, the lower a compound's normal boiling point, the more volatile that compound is overall. Some compounds decompose at higher temperatures before reaching their normal boiling point, or sometimes even their melting point. For a stable compound, the boiling point ranges from its triple point to its critical point, depending on the external pressure. Beyond its triple point, a compound's normal boiling point, if any, is higher than its melting point. Beyond the critical point, a compound's liquid and vapor phases merge into one phase, which may be called a superheated gas. At any given temperature, if a compound's normal boiling point is lower, then that compound will generally exist as a gas at atmospheric external pressure. If the compound's normal boiling point is higher, then that compound can exist as a liquid or solid at that given temperature at atmospheric external pressure, and will so exist in equilibrium with its vapor (if volatile) if its vapors are contained. If a compound's vapors are not contained, then some volatile compounds can eventually evaporate away in spite of their higher boiling points.",
"title": "Boiling point as a reference property of a pure compound"
},
{
"paragraph_id": 21,
"text": "In general, compounds with ionic bonds have high normal boiling points, if they do not decompose before reaching such high temperatures. Many metals have high boiling points, but not all. Very generally—with other factors being equal—in compounds with covalently bonded molecules, as the size of the molecule (or molecular mass) increases, the normal boiling point increases. When the molecular size becomes that of a macromolecule, polymer, or otherwise very large, the compound often decomposes at high temperature before the boiling point is reached. Another factor that affects the normal boiling point of a compound is the polarity of its molecules. As the polarity of a compound's molecules increases, its normal boiling point increases, other factors being equal. Closely related is the ability of a molecule to form hydrogen bonds (in the liquid state), which makes it harder for molecules to leave the liquid state and thus increases the normal boiling point of the compound. Simple carboxylic acids dimerize by forming hydrogen bonds between molecules. A minor factor affecting boiling points is the shape of a molecule. Making the shape of a molecule more compact tends to lower the normal boiling point slightly compared to an equivalent molecule with more surface area.",
"title": "Boiling point as a reference property of a pure compound"
},
{
"paragraph_id": 22,
"text": "Most volatile compounds (anywhere near ambient temperatures) go through an intermediate liquid phase while warming up from a solid phase to eventually transform to a vapor phase. By comparison to boiling, a sublimation is a physical transformation in which a solid turns directly into vapor, which happens in a few select cases such as with carbon dioxide at atmospheric pressure. For such compounds, a sublimation point is a temperature at which a solid turning directly into vapor has a vapor pressure equal to the external pressure.",
"title": "Boiling point as a reference property of a pure compound"
},
{
"paragraph_id": 23,
"text": "In the preceding section, boiling points of pure compounds were covered. Vapor pressures and boiling points of substances can be affected by the presence of dissolved impurities (solutes) or other miscible compounds, the degree of effect depending on the concentration of the impurities or other compounds. The presence of non-volatile impurities such as salts or compounds of a volatility far lower than the main component compound decreases its mole fraction and the solution's volatility, and thus raises the normal boiling point in proportion to the concentration of the solutes. This effect is called boiling point elevation. As a common example, salt water boils at a higher temperature than pure water.",
"title": "Impurities and mixtures"
},
{
"paragraph_id": 24,
"text": "In other mixtures of miscible compounds (components), there may be two or more components of varying volatility, each having its own pure component boiling point at any given pressure. The presence of other volatile components in a mixture affects the vapor pressures and thus boiling points and dew points of all the components in the mixture. The dew point is a temperature at which a vapor condenses into a liquid. Furthermore, at any given temperature, the composition of the vapor is different from the composition of the liquid in most such cases. In order to illustrate these effects between the volatile components in a mixture, a boiling point diagram is commonly used. Distillation is a process of boiling and [usually] condensation which takes advantage of these differences in composition between liquid and vapor phases.",
"title": "Impurities and mixtures"
}
] | The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor. The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum, i.e., under a lower pressure, has a lower boiling point than when that liquid is at atmospheric pressure. Because of this, water boils at 99.97 °C (211.95 °F) under standard pressure at sea level, but at 93.4 °C (200.1 °F) at 1,905 metres (6,250 ft) altitude. For a given pressure, different liquids will boil at different temperatures. The normal boiling point of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, one atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of one bar. The heat of vaporization is the energy required to transform a given quantity of a substance from a liquid into a gas at a given pressure. Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid. | 2001-09-05T06:50:35Z | 2023-11-24T21:20:09Z | [
"Template:Cite NSRW",
"Template:Cite web",
"Template:Convert",
"Template:Periodic table (boiling point)",
"Template:Cite book",
"Template:Authority control",
"Template:Short description",
"Template:Main",
"Template:Nobold",
"Template:Cite journal",
"Template:Phase of matter",
"Template:About",
"Template:Further",
"Template:Reflist"
] | https://en.wikipedia.org/wiki/Boiling_point |
4,116 | Big Bang | The Big Bang is a physical theory that describes how the universe expanded from an initial state of high density and temperature. It was first proposed in 1927 by Roman Catholic priest and physicist Georges Lemaître. Various cosmological models of the Big Bang explain the evolution of the observable universe from the earliest known periods through its subsequent large-scale form. These models offer a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. The overall uniformity of the Universe, known as the flatness problem, is explained through cosmic inflation: a sudden and very rapid expansion of space during the earliest moments. However, physics currently lacks a widely accepted theory of quantum gravity that can successfully model the earliest conditions of the Big Bang.
Crucially, these models are compatible with the Hubble–Lemaître law—the observation that the farther away a galaxy is, the faster it is moving away from Earth. Extrapolating this cosmic expansion backwards in time using the known laws of physics, the models describe an increasingly concentrated cosmos preceded by a singularity in which space and time lose meaning (typically named "the Big Bang singularity"). In 1964 the CMB was discovered, which convinced many cosmologists that the competing steady-state model of cosmic evolution was falsified, since the Big Bang models predict a uniform background radiation caused by high temperatures and densities in the distant past. A wide range of empirical evidence strongly favors the Big Bang event, which is now essentially universally accepted. Detailed measurements of the expansion rate of the universe place the Big Bang singularity at an estimated 13.787±0.020 billion years ago, which is considered the age of the universe.
There remain aspects of the observed universe that are not yet adequately explained by the Big Bang models. After its initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later atoms. The unequal abundances of matter and antimatter that allowed this to occur is an unexplained effect known as baryon asymmetry. These primordial elements—mostly hydrogen, with some helium and lithium—later coalesced through gravity, forming early stars and galaxies. Astronomers observe the gravitational effects of an unknown dark matter surrounding galaxies. Most of the gravitational potential in the universe seems to be in this form, and the Big Bang models and various observations indicate that this excess gravitational potential is not created by baryonic matter, such as normal atoms. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to an unexplained phenomenon known as dark energy.
The Big Bang models offer a comprehensive explanation for a broad range of observed phenomena, including the abundances of the light elements, the CMB, large-scale structure, and Hubble's law. The models depend on two major assumptions: the universality of physical laws and the cosmological principle. The universality of physical laws is one of the underlying principles of the theory of relativity. The cosmological principle states that on large scales the universe is homogeneous and isotropic—appearing the same in all directions regardless of location.
These ideas were initially taken as postulates, but later efforts were made to test each of them. For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine-structure constant over much of the age of the universe is of order $10^{-5}$. Also, general relativity has passed stringent tests on the scale of the Solar System and binary stars.
The large-scale universe appears isotropic as viewed from Earth. If it is indeed isotropic, the cosmological principle can be derived from the simpler Copernican principle, which states that there is no preferred (or special) observer or vantage point. To this end, the cosmological principle has been confirmed to a level of $10^{-5}$ via observations of the temperature of the CMB. At the scale of the CMB horizon, the universe has been measured to be homogeneous with an upper bound on the order of 10% inhomogeneity, as of 1995.
An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not yet had time to reach Earth. This places a limit or a past horizon on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the FLRW model that describes our universe.
Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well.
Some processes in the early universe occurred too slowly, compared to the expansion rate of the universe, to reach approximate thermodynamic equilibrium. Others were fast enough to reach thermalization. The parameter usually used to find out whether a process in the very early universe has reached thermal equilibrium is the ratio between the rate of the process (usually rate of collisions between particles) and the Hubble parameter. The larger the ratio, the more time particles had to thermalize before they were too far away from each other.
According to the Big Bang models, the universe at the beginning was very hot and very compact, and since then it has been expanding and cooling.
Extrapolation of the expansion of the universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past. This irregular behavior, known as the gravitational singularity, indicates that general relativity is not an adequate description of the laws of physics in this regime. Models based on general relativity alone cannot fully extrapolate toward the singularity. In some proposals, such as the emergent Universe models, the singularity is replaced by another cosmological epoch. A different approach identifies the initial singularity as a singularity predicted by some models of the Big Bang theory to have existed before the Big Bang.
This primordial singularity is itself sometimes called "the Big Bang", but the term can also refer to a more generic early hot, dense phase of the universe. In either case, "the Big Bang" as an event is also colloquially referred to as the "birth" of our universe since it represents the point in history where the universe can be verified to have entered into a regime where the laws of physics as we understand them (specifically general relativity and the Standard Model of particle physics) work. Based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background, the time that has passed since that event—known as the "age of the universe"—is 13.8 billion years.
Despite being extremely dense at this time—far denser than is usually required to form a black hole—the universe did not re-collapse into a singularity. Commonly used calculations and limits for explaining gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not apply to rapidly expanding space such as the Big Bang. Since the early universe did not immediately collapse into a multitude of black holes, matter at that time must have been very evenly distributed with a negligible density gradient.
The earliest phases of the Big Bang are subject to much speculation, since astronomical data about them are not available. In the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling. The period up to $10^{-43}$ seconds into the expansion, the Planck epoch, was a phase in which the four fundamental forces—the electromagnetic force, the strong nuclear force, the weak nuclear force, and the gravitational force—were unified as one. In this stage, the characteristic scale length of the universe was the Planck length, $1.6\times10^{-35}$ m, and the universe consequently had a temperature of approximately $10^{32}$ degrees Celsius. Even the very concept of a particle breaks down in these conditions. A proper understanding of this period awaits the development of a theory of quantum gravity. The Planck epoch was succeeded by the grand unification epoch beginning at $10^{-43}$ seconds, where gravitation separated from the other forces as the universe's temperature fell.
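For context, the Planck-scale figures quoted here follow from standard combinations of the fundamental constants (a standard definition, not derived in this article): $l_\text{P} = \sqrt{\hbar G/c^{3}} \approx 1.6\times10^{-35}\ \text{m}$ and $t_\text{P} = \sqrt{\hbar G/c^{5}} \approx 5.4\times10^{-44}\ \text{s}$, which is why the Planck epoch is placed at times of order $10^{-43}$ seconds.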
At approximately $10^{-37}$ seconds into the expansion, a phase transition caused a cosmic inflation, during which the universe grew exponentially, unconstrained by the light speed invariance, and temperatures dropped by a factor of 100,000. This concept is motivated by the flatness problem, where the density of matter and energy is very close to the critical density needed to produce a flat universe. That is, the shape of the universe has no overall geometric curvature due to gravitational influence. Microscopic quantum fluctuations that occurred because of Heisenberg's uncertainty principle were "frozen in" by inflation, becoming amplified into the seeds that would later form the large-scale structure of the universe. At a time around $10^{-36}$ seconds, the electroweak epoch begins when the strong nuclear force separates from the other forces, with only the electromagnetic force and weak nuclear force remaining unified.
Inflation stopped locally at around $10^{-33}$ to $10^{-32}$ seconds, with the observable universe's volume having increased by a factor of at least $10^{78}$. Reheating occurred until the universe obtained the temperatures required for the production of a quark–gluon plasma as well as all other elementary particles. Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe.
The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry-breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form, with the electromagnetic force and weak nuclear force separating at about $10^{-12}$ seconds.
After about $10^{-11}$ seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle accelerators. At about $10^{-6}$ seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was no longer high enough to create either new proton–antiproton or neutron–antineutron pairs. A mass annihilation immediately followed, leaving just one in $10^{8}$ of the original matter particles and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos).
A few minutes into the expansion, when the temperature was about a billion kelvin and the density of matter in the universe was comparable to the current density of Earth's atmosphere, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis (BBN). Most protons remained uncombined as hydrogen nuclei.
As the universe cooled, the rest energy density of matter came to gravitationally dominate that of the photon radiation. After about 379,000 years, the electrons and nuclei combined into atoms (mostly hydrogen), which were able to emit radiation. This relic radiation, which continued through space largely unimpeded, is known as the cosmic microwave background.
Over a long period of time, the slightly denser regions of the uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter (CDM), warm dark matter, hot dark matter, and baryonic matter. The best measurements available, from the Wilkinson Microwave Anisotropy Probe (WMAP), show that the data is well-fit by a Lambda-CDM model in which dark matter is assumed to be cold. (Warm dark matter is ruled out by early reionization.) This CDM is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%.
In an "extended model" which includes hot dark matter in the form of neutrinos, then the "physical baryon density" Ω b h 2 {\displaystyle \Omega _{\text{b}}h^{2}} is estimated at 0.023. (This is different from the 'baryon density' Ω b {\displaystyle \Omega _{\text{b}}} expressed as a fraction of the total matter/energy density, which is about 0.046.) The corresponding cold dark matter density Ω c h 2 {\displaystyle \Omega _{\text{c}}h^{2}} is about 0.11, and the corresponding neutrino density Ω v h 2 {\displaystyle \Omega _{\text{v}}h^{2}} is estimated to be less than 0.0062.
Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which appears to homogeneously permeate all of space. Observations suggest that 73% of the total energy density of the present day universe is in this form. When the universe was very young it was likely infused with dark energy, but with everything closer together, gravity predominated, braking the expansion. Eventually, after billions of years of expansion, the declining density of matter relative to the density of dark energy allowed the expansion of the universe to begin to accelerate.
Dark energy in its simplest formulation is modeled by a cosmological constant term in Einstein field equations of general relativity, but its composition and mechanism are unknown. More generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both through observation and theory.
All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the lambda-CDM model of cosmology, which uses the independent frameworks of quantum mechanics and general relativity. There are no easily testable models that would describe the situation prior to approximately $10^{-15}$ seconds. Understanding this earliest of eras in the history of the universe is one of the greatest unsolved problems in physics.
English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a talk for a March 1949 BBC Radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." However, it did not catch on until the 1970s.
It is popularly reported that Hoyle, who favored an alternative "steady-state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. Helge Kragh writes that the evidence for the claim that it was meant as a pejorative is "unconvincing", and mentions a number of indications that it was not a pejorative.
The term itself has been argued to be a misnomer because it evokes an explosion. The argument is that whereas an explosion suggests expansion into a surrounding space, the Big Bang only describes the intrinsic expansion of the contents of the universe. Another issue pointed out by Santhosh Mathew is that bang implies sound, which is not an important feature of the model. An attempt to find a more suitable alternative was not successful.
The Big Bang models developed from observations of the structure of the universe and from theoretical considerations. In 1912, Vesto Slipher measured the first Doppler shift of a "spiral nebula" (spiral nebula is the obsolete term for spiral galaxies), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from the Einstein field equations, showing that the universe might be expanding in contrast to the static universe model advocated by Albert Einstein at that time.
In 1924, American astronomer Edwin Hubble's measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Starting that same year, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the 100-inch (2.5 m) Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929, Hubble discovered a correlation between distance and recessional velocity—now known as Hubble's law.
Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the recession of the nebulae was due to the expansion of the universe. He inferred the relation that Hubble would later observe, given the cosmological principle. In 1931, Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant that the further in the past the smaller the universe was, until at some finite time in the past all the mass of the universe was concentrated into a single point, a "primeval atom" where and when the fabric of time and space came into existence.
In the 1920s and 1930s, almost every major cosmologist preferred an eternal steady-state universe, and several complained that the beginning of time implied by the Big Bang imported religious concepts into physics; this objection was later repeated by supporters of the steady-state theory. This perception was enhanced by the fact that the originator of the Big Bang concept, Lemaître, was a Roman Catholic priest. Arthur Eddington agreed with Aristotle that the universe did not have a beginning in time, viz., that matter is eternal. A beginning in time was "repugnant" to him. Lemaître, however, disagreed:
If the world has begun with a single quantum, the notions of space and time would altogether fail to have any meaning at the beginning; they would only begin to have a sensible meaning when the original quantum had been divided into a sufficient number of quanta. If this suggestion is correct, the beginning of the world happened a little before the beginning of space and time.
During the 1930s, other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model, the oscillatory universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard C. Tolman) and Fritz Zwicky's tired light hypothesis.
After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady-state model, whereby new matter would be created as the universe seemed to expand. In this model the universe is roughly the same at any point in time. The other was Lemaître's Big Bang theory, advocated and developed by George Gamow, who introduced BBN and whose associates, Ralph Alpher and Robert Herman, predicted the CMB. Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this big bang idea" during a BBC Radio broadcast in March 1949. For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor Big Bang over steady state. The discovery and confirmation of the CMB in 1964 secured the Big Bang as the best theory of the origin and evolution of the universe.
In 1968 and 1970, Roger Penrose, Stephen Hawking, and George F. R. Ellis published papers where they showed that mathematical singularities were an inevitable initial condition of relativistic models of the Big Bang. Then, from the 1970s to the 1990s, cosmologists worked on characterizing the features of the Big Bang universe and resolving outstanding problems. In 1981, Alan Guth made a breakthrough in theoretical work on resolving certain outstanding theoretical problems in the Big Bang models with the introduction of an epoch of rapid expansion in the early universe he called "inflation". Meanwhile, during these decades, two questions in observational cosmology that generated much discussion and disagreement were over the precise values of the Hubble Constant and the matter-density of the universe (before the discovery of dark energy, thought to be the key predictor for the eventual fate of the universe).
In the mid-1990s, observations of certain globular clusters appeared to indicate that they were about 15 billion years old, which conflicted with most then-current estimates of the age of the universe (and indeed with the age measured today). This issue was later resolved when new computer simulations, which included the effects of mass loss due to stellar winds, indicated a much younger age for globular clusters.
Significant progress in Big Bang cosmology has been made since the late 1990s as a result of advances in telescope technology as well as the analysis of data from satellites such as the Cosmic Background Explorer (COBE), the Hubble Space Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating.
"[The] big bang picture is too firmly grounded in data from every area to be proved invalid in its general features."
— Lawrence Krauss
The earliest and most direct observational evidence of the validity of the theory includes the expansion of the universe according to Hubble's law (as indicated by the redshifts of galaxies), the discovery and measurement of the cosmic microwave background, and the relative abundances of light elements produced by Big Bang nucleosynthesis (BBN). More recent evidence includes observations of galaxy formation and evolution, and the distribution of large-scale cosmic structures. These are sometimes called the "four pillars" of the Big Bang models.
Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently the subject of most active laboratory investigations. Remaining issues include the cuspy halo problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible. Inflation and baryogenesis remain more speculative features of current Big Bang models. Viable, quantitative explanations for such phenomena are still being sought. These are unsolved problems in physics.
Observations of distant galaxies and quasars show that these objects are redshifted: the light emitted from them has been shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed: $v = H_{0}D$, where $v$ is the recessional velocity of the galaxy, $D$ is the proper distance to the galaxy, and $H_{0}$ is the Hubble constant.
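As a simple worked example (taking a round, assumed value of $H_{0} \approx 70\ \text{km/s/Mpc}$, within the range of modern measurements), a galaxy at a proper distance of $D = 100$ Mpc recedes at $v = H_{0}D \approx 7{,}000\ \text{km/s}$, a little over 2% of the speed of light.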
Hubble's law implies that the universe is uniformly expanding everywhere. This cosmic expansion was predicted from general relativity by Friedmann in 1922 and Lemaître in 1927, well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang model as developed by Friedmann, Lemaître, Robertson, and Walker.
The theory requires the relation $v = HD$ to hold at all times, where $D$ is the proper distance, $v$ is the recessional velocity, and $v$, $H$, and $D$ vary as the universe expands (hence we write $H_{0}$ to denote the present-day Hubble "constant"). For distances much smaller than the size of the observable universe, the Hubble redshift can be thought of as the Doppler shift corresponding to the recession velocity $v$. For distances comparable to the size of the observable universe, the attribution of the cosmological redshift becomes more ambiguous, although its interpretation as a kinematic Doppler shift remains the most natural one.
An unexplained discrepancy with the determination of the Hubble constant is known as Hubble tension. Techniques based on observation of the CMB suggest a lower value of this constant compared to the quantity derived from measurements based on the cosmic distance ladder.
In 1964, Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an omnidirectional signal in the microwave band. Their discovery provided substantial confirmation of the big-bang predictions by Alpher, Herman and Gamow around 1950. Through the 1970s, the radiation was found to be approximately consistent with a blackbody spectrum in all directions; this spectrum has been redshifted by the expansion of the universe, and today corresponds to approximately 2.725 K. This tipped the balance of evidence in favor of the Big Bang model, and Penzias and Wilson were awarded the 1978 Nobel Prize in Physics.
The surface of last scattering corresponding to emission of the CMB occurs shortly after recombination, the epoch when neutral hydrogen becomes stable. Prior to this, the universe comprised a hot dense photon-baryon plasma sea where photons were quickly scattered from free charged particles. At around 372±14 kyr, the mean free path for a photon becomes long enough to reach the present day and the universe becomes transparent.
In 1989, NASA launched COBE, which made two major advances: in 1990, high-precision spectrum measurements showed that the CMB frequency spectrum is an almost perfect blackbody with no deviations at a level of 1 part in $10^{4}$, and measured a residual temperature of 2.726 K (more recent measurements have revised this figure down slightly to 2.7255 K); then in 1992, further COBE measurements discovered tiny fluctuations (anisotropies) in the CMB temperature across the sky, at a level of about one part in $10^{5}$. John C. Mather and George Smoot were awarded the 2006 Nobel Prize in Physics for their leadership in these results.
During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. In 2000–2001, several experiments, most notably BOOMERanG, found the shape of the universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies.
In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general. The Planck space probe was launched in May 2009. Other ground and balloon-based cosmic microwave background experiments are ongoing.
Using Big Bang models, it is possible to calculate the expected concentration of the isotopes helium-4 ($^{4}$He), helium-3 ($^{3}$He), deuterium ($^{2}$H), and lithium-7 ($^{7}$Li) in the universe as ratios to the amount of ordinary hydrogen. The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by abundance) are about 0.25 for $^{4}$He:H, about $10^{-3}$ for $^{2}$H:H, about $10^{-4}$ for $^{3}$He:H, and about $10^{-9}$ for $^{7}$Li:H.
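The helium figure can be motivated by a back-of-the-envelope argument (a sketch using the commonly cited freeze-out value, not a full BBN calculation): if the neutron-to-proton ratio at nucleosynthesis is $n/p \approx 1/7$ and essentially all neutrons end up bound in $^{4}$He, the predicted helium mass fraction is $Y_\text{p} = \frac{2(n/p)}{1+n/p} \approx \frac{2/7}{8/7} = 0.25$.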
The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for $^{4}$He, and off by a factor of two for $^{7}$Li (this anomaly is known as the cosmological lithium problem); in the latter two cases, there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by BBN is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed, there is no obvious reason outside of the Big Bang that, for example, the young universe before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products, should have more helium than deuterium or more deuterium than $^{3}$He, and in constant ratios, too.
Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current Big Bang models. A combination of observations and theory suggests that the first quasars and galaxies formed within a billion years after the Big Bang, and since then, larger structures have been forming, such as galaxy clusters and superclusters.
Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions, and larger structures agree well with Big Bang simulations of the formation of structure in the universe, and are helping to complete details of the theory.
In 2011, astronomers found what they believe to be pristine clouds of primordial gas by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain heavy elements that are formed in stars. Although the analysis was sensitive to carbon, oxygen, and silicon, these three elements were not detected in these two clouds. Since the clouds of gas have no detectable levels of heavy elements, they likely formed in the first few minutes after the Big Bang, during BBN.
The age of the universe as estimated from the Hubble expansion and the CMB is now in agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars. It is also in agreement with age estimates based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background. The agreement of independent measurements of this age supports the Lambda-CDM (ΛCDM) model, since the model is used to relate some of the measurements to an age estimate, and all estimates agree. Still, some observations of objects from the relatively early universe (in particular quasar APM 08279+5255) raise concern as to whether these objects had enough time to form so early in the ΛCDM model.
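As a rough cross-check of that age, the expansion history of a flat ΛCDM model can be integrated numerically: t0 = ∫ da / (a H(a)) from a = 0 to a = 1. A sketch assuming round-number parameters (H0 = 67.7 km/s/Mpc, Ωm = 0.31, ΩΛ = 0.69), not values taken from any particular survey:

    # Age of a flat Lambda-CDM universe: t0 = integral_0^1 da / (a * H(a)),
    # with H(a) = H0 * sqrt(Omega_m / a^3 + Omega_L). Radiation is neglected,
    # which changes the result only at the ~0.1% level.
    from scipy.integrate import quad

    H0 = 67.7 * 1.0e3 / 3.0857e22          # km/s/Mpc converted to 1/s
    omega_m, omega_l = 0.31, 0.69

    def integrand(a):
        return 1.0 / (a * H0 * (omega_m / a**3 + omega_l) ** 0.5)

    t0_seconds, _ = quad(integrand, 1e-10, 1.0)
    print(f"age ~ {t0_seconds / 3.156e16:.2f} billion years")  # ~13.8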
The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this to be roughly true, but this effect depends on cluster properties that do change with cosmic time, making precise measurements difficult.
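The prediction being tested here is the simple scaling T(z) = T0 (1 + z). A one-line check against today's measured temperature, with illustrative redshifts (the z values are examples, not measurements cited above):

    # CMB temperature predicted at redshift z: T(z) = T0 * (1 + z).
    T0 = 2.725  # present-day CMB temperature in kelvin
    for z in (0.5, 1.0, 2.34):
        print(f"z = {z}: predicted CMB temperature = {T0 * (1 + z):.2f} K")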
Future gravitational-wave observatories might be able to detect primordial gravitational waves, relics of the early universe from less than a second after the Big Bang.
As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang models. Some of these mysteries and problems have been resolved while others are still outstanding. Proposed solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example, the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved with inflation theory, but the details of the inflationary universe are still left unresolved and many, including some founders of the theory, say it has been disproven. What follows is a list of the mysterious aspects of the Big Bang concept still under intense investigation by cosmologists and astrophysicists.
It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the universe was young and very hot it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. However, observations suggest that the universe, including its most distant parts, is made almost entirely of normal matter, rather than antimatter. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number is not conserved, that C-symmetry and CP-symmetry are violated and that the universe depart from thermodynamic equilibrium. All these conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon asymmetry.
Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the universe has been accelerating since the universe was about half its present age. To explain this acceleration, general relativity requires that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark energy".
Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be measured from its gravitational clustering, and is found to have only about 30% of the critical density. Since theory suggests that dark energy does not cluster in the usual way, it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure as a cosmic ruler.
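The critical density mentioned here follows directly from the Friedmann equation, rho_c = 3 H0² / (8 π G). A short sketch, assuming H0 = 70 km/s/Mpc as a round number:

    # Critical density from the Friedmann equation: rho_c = 3 * H0^2 / (8 * pi * G).
    import math

    G = 6.674e-11                          # gravitational constant, m^3 kg^-1 s^-2
    H0 = 70.0 * 1.0e3 / 3.0857e22          # km/s/Mpc converted to 1/s
    rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)
    print(f"critical density ~ {rho_c:.1e} kg/m^3")           # ~9.2e-27, a few H atoms per m^3
    print(f"clustered matter at ~30%: {0.3 * rho_c:.1e} kg/m^3")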
Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy remains one of the great mysteries of the Big Bang. Results from the WMAP team in 2008 are in accordance with a universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1% neutrinos. According to theory, the energy density in matter decreases with the expansion of the universe, but the dark energy density remains constant (or nearly so) as the universe expands. Therefore, matter made up a larger fraction of the total energy of the universe in the past than it does today, but its fractional contribution will fall in the far future as dark energy becomes even more dominant.
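The shifting balance described here can be made concrete with the scalings rho_matter ∝ a⁻³ and rho_Λ = constant. A sketch using the WMAP-era fractions quoted above, grouping dark and regular matter together as roughly 27%:

    # Fraction of the total energy density in matter as the universe expands,
    # for matter diluting as a^-3 and a constant dark energy density.
    omega_l, omega_m = 0.73, 0.27          # present-day fractions (a = 1)
    for a in (0.1, 0.5, 1.0, 2.0):
        matter = omega_m / a**3
        print(f"a = {a}: matter fraction = {matter / (matter + omega_l):.2f}")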
The dark energy component of the universe has been explained by theorists using a variety of competing theories including Einstein's cosmological constant but also extending to more exotic forms of quintessence or other modified gravity schemes. The cosmological constant problem, sometimes called the "most embarrassing problem in physics", results from the apparent discrepancy between the measured energy density of dark energy, and the one naively predicted from Planck units.
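The size of that discrepancy can be estimated by comparing the observed dark energy density with the Planck density, the naive scale suggested by Planck units. A rough order-of-magnitude sketch (H0 = 70 km/s/Mpc and a 70% dark energy fraction are assumed round numbers):

    # The cosmological constant problem in one number: the ratio of the Planck
    # density to the observed dark energy density is ~120 orders of magnitude.
    import math

    G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
    H0 = 70.0 * 1.0e3 / 3.0857e22                      # 1/s
    rho_de = 0.7 * 3.0 * H0**2 / (8.0 * math.pi * G)   # observed dark energy density, kg/m^3
    rho_planck = c**5 / (hbar * G**2)                  # Planck density, kg/m^3
    print(f"discrepancy ~ 10^{math.log10(rho_planck / rho_de):.0f}")  # ~10^123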
During the 1970s and the 1980s, various observations showed that there is not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the universe is mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter has always been controversial, it is inferred by various observations: the anisotropies in the CMB, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters.
Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter particles have been observed in laboratories. Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway.
Additionally, there are outstanding problems associated with the currently favored cold dark matter model which include the dwarf galaxy problem and the cuspy halo problem. Alternative theories have been proposed that do not require a large amount of undetected matter, but instead modify the laws of gravity established by Newton and Einstein; yet no alternative theory has been as successful as the cold dark matter proposal in explaining all extant observations.
The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature.
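The 2-degree figure can be reproduced numerically by comparing the comoving particle horizon at last scattering with the comoving distance to the last-scattering surface. A sketch assuming flat ΛCDM with illustrative parameters (H0 = 67.7 km/s/Mpc, Ωm = 0.31, Ωr = 9×10⁻⁵, ΩΛ = 0.69, recombination at z ≈ 1090):

    # Angular size of a causally connected patch on the CMB sky.
    import math
    from scipy.integrate import quad

    H0 = 67.7 * 1.0e3 / 3.0857e22          # 1/s
    om, orad, ol = 0.31, 9.0e-5, 0.69
    c = 2.998e8                            # m/s
    a_rec = 1.0 / 1091.0                   # scale factor at recombination (z ~ 1090)

    def H(a):
        return H0 * math.sqrt(om / a**3 + orad / a**4 + ol)

    # Comoving distances: horizon radius at recombination, and distance to the CMB.
    chi_horizon, _ = quad(lambda a: c / (a**2 * H(a)), 1e-12, a_rec)
    chi_cmb, _ = quad(lambda a: c / (a**2 * H(a)), a_rec, 1.0)
    patch = math.degrees(2.0 * chi_horizon / chi_cmb)
    print(f"causal patch diameter on the sky ~ {patch:.1f} degrees")  # ~2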
A resolution to this apparent inconsistency is offered by inflation theory in which a homogeneous and isotropic scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation.
Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to a cosmic scale. These fluctuations served as the seeds for all the current structures in the universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been confirmed by measurements of the CMB.
A related issue to the classic horizon problem arises because in most standard cosmological inflation models, inflation ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale discontinuities in the electroweak vacuum since distant parts of the observable universe were causally separate when the electroweak epoch ended.
The magnetic monopole objection was raised in the late 1970s. Grand unified theories (GUTs) predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been found. This problem is resolved by cosmic inflation, which removes all point defects from the observable universe, in the same way that it drives the geometry to flatness.
The flatness problem (also known as the oldness problem) is an observational problem associated with a Friedmann–Lemaître–Robertson–Walker (FLRW) metric. The universe may have positive, negative, or zero spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical density; positive if greater; and zero at the critical density, in which case space is said to be flat. Observations indicate the universe is consistent with being flat.
The problem is that any small departure from the critical density grows with time, and yet the universe today remains very close to flat. Given that a natural timescale for departure from flatness might be the Planck time, 10⁻⁴³ seconds, the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the density of the universe must have been within one part in 10¹⁴ of its critical value, or it would not exist as it does today.
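The scale of that fine-tuning follows from how the curvature term grows: |Ω − 1| grows roughly in proportion to a² during radiation domination and to a during matter domination. A sketch with illustrative round-number scale factors (a ≈ 3×10⁻⁴ at matter–radiation equality, a ≈ 3×10⁻⁹ at nucleosynthesis), which lands within a couple of orders of magnitude of the one-part-in-10¹⁴ figure above:

    # How flat the universe must have been at nucleosynthesis, extrapolating
    # today's near-flatness backwards with |Omega - 1| ~ a (matter era)
    # and |Omega - 1| ~ a^2 (radiation era).
    dev_today = 0.01                       # generous bound on |Omega - 1| now
    a_eq, a_nuc = 3e-4, 3e-9               # assumed round-number scale factors

    dev_at_eq = dev_today * a_eq                       # matter-dominated era
    dev_at_nuc = dev_at_eq * (a_nuc / a_eq) ** 2       # radiation-dominated era
    print(f"|Omega - 1| at nucleosynthesis <~ {dev_at_nuc:.0e}")  # ~3e-16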
One of the common misconceptions about the Big Bang model is that it fully explains the origin of the universe. However, the Big Bang model does not describe how energy, time, and space were caused, but rather it describes the emergence of the present universe from an ultra-dense and high-temperature initial state. It is misleading to visualize the Big Bang by comparing its size to everyday objects. When the size of the universe at the Big Bang is described, it refers to the size of the observable universe, and not the entire universe.
Another common misconception is that the Big Bang must be understood as the expansion of space and not in terms of the contents of space exploding apart. In fact, either description can be accurate. The expansion of space (implied by the FLRW metric) is only a mathematical convention, corresponding to a choice of coordinates on spacetime. There is no generally covariant sense in which space expands.
The recession speeds associated with Hubble's law are not velocities in a relativistic sense (for example, they are not related to the spatial components of 4-velocities). Therefore, it is not remarkable that according to Hubble's law, galaxies farther than the Hubble distance recede faster than the speed of light. Such recession speeds do not correspond to faster-than-light travel.
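A quick numeric illustration of this point, with an assumed round-number H0 = 70 km/s/Mpc: any galaxy whose proper distance exceeds the Hubble distance c/H0 has a recession speed greater than c under Hubble's law.

    # Recession speeds under Hubble's law, v = H0 * D, in units of c.
    c_km_s = 2.998e5                       # speed of light, km/s
    H0 = 70.0                              # km/s per Mpc
    print(f"Hubble distance c/H0 ~ {c_km_s / H0:.0f} Mpc")  # ~4300 Mpc
    for d_mpc in (1000, 5000, 10000):
        print(f"D = {d_mpc} Mpc: v = {H0 * d_mpc / c_km_s:.2f} c")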
Many popular accounts attribute the cosmological redshift to the expansion of space. This can be misleading because the expansion of space is only a coordinate choice. The most natural interpretation of the cosmological redshift is that it is a Doppler shift.
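Under the special-relativistic Doppler interpretation, redshift and velocity are related by 1 + z = sqrt((1 + v/c) / (1 − v/c)), so every finite redshift maps to a speed below c. A minimal check:

    # Relativistic Doppler shift: redshift as a function of beta = v / c.
    import math

    def doppler_redshift(beta):
        return math.sqrt((1.0 + beta) / (1.0 - beta)) - 1.0

    for beta in (0.1, 0.5, 0.9):
        print(f"v = {beta:.1f} c: z = {doppler_redshift(beta):.2f}")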
Given current understanding, scientific extrapolations about the future of the universe are only possible for finite durations, albeit for much longer periods than the current age of the universe. Anything beyond that becomes increasingly speculative. Likewise, at present, a proper understanding of the origin of the universe can only be subject to conjecture.
The Big Bang explains the evolution of the universe from a starting density and temperature that is well beyond humanity's capability to replicate, so extrapolations to the most extreme conditions and earliest times are necessarily more speculative. Lemaître called this initial state the "primeval atom" while Gamow called the material "ylem". How the initial state of the universe originated is still an open question, but the Big Bang model does constrain some of its characteristics. For example, specific laws of nature most likely came to existence in a random way, but as inflation models show, some combinations of these are far more probable. A flat universe implies a balance between gravitational potential energy and other energy forms, requiring no additional energy to be created.
The Big Bang theory, built upon the equations of classical general relativity, indicates a singularity at the origin of cosmic time, and such an infinite energy density may be a physical impossibility. However, the physical theories of general relativity and quantum mechanics as currently realized are not applicable before the Planck epoch, and correcting this will require the development of a correct treatment of quantum gravity. Certain quantum gravity treatments, such as the Wheeler–DeWitt equation, imply that time itself could be an emergent property. As such, physics may conclude that time did not exist before the Big Bang.
While it is not known what could have preceded the hot dense state of the early universe or how and why it originated, or even whether such questions are sensible, speculation abounds on the subject of "cosmogony".
Some speculative proposals in this regard, each of which entails untested hypotheses, are:
Proposals in the last two categories see the Big Bang as an event in either a much larger and older universe or in a multiverse.
Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch.
Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out, leaving white dwarfs, neutron stars, and black holes. Collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would very gradually asymptotically approach absolute zero—a Big Freeze. Moreover, if protons are unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death.
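The timescales involved are staggering; for example, the standard Hawking evaporation estimate, t ≈ 5120 π G² M³ / (ħ c⁴), gives roughly 10⁶⁷ years for a solar-mass black hole. A quick sketch:

    # Hawking evaporation time for a black hole of mass M:
    # t ~ 5120 * pi * G^2 * M^3 / (hbar * c^4).
    import math

    G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
    M_sun = 1.989e30                       # kg
    t_evap = 5120.0 * math.pi * G**2 * M_sun**3 / (hbar * c**4)
    print(f"solar-mass black hole evaporates in ~ {t_evap / 3.156e7:.0e} years")  # ~2e67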
Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip.
As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a result, it has become one of the liveliest areas in the discourse between science and religion. Some believe the Big Bang implies a creator, while others argue that Big Bang cosmology makes the notion of a creator superfluous.
"title": "Problems and related issues in physics"
},
{
"paragraph_id": 70,
"text": "A resolution to this apparent inconsistency is offered by inflation theory in which a homogeneous and isotropic scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation.",
"title": "Problems and related issues in physics"
},
{
"paragraph_id": 71,
"text": "Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to a cosmic scale. These fluctuations served as the seeds for all the current structures in the universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been confirmed by measurements of the CMB.",
"title": "Problems and related issues in physics"
},
{
"paragraph_id": 72,
"text": "A related issue to the classic horizon problem arises because in most standard cosmological inflation models, inflation ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale discontinuities in the electroweak vacuum since distant parts of the observable universe were causally separate when the electroweak epoch ended.",
"title": "Problems and related issues in physics"
},
{
"paragraph_id": 73,
"text": "The magnetic monopole objection was raised in the late 1970s. Grand unified theories (GUTs) predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been found. This problem is resolved by cosmic inflation, which removes all point defects from the observable universe, in the same way that it drives the geometry to flatness.",
"title": "Problems and related issues in physics"
},
{
"paragraph_id": 74,
"text": "The flatness problem (also known as the oldness problem) is an observational problem associated with a FLRW. The universe may have positive, negative, or zero spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical density; positive if greater; and zero at the critical density, in which case space is said to be flat. Observations indicate the universe is consistent with being flat.",
"title": "Problems and related issues in physics"
},
{
"paragraph_id": 75,
"text": "The problem is that any small departure from the critical density grows with time, and yet the universe today remains very close to flat. Given that a natural timescale for departure from flatness might be the Planck time, 10 seconds, the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the density of the universe must have been within one part in 10 of its critical value, or it would not exist as it does today.",
"title": "Problems and related issues in physics"
},
{
"paragraph_id": 76,
"text": "One of the common misconceptions about the Big Bang model is that it fully explains the origin of the universe. However, the Big Bang model does not describe how energy, time, and space were caused, but rather it describes the emergence of the present universe from an ultra-dense and high-temperature initial state. It is misleading to visualize the Big Bang by comparing its size to everyday objects. When the size of the universe at Big Bang is described, it refers to the size of the observable universe, and not the entire universe.",
"title": "Misconceptions"
},
{
"paragraph_id": 77,
"text": "Another common misconception is that the Big Bang must be understood as the expansion of space and not in terms of the contents of space exploding apart. In fact, either description can be accurate. The expansion of space (implied by the FLRW metric) is only a mathematical convention, corresponding to a choice of coordinates on spacetime. There is no generally covariant sense in which space expands.",
"title": "Misconceptions"
},
{
"paragraph_id": 78,
"text": "The recession speeds associated with Hubble's law are not velocities in a relativistic sense (for example, they are not related to the spatial components of 4-velocities). Therefore, it is not remarkable that according to Hubble's law, galaxies farther than the Hubble distance recede faster than the speed of light. Such recession speeds do not correspond to faster-than-light travel.",
"title": "Misconceptions"
},
{
"paragraph_id": 79,
"text": "Many popular accounts attribute the cosmological redshift to the expansion of space. This can be misleading because the expansion of space is only a coordinate choice. The most natural interpretation of the cosmological redshift is that it is a Doppler shift.",
"title": "Misconceptions"
},
{
"paragraph_id": 80,
"text": "Given current understanding, scientific extrapolations about the future of the universe are only possible for finite durations, albeit for much longer periods than the current age of the universe. Anything beyond that becomes increasingly speculative. Likewise, at present, a proper understanding of the origin of the universe can only be subject to conjecture.",
"title": "Implications"
},
{
"paragraph_id": 81,
"text": "The Big Bang explains the evolution of the universe from a starting density and temperature that is well beyond humanity's capability to replicate, so extrapolations to the most extreme conditions and earliest times are necessarily more speculative. Lemaître called this initial state the \"primeval atom\" while Gamow called the material \"ylem\". How the initial state of the universe originated is still an open question, but the Big Bang model does constrain some of its characteristics. For example, specific laws of nature most likely came to existence in a random way, but as inflation models show, some combinations of these are far more probable. A flat universe implies a balance between gravitational potential energy and other energy forms, requiring no additional energy to be created.",
"title": "Implications"
},
{
"paragraph_id": 82,
"text": "The Big Bang theory, built upon the equations of classical general relativity, indicates a singularity at the origin of cosmic time, and such an infinite energy density may be a physical impossibility. However, the physical theories of general relativity and quantum mechanics as currently realized are not applicable before the Planck epoch, and correcting this will require the development of a correct treatment of quantum gravity. Certain quantum gravity treatments, such as the Wheeler–DeWitt equation, imply that time itself could be an emergent property. As such, physics may conclude that time did not exist before the Big Bang.",
"title": "Implications"
},
{
"paragraph_id": 83,
"text": "While it is not known what could have preceded the hot dense state of the early universe or how and why it originated, or even whether such questions are sensible, speculation abounds on the subject of \"cosmogony\".",
"title": "Implications"
},
{
"paragraph_id": 84,
"text": "Some speculative proposals in this regard, each of which entails untested hypotheses, are:",
"title": "Implications"
},
{
"paragraph_id": 85,
"text": "Proposals in the last two categories see the Big Bang as an event in either a much larger and older universe or in a multiverse.",
"title": "Implications"
},
{
"paragraph_id": 86,
"text": "Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch.",
"title": "Implications"
},
{
"paragraph_id": 87,
"text": "Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out, leaving white dwarfs, neutron stars, and black holes. Collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would very gradually asymptotically approach absolute zero—a Big Freeze. Moreover, if protons are unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death.",
"title": "Implications"
},
{
"paragraph_id": 88,
"text": "Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip.",
"title": "Implications"
},
{
"paragraph_id": 89,
"text": "As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a result, it has become one of the liveliest areas in the discourse between science and religion. Some believe the Big Bang implies a creator, while others argue that Big Bang cosmology makes the notion of a creator superfluous.",
"title": "Implications"
}
] | The Big Bang is a physical theory that describes how the universe expanded from an initial state of high density and temperature. It was first proposed in 1927 by Roman Catholic priest and physicist Georges Lemaître. Various cosmological models of the Big Bang explain the evolution of the observable universe from the earliest known periods through its subsequent large-scale form. These models offer a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. The overall uniformity of the Universe, known as the flatness problem, is explained through cosmic inflation: a sudden and very rapid expansion of space during the earliest moments. However, physics currently lacks a widely accepted theory of quantum gravity that can successfully model the earliest conditions of the Big Bang. Crucially, these models are compatible with the Hubble–Lemaître law—the observation that the farther away a galaxy is, the faster it is moving away from Earth. Extrapolating this cosmic expansion backwards in time using the known laws of physics, the models describe an increasingly concentrated cosmos preceded by a singularity in which space and time lose meaning. In 1964 the CMB was discovered, which convinced many cosmologists that the competing steady-state model of cosmic evolution was falsified, since the Big Bang models predict a uniform background radiation caused by high temperatures and densities in the distant past. A wide range of empirical evidence strongly favors the Big Bang event, which is now essentially universally accepted. Detailed measurements of the expansion rate of the universe place the Big Bang singularity at an estimated 13.787±0.020 billion years ago, which is considered the age of the universe. There remain aspects of the observed universe that are not yet adequately explained by the Big Bang models. After its initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later atoms. The unequal abundances of matter and antimatter that allowed this to occur are an unexplained effect known as baryon asymmetry. These primordial elements—mostly hydrogen, with some helium and lithium—later coalesced through gravity, forming early stars and galaxies. Astronomers observe the gravitational effects of an unknown dark matter surrounding galaxies. Most of the gravitational potential in the universe seems to be in this form, and the Big Bang models and various observations indicate that this excess gravitational potential is not created by baryonic matter, such as normal atoms. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to an unexplained phenomenon known as dark energy. | 2001-11-07T00:28:23Z | 2023-12-29T19:19:17Z | [
"Template:Authority control",
"Template:Sfn",
"Template:Annotated link",
"Template:Reflist",
"Template:Cite serial",
"Template:Cite magazine",
"Template:Google translation",
"Template:Refbegin",
"Template:Webarchive",
"Template:External Timeline",
"Template:Multiple image",
"Template:Cn",
"Template:Harvnb",
"Template:Cosmology topics",
"Template:Use dmy dates",
"Template:Cite AV media",
"Template:Rp",
"Template:Cite arXiv",
"Template:Bibcode",
"Template:Cbignore",
"Template:Short description",
"Template:About",
"Template:Pp-semi-indef",
"Template:Clarify",
"Template:Refend",
"Template:Spoken Wikipedia",
"Template:Doi",
"Template:Curlie",
"Template:Use American English",
"Template:Val",
"Template:Cite book",
"Template:Cite news",
"Template:Cite journal",
"Template:Cite conference",
"Template:Main",
"Template:Anchor",
"Template:Spaced ndash",
"Template:Cite web",
"Template:Cosmology",
"Template:Blockquote",
"Template:Quote box",
"Template:Big Bang timeline",
"Template:Big History",
"Template:Subject bar",
"Template:See also",
"Template:Refn",
"Template:Convert",
"Template:For"
] | https://en.wikipedia.org/wiki/Big_Bang |
4,119 | Bock | Bock (or bok; German: [bɔk]) is a strong beer that originated in Germany, usually a dark lager.
The style now known as Bock was first brewed in the 14th century in the Hanseatic town of Einbeck in Lower Saxony.
The style was later adopted in Bavaria by Munich brewers in the 17th century. Due to their Bavarian accent, citizens of Munich pronounced "Einbeck" as "ein Bock" ("a billy goat"), and thus the beer became known as "Bock". A goat often appears on bottle labels.
Bock is historically associated with special occasions, often religious festivals such as Christmas, Easter, or Lent (Lentenbock). Bock has a long history of being brewed and consumed by Bavarian monks as a source of nutrition during times of fasting.
Several substyles of Bock exist, including:
Traditional Bock is a sweet, relatively strong (6.3–7.6% by volume), lightly hopped lager registering between 20 and 30 International Bitterness Units (IBUs). The beer should be clear, with color ranging from light copper to brown, and a bountiful, persistent off-white head. The aroma should be malty and toasty, possibly with hints of alcohol, but no detectable hops or fruitiness. The mouthfeel is smooth, with low to moderate carbonation and no astringency. The taste is rich and toasty, sometimes with a bit of caramel. The low-to-undetectable presence of hops provides just enough bitterness so that the sweetness is not cloying and the aftertaste is muted.
The following (mostly US-based) commercial products are indicative of the style: Christmas Bock (Gunpowder Falls Brewing Company), Point Bock (Stevens Point Brewery), Einbecker Ur-Bock Dunkel, Pennsylvania Brewing St. Nick Bock, Aass Bock, Great Lakes Rockefeller Bock, Stegmaier Brewhouse Bock, and Nashville Brewing Company's Nashville Bock.
The Maibock style – also known as Heller Bock or Lente Bock in the Netherlands – is a strong pale lager, lighter in colour and with more hop presence.
Colour can range from deep gold to light amber with a large, creamy, persistent white head, and moderate to moderately high carbonation, while alcohol content ranges from 6.3% to 8.1% by volume. The flavour is typically less malty than a traditional Bock, and may be drier, hoppier, and more bitter, but still with a relatively low hop flavour, with a mild spicy or peppery quality from the hops, increased carbonation and alcohol content.
Doppelbock or Double Bock is a stronger version of traditional Bock that was first brewed in Munich by the Paulaner Friars, a mendicant order founded by St. Francis of Paola.
Historically, Doppelbock was high in alcohol and sweetness. The story is told that it served as "liquid bread" for the Friars during times of fasting when solid food was not permitted. However, historian Mark Dredge, in his book A Brief History of Lager, says that this story is myth and that the monks produced Doppelbock to supplement their order's vegetarian diet all year.
Today, Doppelbock is still strong – ranging from 7% to 12% or more by volume. It is clear, with colour ranging from dark gold, for the paler version, to dark brown with ruby highlights for a darker version. It has a large, creamy, persistent head (although head retention may be impaired by alcohol in the stronger versions). The aroma is intensely malty, with some toasty notes, and possibly some alcohol presence as well; darker versions may have a chocolate-like or fruity aroma. The flavour is very rich and malty, with noticeable alcoholic strength, and little or no detectable hops (16–26 IBUs).
Paler versions may have a drier finish. The monks who originally brewed Doppelbock named their beer "Salvator" (literally "Savior", but actually a malapropism for "Sankt Vater", "St. Father", originally brewed for the feast of St. Francis of Paola on 2 April which often falls in Lent), which today is trademarked by Paulaner.
Brewers of modern Doppelbock often add "-ator" to their beer's name as a signpost of the style; there are 200 "-ator" Doppelbock names registered with the German patent office.
The following are representative examples of the style: Paulaner Salvator, Ayinger Celebrator, Weihenstephaner Korbinian, Andechser Doppelbock Dunkel, Spaten Optimator, Augustiner Brau Maximator, Tucher Bajuvator, Weltenburger Kloster Asam-Bock, Capital Autumnal Fire, EKU 28, Eggenberg Urbock 23º, Bell's Consecrator, Moretti La Rossa, Samuel Adams Double Bock, Tröegs Tröegenator Double Bock, Wasatch Brewery Devastator, Great Lakes Doppelrock, Abita Andygator, Wolverine State Brewing Company Predator, Burly Brewing's Burlynator, Monteith's Doppel Bock, and Christian Moerlein Emancipator Doppelbock.
Eisbock is a traditional specialty beer of the Kulmbach district of Bavaria, made by partially freezing a Doppelbock and removing the water ice to concentrate the flavour and alcohol content, which ranges from 8.6% to 14.3% by volume.
It is clear, with a colour ranging from deep copper to dark brown, often with ruby highlights. Although it can pour with a thin off-white head, head retention is frequently impaired by the higher alcohol content. The aroma is intense, with no hop presence, but frequently can contain fruity notes, especially of prunes, raisins, and plums. Mouthfeel is full and smooth, with significant alcohol, although this should not be hot or sharp. The flavour is rich and sweet, often with toasty notes, and sometimes hints of chocolate, always balanced by a significant alcohol presence.
The following are representative examples of the style: Colorado Team Brew "Warning Sign", Kulmbacher Reichelbräu Eisbock, Eggenberg, Schneider Aventinus Eisbock, Urbock Dunkel Eisbock, Franconia Brewing Company Ice Bock 17%.
The strongest ice beer, Strength in Numbers, was a one-time collaboration in 2020 between Schorschbrau of Germany and BrewDog of Scotland, who had competed with each other in the early years of the 21st century to produce the world's strongest beer. Strength in Numbers was created using traditional ice distillation, reaching a final strength of 57.8% ABV.
Weizenbock is a style that replaces some of the barley in the grain bill with 40–60% wheat. It was first produced in Bavaria in 1907 by G. Schneider & Sohn and was named Aventinus after 16th-century Bavarian historian Johannes Aventinus. The style combines darker Munich malts and top-fermenting wheat beer yeast, brewed at the strength of a Doppelbock. | [
{
"paragraph_id": 0,
"text": "Bock (or bok (German: [bɔk] ) is a strong beer originated in Germany, usually a dark lager.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The style now known as Bock was first brewed in the 14th century in the Hanseatic town of Einbeck in Lower Saxony.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "The style was later adopted in Bavaria by Munich brewers in the 17th century. Due to their Bavarian accent, citizens of Munich pronounced \"Einbeck\" as \"ein Bock\" (\"a billy goat\"), and thus the beer became known as \"Bock\". A goat often appears on bottle labels.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Bock is historically associated with special occasions, often religious festivals such as Christmas, Easter, or Lent (Lentenbock). Bock has a long history of being brewed and consumed by Bavarian monks as a source of nutrition during times of fasting.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Several substyles of Bock exist, including:",
"title": "Styles"
},
{
"paragraph_id": 5,
"text": "Traditional Bock is a sweet, relatively strong (6.3–7.6% by volume), lightly hopped lager registering between 20 and 30 International Bitterness Units (IBUs). The beer should be clear, with color ranging from light copper to brown, and a bountiful, persistent off-white head. The aroma should be malty and toasty, possibly with hints of alcohol, but no detectable hops or fruitiness. The mouthfeel is smooth, with low to moderate carbonation and no astringency. The taste is rich and toasty, sometimes with a bit of caramel. The low-to-undetectable presence of hops provides just enough bitterness so that the sweetness is not cloying and the aftertaste is muted.",
"title": "Styles"
},
{
"paragraph_id": 6,
"text": "The following (mostly US-based) commercial products are indicative of the style: Christmas Bock (Gunpowder Falls Brewing Company), Point Bock (Stevens Point Brewery) Einbecker Ur-Bock Dunkel, Pennsylvania Brewing St. Nick Bock, Aass Bock, Great Lakes Rockefeller Bock, Stegmaier Brewhouse Bock, and Nashville Brewing Company's Nashville Bock.",
"title": "Styles"
},
{
"paragraph_id": 7,
"text": "The Maibock style – also known as Heller Bock or Lente Bock in the Netherlands – is a strong pale lager, lighter in colour and with more hop presence.",
"title": "Styles"
},
{
"paragraph_id": 8,
"text": "Colour can range from deep gold to light amber with a large, creamy, persistent white head, and moderate to moderately high carbonation, while alcohol content ranges from 6.3% to 8.1% by volume. The flavour is typically less malty than a traditional Bock, and may be drier, hoppier, and more bitter, but still with a relatively low hop flavour, with a mild spicy or peppery quality from the hops, increased carbonation and alcohol content.",
"title": "Styles"
},
{
"paragraph_id": 9,
"text": "Doppelbock or Double Bock is a stronger version of traditional Bock that was first brewed in Munich by the Paulaner Friars, a Franciscan order founded by St. Francis of Paula.",
"title": "Styles"
},
{
"paragraph_id": 10,
"text": "Historically, Doppelbock was high in alcohol and sweetness. The story is told that it served as \"liquid bread\" for the Friars during times of fasting when solid food was not permitted. However, historian Mark Dredge, in his book A Brief History of Lager, says that this story is myth and that the monks produced Doppelbock to supplement their order's vegetarian diet all year.",
"title": "Styles"
},
{
"paragraph_id": 11,
"text": "Today, Doppelbock is still strong – ranging from 7% to 12% or more by volume. It is clear, with colour ranging from dark gold, for the paler version, to dark brown with ruby highlights for a darker version. It has a large, creamy, persistent head (although head retention may be impaired by alcohol in the stronger versions). The aroma is intensely malty, with some toasty notes, and possibly some alcohol presence as well; darker versions may have a chocolate-like or fruity aroma. The flavour is very rich and malty, with noticeable alcoholic strength, and little or no detectable hops (16–26 IBUs).",
"title": "Styles"
},
{
"paragraph_id": 12,
"text": "Paler versions may have a drier finish. The monks who originally brewed Doppelbock named their beer \"Salvator\" (literally \"Savior\", but actually a malapropism for \"Sankt Vater\", \"St. Father\", originally brewed for the feast of St. Francis of Paola on 2 April which often falls in Lent), which today is trademarked by Paulaner.",
"title": "Styles"
},
{
"paragraph_id": 13,
"text": "Brewers of modern Doppelbock often add \"-ator\" to their beer's name as a signpost of the style; there are 200 \"-ator\" Doppelbock names registered with the German patent office.",
"title": "Styles"
},
{
"paragraph_id": 14,
"text": "The following are representative examples of the style: Paulaner Salvator, Ayinger Celebrator, Weihenstephaner Korbinian, Andechser Doppelbock Dunkel, Spaten Optimator, Augustiner Brau Maximator, Tucher Bajuvator, Weltenburger Kloster Asam-Bock, Capital Autumnal Fire, EKU 28, Eggenberg Urbock 23º, Bell's Consecrator, Moretti La Rossa, Samuel Adams Double Bock, Tröegs Tröegenator Double Bock, Wasatch Brewery Devastator, Great Lakes Doppelrock, Abita Andygator, Wolverine State Brewing Company Predator, Burly Brewing's Burlynator, Monteith's Doppel Bock, and Christian Moerlein Emancipator Doppelbock.",
"title": "Styles"
},
{
"paragraph_id": 15,
"text": "Eisbock is a traditional specialty beer of the Kulmbach district of Bavaria, made by partially freezing a Doppelbock and removing the water ice to concentrate the flavour and alcohol content, which ranges from 8.6% to 14.3% by volume.",
"title": "Styles"
},
{
"paragraph_id": 16,
"text": "It is clear, with a colour ranging from deep copper to dark brown in colour, often with ruby highlights. Although it can pour with a thin off-white head, head retention is frequently impaired by the higher alcohol content. The aroma is intense, with no hop presence, but frequently can contain fruity notes, especially of prunes, raisins, and plums. Mouthfeel is full and smooth, with significant alcohol, although this should not be hot or sharp. The flavour is rich and sweet, often with toasty notes, and sometimes hints of chocolate, always balanced by a significant alcohol presence.",
"title": "Styles"
},
{
"paragraph_id": 17,
"text": "The following are representative examples of the style: Colorado Team Brew \"Warning Sign\", Kulmbacher Reichelbräu Eisbock, Eggenberg, Schneider Aventinus Eisbock, Urbock Dunkel Eisbock, Franconia Brewing Company Ice Bock 17%.",
"title": "Styles"
},
{
"paragraph_id": 18,
"text": "The strongest ice beer, Strength in Numbers, was a one-time collaboration in 2020 between Schorschbrau of Germany and BrewDog of Scotland, who had competed with each other in the early years of the 21st century to produce the world's strongest beer. Strength in Numbers was created using traditional ice distillation, reaching a final strength of 57.8% ABV.",
"title": "Styles"
},
{
"paragraph_id": 19,
"text": "Weizenbock is a style that replaces some of the barley in the grain bill with 40–60% wheat. It was first produced in Bavaria in 1907 by G. Schneider & Sohn and was named Aventinus after 16th-century Bavarian historian Johannes Aventinus. The style combines darker Munich malts and top-fermenting wheat beer yeast, brewed at the strength of a Doppelbock.",
"title": "Styles"
}
] | Bock (or bok) is a strong beer that originated in Germany, usually a dark lager. | 2001-09-04T17:25:43Z | 2023-12-29T16:38:42Z | [
"Template:Reflist",
"Template:Snd",
"Template:Further",
"Template:Use dmy dates",
"Template:Use Oxford spelling",
"Template:IPA-de",
"Template:Beer Styles",
"Template:Short description",
"Template:Other uses",
"Template:Cite web",
"Template:Commons",
"Template:Authority control",
"Template:Infobox drink",
"Template:Citation needed",
"Template:Cite magazine",
"Template:Americana Poster",
"Template:Lang",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Bock |
4,124 | Bantu languages | The Bantu languages (English: UK: /ˌbænˈtuː/, US: /ˈbæntuː/; Proto-Bantu: *bantʊ̀) are a language family of about 600 languages that are spoken by the Bantu peoples of Central, Southern, Eastern and Southeast Africa. They form the largest branch of the Southern Bantoid languages.
The total number of Bantu languages is estimated at between 440 and 680 distinct languages, depending on the definition of "language" versus "dialect". Many Bantu languages borrow words from each other, and some are mutually intelligible.
The total number of Bantu speakers is estimated to be around 350 million in 2015 (roughly 30% of the population of Africa or 5% of the world population). Bantu languages are largely spoken southeast of Cameroon, and throughout Central, Southern, Eastern, and Southeast Africa. About one-sixth of Bantu speakers, and one-third of Bantu languages, are found in the Democratic Republic of the Congo.
The most widely spoken Bantu language by number of speakers is Swahili, with 16 million native speakers and 80 million L2 speakers (2015). Most native speakers of Swahili live in Tanzania, where it is a national language, while as a second language it is taught as a mandatory subject in many schools in East Africa, and is a lingua franca of the East African Community.
Other major Bantu languages include Lingala with more than 20 million speakers (Congo, DRC), Zulu with 12 million speakers (South Africa), Xhosa with 8.2 million speakers (South Africa and Zimbabwe), and Shona with less than 10 million speakers (if Manyika and Ndau are included), while Sotho-Tswana languages (Sotho, Tswana and Pedi) have more than 15 million speakers (across Botswana, Lesotho, South Africa, and Zambia). Zimbabwe has Kalanga, Matebele, Nambiya, and Xhosa speakers. Ethnologue separates the largely mutually intelligible Kinyarwanda and Kirundi, which together have 20 million speakers.
The similarity among dispersed Bantu languages had been observed as early as the 17th century. The term Bantu as a name for the group was coined (as Bâ-ntu) by Wilhelm Bleek in 1857 or 1858, and popularized in his Comparative Grammar of 1862. He coined the term to represent the word for "people" in loosely reconstructed Proto-Bantu, from the plural noun class prefix *ba- categorizing "people", and the root *ntʊ̀- "some (entity), any" (e.g. Xhosa umntu "person", abantu "people"; Zulu umuntu "person", abantu "people").
There is no indigenous term for the group, as Bantu-speaking populations refer to themselves by their endonyms, but did not have a concept for the larger ethno-linguistic phylum. Bleek's coinage was inspired by the anthropological observation of groups frequently self-identifying as "people" or "the true people" (as is the case, for example, with the term Khoikhoi, but this is a kare "praise address" and not an ethnic name).
The term narrow Bantu, excluding those languages classified as Bantoid by Guthrie (1948), was introduced in the 1960s.
The prefix ba- specifically refers to people. Endonymically, the term for cultural objects, including language, is formed with the ki- noun class (Nguni ísi-), as in KiSwahili (Swahili language and culture), IsiZulu (Zulu language and culture) and KiGanda (Ganda religion and culture).
In the 1980s, South African linguists suggested referring to these languages as KiNtu. The word kintu exists in some places, but it means "thing", with no relation to the concept of "language". In addition, delegates at the African Languages Association of Southern Africa conference in 1984 reported that, in some places, the term Kintu has a derogatory significance. This is because kintu refers to "things" and is used as a dehumanizing term for people who have lost their dignity.
In addition, Kintu is a figure in some mythologies.
In the 1990s, the term Kintu was still occasionally used by South African linguists. But in contemporary decolonial South African linguistics, the term Ntu languages is used.
The Bantu languages descend from a common Proto-Bantu language, which is believed to have been spoken in what is now Cameroon in Central Africa. An estimated 2,500–3,000 years ago (1000 BC to 500 BC), speakers of the Proto-Bantu language began a series of migrations eastward and southward, carrying agriculture with them. This Bantu expansion came to dominate Sub-Saharan Africa east of Cameroon, an area where Bantu peoples now constitute nearly the entire population. Some other sources estimate the Bantu Expansion started closer to 3000 BC.
The technical term Bantu, meaning "human beings" or simply "people", was first used by Wilhelm Bleek (1827–1875), as the concept is reflected in many of the languages of this group. A common characteristic of Bantu languages is that they use words such as muntu or mutu for "human being" or in simplistic terms "person", and the plural prefix for human nouns starting with mu- (class 1) in most languages is ba- (class 2), thus giving bantu for "people". Bleek, and later Carl Meinhof, pursued extensive studies comparing the grammatical structures of Bantu languages.
The most widely used classification is an alphanumeric coding system developed by Malcolm Guthrie in his 1948 classification of the Bantu languages. It is mainly geographic. The term "narrow Bantu" was coined by the Benue–Congo Working Group to distinguish Bantu as recognized by Guthrie, from the Bantoid languages not recognized as Bantu by Guthrie.
In recent times, the distinctiveness of Narrow Bantu as opposed to the other Southern Bantoid languages has been called into doubt (cf. Piron 1995, Williamson & Blench 2000, Blench 2011), but the term is still widely used.
There is no true genealogical classification of the (Narrow) Bantu languages. Until recently most attempted classifications only considered languages that happen to fall within traditional Narrow Bantu, but there seems to be a continuum with the related languages of South Bantoid.
At a broader level, the family is commonly split in two depending on the reflexes of proto-Bantu tone patterns: many Bantuists group together parts of zones A through D (the extent depending on the author) as Northwest Bantu or Forest Bantu, and the remainder as Central Bantu or Savanna Bantu. The two groups have been described as having mirror-image tone systems: where Northwest Bantu has a high tone in a cognate, Central Bantu languages generally have a low tone, and vice versa.
Northwest Bantu is more divergent internally than Central Bantu, and perhaps less conservative due to contact with non-Bantu Niger–Congo languages; Central Bantu is likely the innovative line cladistically. Northwest Bantu is clearly not a coherent family, but even for Central Bantu the evidence is lexical, with little evidence that it is a historically valid group.
Another attempt at a detailed genetic classification to replace the Guthrie system is the 1999 "Tervuren" proposal of Bastin, Coupez, and Mann. However, it relies on lexicostatistics, which, because of its reliance on overall similarity rather than shared innovations, may predict spurious groups of conservative languages that are not closely related. Meanwhile, Ethnologue has added languages to the Guthrie classification which Guthrie overlooked, while removing the Mbam languages (much of zone A), and shifting some languages between groups (much of zones D and E to a new zone J, for example, and part of zone L to K, and part of M to F) in an apparent effort at a semi-genetic, or at least semi-areal, classification. This has been criticized for sowing confusion in one of the few unambiguous ways to distinguish Bantu languages. Nurse & Philippson (2006) evaluate many proposals for low-level groups of Bantu languages, but the result is not a complete portrayal of the family. Glottolog has incorporated many of these into their classification.
The languages that share Dahl's law may also form a valid group, Northeast Bantu. The infobox at right lists these together with various low-level groups that are fairly uncontroversial, though they continue to be revised. The development of a rigorous genealogical classification of many branches of Niger–Congo, not just Bantu, is hampered by insufficient data.
Simplified phylogeny of northwestern branches of Bantu by Grollemund (2012):
Other computational phylogenetic analyses of Bantu include Currie et al. (2013), Grollemund et al. (2015), Rexova et al. (2006), Holden et al. (2016), and Whiteley et al. (2018).
Glottolog (2021) does not consider the older geographic classification by Guthrie relevant for its ongoing classification based on more recent linguistic studies, and divides Bantu into four main branches: Bantu A-B10-B20-B30, Central-Western Bantu, East Bantu and Mbam-Bube-Jarawan.
Guthrie reconstructed both the phonemic inventory and the vocabulary of Proto-Bantu.
The most prominent grammatical characteristic of Bantu languages is the extensive use of affixes (see Sotho grammar and Ganda noun classes for detailed discussions of these affixes). Each noun belongs to a class, and each language may have several numbered classes, somewhat like grammatical gender in European languages. The class is indicated by a prefix that is part of the noun, as well as agreement markers on verb and qualificative roots connected with the noun. Plurality is indicated by a change of class, with a resulting change of prefix. All Bantu languages are agglutinative.
The verb has a number of prefixes, though in the western languages these are often treated as independent words. In Swahili, for example, Mtoto mdogo amekisoma (for comparison, Kamwana kadoko karikuverenga in Shona language) means 'The small child has read it [a book]'. Mtoto 'child' governs the adjective prefix m- (representing the diminutive form of the word) and the verb subject prefix a-. Then comes perfect tense -me- and an object marker -ki- agreeing with implicit kitabu 'book' (from Arabic kitab). Pluralizing to 'children' gives Watoto wadogo wamekisoma (Vana vadoko varikuverenga in Shona), and pluralizing to 'books' (vitabu) gives watoto wadogo wamevisoma.
Bantu words are typically made up of open syllables of the type CV (consonant-vowel) with most languages having syllables exclusively of this type. The Bushong language recorded by Vansina, however, has final consonants, while slurring of the final syllable (though written) is reported as common among the Tonga of Malawi. The morphological shape of Bantu words is typically CV, VCV, CVCV, VCVCV, etc.; that is, any combination of CV (with possibly a V- syllable at the start). In other words, a strong claim for this language family is that almost all words end in a vowel, precisely because closed syllables (CVC) are not permissible in most of the documented languages, as far as is understood.
This tendency to avoid consonant clusters in some positions is important when words are imported from English or other non-Bantu languages. An example from Chewa: the word "school", borrowed from English, and then transformed to fit the sound patterns of this language, is sukulu. That is, sk- has been broken up by inserting an epenthetic -u-; -u has also been added at the end of the word. Another example is buledi for "bread". Similar effects are seen in loanwords for other non-African CV languages like Japanese. However, a clustering of sounds at the beginning of a syllable can be readily observed in such languages as Shona, and the Makua languages.
With few exceptions, such as Kiswahili and Rutooro, Bantu languages are tonal and have two to four register tones.
Reduplication is a common morphological phenomenon in Bantu languages and is usually used to indicate frequency or intensity of the action signalled by the (unreduplicated) verb stem.
Well-known words and names that have reduplication include:
Repetition emphasizes the repeated word in the context that it is used. For instance, "Mwenda pole hajikwai," means "He who goes slowly doesn't trip," while, "Pole pole ndio mwendo," means "A slow but steady pace wins the race." The latter repeats "pole" to emphasize the consistency of slowness of the pace.
As another example, "Haraka haraka" would mean "hurrying just for the sake of hurrying" (reckless hurry), as in "Njoo! Haraka haraka" [come here! Hurry, hurry].
In contrast, there are some words in some of the languages in which reduplication has the opposite meaning. It usually denotes short durations, or lower intensity of the action, and also means a few repetitions or a little bit more.
The following is a list of nominal classes in Bantu languages:
Virtually all Bantu languages have subject–verb–object (SVO) word order, with some exceptions such as the Nen language, which has subject–object–verb (SOV) word order.
Following is an incomplete list of the principal Bantu languages of each country. Included are those languages that constitute at least 1% of the population and have at least 10% of the number of speakers of the largest Bantu language in the country.
Most languages are referred to in English without the class prefix (Swahili, Tswana, Ndebele), but are sometimes seen with the (language-specific) prefix (Kiswahili, Setswana, Sindebele). In a few cases prefixes are used to distinguish languages with the same root in their name, such as Tshiluba and Kiluba (both Luba), Umbundu and Kimbundu (both Mbundu). The prefixless form typically does not occur in the language itself, but is the basis for other words based on the ethnicity. So, in the country of Botswana the people are the Batswana, one person is a Motswana, and the language is Setswana; and in Uganda, centred on the kingdom of Buganda, the dominant ethnicity are the Baganda (singular Muganda), whose language is Luganda.
According to the South African National Census of 2011
Map 1 shows Bantu languages in Africa and map 2 a magnification of the Benin, Nigeria and Cameroon area, as of July 2017.
A case has been made for borrowings of many place-names and even misremembered rhymes in the USA – chiefly from one of the Luba varieties.
Some words from various Bantu languages have been borrowed into western languages. These include:
Along with the Latin script and Arabic script orthographies, there are also some modern indigenous writing systems used for Bantu languages: | [
{
"paragraph_id": 0,
"text": "The Bantu languages (English: UK: /ˌbænˈtuː/, US: /ˈbæntuː/ Proto-Bantu: *bantʊ̀) are a language family of about 600 languages that are spoken by the Bantu peoples of Central, Southern, Eastern and Southeast Africa. They form the largest branch of the Southern Bantoid languages.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The total number of Bantu languages is estimated at between 440 and 680 distinct languages, depending on the definition of \"language\" versus \"dialect\". Many Bantu languages borrow words from each other, and some are mutually intelligible.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The total number of Bantu speakers is estimated to be around 350 million in 2015 (roughly 30% of the population of Africa or 5% of the world population). Bantu languages are largely spoken southeast of Cameroon, and throughout Central, Southern, Eastern, and Southeast Africa. About one-sixth of Bantu speakers, and one-third of Bantu languages, are found in the Democratic Republic of the Congo.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The most widely spoken Bantu language by number of speakers is Swahili, with 16 million native speakers and 80 million L2 speakers (2015). Most native speakers of Swahili live in Tanzania, where it is a national language, while as a second language it is taught as a mandatory subject in many schools in East Africa, and is a lingua franca of the East African Community.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Other major Bantu languages include Lingala with more than 20 million speakers (Congo, DRC), Zulu with 12 million speakers (South Africa), Xhosa with 8.2 million speakers (South Africa and Zimbabwe), and Shona with less than 10 million speakers (if Manyika and Ndau are included), while Sotho-Tswana languages (Sotho, Tswana and Pedi) have more than 15 million speakers (across Botswana, Lesotho, South Africa, and Zambia). Zimbabwe has Kalanga, Matebele, Nambiya, and Xhosa speakers. Ethnologue separates the largely mutually intelligible Kinyarwanda and Kirundi, which together have 20 million speakers.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The similarity among dispersed Bantu languages had been observed as early as the 17th century. The term Bantu as a name for the group was coined (as Bâ-ntu) by Wilhelm Bleek in 1857 or 1858, and popularized in his Comparative Grammar of 1862. He coined the term to represent the word for \"people\" in loosely reconstructed Proto-Bantu, from the plural noun class prefix *ba- categorizing \"people\", and the root *ntʊ̀- \"some (entity), any\" (e.g. Xhosa umntu \"person\", abantu \"people\"; Zulu umuntu \"person\", abantu \"people\").",
"title": "Name"
},
{
"paragraph_id": 6,
"text": "There is no indigenous term for the group, as Bantu-speaking populations refer to themselves by their endonyms, but did not have a concept for the larger ethno-linguistic phylum. Bleek's coinage was inspired by the anthropological observation of groups frequently self-identifying as \"people\" or \"the true people\" (as is the case, for example, with the term Khoikhoi, but this is a kare \"praise address\" and not an ethnic name).",
"title": "Name"
},
{
"paragraph_id": 7,
"text": "The term narrow Bantu, excluding those languages classified as Bantoid by Guthrie (1948), was introduced in the 1960s.",
"title": "Name"
},
{
"paragraph_id": 8,
"text": "The prefix ba- specifically refers to people. Endonymically, the term for cultural objects, including language, is formed with the ki- noun class (Nguni ísi-), as in KiSwahili (Swahili language and culture), IsiZulu (Zulu language and culture) and KiGanda (Ganda religion and culture).",
"title": "Name"
},
{
"paragraph_id": 9,
"text": "In the 1980s, South African linguists suggested referring to these languages as KiNtu. The word kintu exists in some places, but it means \"thing\", with no relation to the concept of \"language\". In addition, delegates at the African Languages Association of Southern Africa conference in 1984 reported that, in some places, the term Kintu has a derogatory significance. This is because kintu refers to \"things\" and is used as a dehumanizing term for people who have lost their dignity.",
"title": "Name"
},
{
"paragraph_id": 10,
"text": "In addition, Kintu is a figure in some mythologies.",
"title": "Name"
},
{
"paragraph_id": 11,
"text": "In the 1990s, the term Kintu was still occasionally used by South African linguists. But in contemporary decolonial South African linguistics, the term Ntu languages is used.",
"title": "Name"
},
{
"paragraph_id": 12,
"text": "The Bantu languages descend from a common Proto-Bantu language, which is believed to have been spoken in what is now Cameroon in Central Africa. An estimated 2,500–3,000 years ago (1000 BC to 500 BC), speakers of the Proto-Bantu language began a series of migrations eastward and southward, carrying agriculture with them. This Bantu expansion came to dominate Sub-Saharan Africa east of Cameroon, an area where Bantu peoples now constitute nearly the entire population. Some other sources estimate the Bantu Expansion started closer to 3000 BC.",
"title": "Origin"
},
{
"paragraph_id": 13,
"text": "The technical term Bantu, meaning \"human beings\" or simply \"people\", was first used by Wilhelm Bleek (1827–1875), as the concept is reflected in many of the languages of this group. A common characteristic of Bantu languages is that they use words such as muntu or mutu for \"human being\" or in simplistic terms \"person\", and the plural prefix for human nouns starting with mu- (class 1) in most languages is ba- (class 2), thus giving bantu for \"people\". Bleek, and later Carl Meinhof, pursued extensive studies comparing the grammatical structures of Bantu languages.",
"title": "Origin"
},
{
"paragraph_id": 14,
"text": "The most widely used classification is an alphanumeric coding system developed by Malcolm Guthrie in his 1948 classification of the Bantu languages. It is mainly geographic. The term \"narrow Bantu\" was coined by the Benue–Congo Working Group to distinguish Bantu as recognized by Guthrie, from the Bantoid languages not recognized as Bantu by Guthrie.",
"title": "Classification"
},
{
"paragraph_id": 15,
"text": "In recent times, the distinctiveness of Narrow Bantu as opposed to the other Southern Bantoid languages has been called into doubt (cf. Piron 1995, Williamson & Blench 2000, Blench 2011), but the term is still widely used.",
"title": "Classification"
},
{
"paragraph_id": 16,
"text": "There is no true genealogical classification of the (Narrow) Bantu languages. Until recently most attempted classifications only considered languages that happen to fall within traditional Narrow Bantu, but there seems to be a continuum with the related languages of South Bantoid.",
"title": "Classification"
},
{
"paragraph_id": 17,
"text": "At a broader level, the family is commonly split in two depending on the reflexes of proto-Bantu tone patterns: many Bantuists group together parts of zones A through D (the extent depending on the author) as Northwest Bantu or Forest Bantu, and the remainder as Central Bantu or Savanna Bantu. The two groups have been described as having mirror-image tone systems: where Northwest Bantu has a high tone in a cognate, Central Bantu languages generally have a low tone, and vice versa.",
"title": "Classification"
},
{
"paragraph_id": 18,
"text": "Northwest Bantu is more divergent internally than Central Bantu, and perhaps less conservative due to contact with non-Bantu Niger–Congo languages; Central Bantu is likely the innovative line cladistically. Northwest Bantu is clearly not a coherent family, but even for Central Bantu the evidence is lexical, with little evidence that it is a historically valid group.",
"title": "Classification"
},
{
"paragraph_id": 19,
"text": "Another attempt at a detailed genetic classification to replace the Guthrie system is the 1999 \"Tervuren\" proposal of Bastin, Coupez, and Mann. However, it relies on lexicostatistics, which, because of its reliance on overall similarity rather than shared innovations, may predict spurious groups of conservative languages that are not closely related. Meanwhile, Ethnologue has added languages to the Guthrie classification which Guthrie overlooked, while removing the Mbam languages (much of zone A), and shifting some languages between groups (much of zones D and E to a new zone J, for example, and part of zone L to K, and part of M to F) in an apparent effort at a semi-genetic, or at least semi-areal, classification. This has been criticized for sowing confusion in one of the few unambiguous ways to distinguish Bantu languages. Nurse & Philippson (2006) evaluate many proposals for low-level groups of Bantu languages, but the result is not a complete portrayal of the family. Glottolog has incorporated many of these into their classification.",
"title": "Classification"
},
{
"paragraph_id": 20,
"text": "The languages that share Dahl's law may also form a valid group, Northeast Bantu. The infobox at right lists these together with various low-level groups that are fairly uncontroversial, though they continue to be revised. The development of a rigorous genealogical classification of many branches of Niger–Congo, not just Bantu, is hampered by insufficient data.",
"title": "Classification"
},
{
"paragraph_id": 21,
"text": "Simplified phylogeny of northwestern branches of Bantu by Grollemund (2012):",
"title": "Classification"
},
{
"paragraph_id": 22,
"text": "Other computational phylogenetic analyses of Bantu include Currie et al. (2013), Grollemund et al. (2015), Rexova et al. 2006, Holden et al., 2016, and Whiteley et al. 2018.",
"title": "Classification"
},
{
"paragraph_id": 23,
"text": "Glottolog (2021) does not consider the older geographic classification by Guthrie relevant for its ongoing classification based on more recent linguistic studies, and divides Bantu into four main branches: Bantu A-B10-B20-B30, Central-Western Bantu, East Bantu and Mbam-Bube-Jarawan.",
"title": "Classification"
},
{
"paragraph_id": 24,
"text": "Guthrie reconstructed both the phonemic inventory and the vocabulary of Proto-Bantu.",
"title": "Language structure"
},
{
"paragraph_id": 25,
"text": "The most prominent grammatical characteristic of Bantu languages is the extensive use of affixes (see Sotho grammar and Ganda noun classes for detailed discussions of these affixes). Each noun belongs to a class, and each language may have several numbered classes, somewhat like grammatical gender in European languages. The class is indicated by a prefix that is part of the noun, as well as agreement markers on verb and qualificative roots connected with the noun. Plurality is indicated by a change of class, with a resulting change of prefix. All Bantu languages are agglutinative.",
"title": "Language structure"
},
{
"paragraph_id": 26,
"text": "The verb has a number of prefixes, though in the western languages these are often treated as independent words. In Swahili, for example, Mtoto mdogo amekisoma (for comparison, Kamwana kadoko karikuverenga in Shona language) means 'The small child has read it [a book]'. Mtoto 'child' governs the adjective prefix m- (representing the diminutive form of the word) and the verb subject prefix a-. Then comes perfect tense -me- and an object marker -ki- agreeing with implicit kitabu 'book' (from Arabic kitab). Pluralizing to 'children' gives Watoto wadogo wamekisoma (Vana vadoko varikuverenga in Shona), and pluralizing to 'books' (vitabu) gives watoto wadogo wamevisoma.",
"title": "Language structure"
},
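The slot-by-slot verb structure described in the entry above (subject prefix + tense marker + object marker + stem) can be illustrated with a minimal sketch. This is not from the source article: the function name and plain string concatenation are assumptions for illustration only, and real Bantu verb morphology involves more slots (negation, relative markers, verbal extensions) and phonological adjustments.

```python
# Toy sketch of the Swahili-style verb template discussed above:
# subject prefix + tense marker + object marker + verb stem.
# Illustrative assumption only; it models none of the sound changes
# or additional affix slots found in real Bantu verbs.

def build_verb(subject: str, tense: str, obj: str, stem: str) -> str:
    """Concatenate the four slots of the simplified verb template."""
    return subject + tense + obj + stem

# 'The small child has read it [a book]':
print("Mtoto mdogo " + build_verb("a", "me", "ki", "soma"))     # amekisoma
# Plural subject (wa-) and plural object (vi- 'books'):
print("Watoto wadogo " + build_verb("wa", "me", "vi", "soma"))  # wamevisoma
```

Running it reproduces the two agreement patterns cited in the entry: amekisoma with singular class-1 agreement, and wamevisoma when both subject and object are pluralized.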
{
"paragraph_id": 27,
"text": "Bantu words are typically made up of open syllables of the type CV (consonant-vowel) with most languages having syllables exclusively of this type. The Bushong language recorded by Vansina, however, has final consonants, while slurring of the final syllable (though written) is reported as common among the Tonga of Malawi. The morphological shape of Bantu words is typically CV, VCV, CVCV, VCVCV, etc.; that is, any combination of CV (with possibly a V- syllable at the start). In other words, a strong claim for this language family is that almost all words end in a vowel, precisely because closed syllables (CVC) are not permissible in most of the documented languages, as far as is understood.",
"title": "Language structure"
},
{
"paragraph_id": 28,
"text": "This tendency to avoid consonant clusters in some positions is important when words are imported from English or other non-Bantu languages. An example from Chewa: the word \"school\", borrowed from English, and then transformed to fit the sound patterns of this language, is sukulu. That is, sk- has been broken up by inserting an epenthetic -u-; -u has also been added at the end of the word. Another example is buledi for \"bread\". Similar effects are seen in loanwords for other non-African CV languages like Japanese. However, a clustering of sounds at the beginning of a syllable can be readily observed in such languages as Shona, and the Makua languages.",
"title": "Language structure"
},
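As a rough illustration of the epenthesis pattern just described, here is a minimal sketch. It assumes a fixed epenthetic vowel u and a simplified phonemic input string; real loanword adaptation also adjusts vowel quality and individual consonants (as in buledi 'bread'), which this deliberately does not model.

```python
# Minimal sketch of CV-repair by vowel epenthesis, as described above.
# Assumptions: a fixed epenthetic vowel "u" and simplified phonemic
# input; vowel harmony and consonant substitutions are ignored.

VOWELS = set("aeiou")

def adapt_to_cv(phonemic: str, epenthetic: str = "u") -> str:
    """Insert a vowel after every consonant not already followed by one."""
    out = []
    for i, ch in enumerate(phonemic):
        out.append(ch)
        next_is_vowel = i + 1 < len(phonemic) and phonemic[i + 1] in VOWELS
        if ch not in VOWELS and not next_is_vowel:
            out.append(epenthetic)  # break the cluster or open the final syllable
    return "".join(out)

print(adapt_to_cv("skul"))  # sukulu, matching Chewa sukulu 'school'
```

The single rule (insert a vowel after any consonant not followed by one) is enough to derive sukulu from phonemic skul, but the bread example shows why real adaptation needs more machinery than this sketch provides.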
{
"paragraph_id": 29,
"text": "With few exceptions, such as Kiswahili and Rutooro, Bantu languages are tonal and have two to four register tones.",
"title": "Language structure"
},
{
"paragraph_id": 30,
"text": "Reduplication is a common morphological phenomenon in Bantu languages and is usually used to indicate frequency or intensity of the action signalled by the (unreduplicated) verb stem.",
"title": "Language structure"
},
{
"paragraph_id": 31,
"text": "Well-known words and names that have reduplication include:",
"title": "Language structure"
},
{
"paragraph_id": 32,
"text": "Repetition emphasizes the repeated word in the context that it is used. For instance, \"Mwenda pole hajikwai,\" means \"He who goes slowly doesn't trip,\" while, \"Pole pole ndio mwendo,\" means \"A slow but steady pace wins the race.\" The latter repeats \"pole\" to emphasize the consistency of slowness of the pace.",
"title": "Language structure"
},
{
"paragraph_id": 33,
"text": "As another example, \"Haraka haraka\" would mean \"hurrying just for the sake of hurrying\" (reckless hurry), as in \"Njoo! Haraka haraka\" [come here! Hurry, hurry].",
"title": "Language structure"
},
{
"paragraph_id": 34,
"text": "In contrast, there are some words in some of the languages in which reduplication has the opposite meaning. It usually denotes short durations, or lower intensity of the action, and also means a few repetitions or a little bit more.",
"title": "Language structure"
},
{
"paragraph_id": 35,
"text": "The following is a list of nominal classes in Bantu languages:",
"title": "Language structure"
},
{
"paragraph_id": 36,
"text": "Virtually all Bantu languages have a Subject–verb–object word order with some exceptions such as the Nen language which has a Subject-Object-Verb word order.",
"title": "Language structure"
},
{
"paragraph_id": 37,
"text": "Following is an incomplete list of the principal Bantu languages of each country. Included are those languages that constitute at least 1% of the population and have at least 10% the number of speakers of the largest Bantu language in the country.",
"title": "By country"
},
{
"paragraph_id": 38,
"text": "Most languages are referred to in English without the class prefix (Swahili, Tswana, Ndebele), but are sometimes seen with the (language-specific) prefix (Kiswahili, Setswana, Sindebele). In a few cases prefixes are used to distinguish languages with the same root in their name, such as Tshiluba and Kiluba (both Luba), Umbundu and Kimbundu (both Mbundu). The prefixless form typically does not occur in the language itself, but is the basis for other words based on the ethnicity. So, in the country of Botswana the people are the Batswana, one person is a Motswana, and the language is Setswana; and in Uganda, centred on the kingdom of Buganda, the dominant ethnicity are the Baganda (singular Muganda), whose language is Luganda.",
"title": "By country"
},
{
"paragraph_id": 39,
"text": "According to the South African National Census of 2011",
"title": "By country"
},
{
"paragraph_id": 40,
"text": "Map 1 shows Bantu languages in Africa and map 2 a magnification of the Benin, Nigeria and Cameroon area, as of July 2017.",
"title": "Geographic areas"
},
{
"paragraph_id": 41,
"text": "A case has been made out for borrowings of many place-names and even misremembered rhymes – chiefly from one of the Luba varieties – in the USA.",
"title": "Bantu words popularised in western cultures"
},
{
"paragraph_id": 42,
"text": "Some words from various Bantu languages have been borrowed into western languages. These include:",
"title": "Bantu words popularised in western cultures"
},
{
"paragraph_id": 43,
"text": "Along with the Latin script and Arabic script orthographies, there are also some modern indigenous writing systems used for Bantu languages:",
"title": "Writing systems"
}
] | The Bantu languages are a language family of about 600 languages that are spoken by the Bantu peoples of Central, Southern, Eastern and Southeast Africa. They form the largest branch of the Southern Bantoid languages. The total number of Bantu languages is estimated at between 440 and 680 distinct languages, depending on the definition of "language" versus "dialect". Many Bantu languages borrow words from each other, and some are mutually intelligible. The total number of Bantu speakers is estimated to be around 350 million in 2015. Bantu languages are largely spoken southeast of Cameroon, and throughout Central, Southern, Eastern, and Southeast Africa. About one-sixth of Bantu speakers, and one-third of Bantu languages, are found in the Democratic Republic of the Congo. The most widely spoken Bantu language by number of speakers is Swahili, with 16 million native speakers and 80 million L2 speakers (2015). Most native speakers of Swahili live in Tanzania, where it is a national language, while as a second language it is taught as a mandatory subject in many schools in East Africa, and is a lingua franca of the East African Community. Other major Bantu languages include Lingala with more than 20 million speakers, Zulu with 12 million speakers, Xhosa with 8.2 million speakers, and Shona with fewer than 10 million speakers, while Sotho-Tswana languages have more than 15 million speakers. Zimbabwe has Kalanga, Matebele, Nambiya, and Xhosa speakers. Ethnologue separates the largely mutually intelligible Kinyarwanda and Kirundi, which together have 20 million speakers. | 2001-09-13T17:56:34Z | 2023-12-27T23:54:20Z | [
"Template:Cite LPD",
"Template:Webarchive",
"Template:Niger-Congo branches",
"Template:Infobox language family",
"Template:IPA-cen",
"Template:Main",
"Template:Wikt-lang",
"Template:Cite EPD",
"Template:Cite web",
"Template:Doi",
"Template:See also",
"Template:When",
"Template:Columns-list",
"Template:Reflist",
"Template:Cite book",
"Template:Bantu",
"Template:Full citation needed",
"Template:Citation needed",
"Template:Multiple image",
"Template:Cite journal",
"Template:Narrow Bantu languages",
"Template:Authority control",
"Template:Short description",
"Template:See",
"Template:Clade",
"Template:Unreferenced section"
] | https://en.wikipedia.org/wiki/Bantu_languages |
4,127 | Bearing | Bearing(s) may refer to: | [
{
"paragraph_id": 0,
"text": "Bearing(s) may refer to:",
"title": ""
}
] | Bearing(s) may refer to: Bearing (angle), a term for direction
Bearing (mechanical), a component that separates moving parts and takes a load
Bridge bearing, a component separating a bridge pier and deck
Bearing BTS Station in Bangkok
Bearings (album), by Ronnie Montrose in 2000 | 2023-07-25T17:54:55Z | [
"Template:Wiktionary",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Bearing |
4,130 | CIM-10 Bomarc | The Boeing CIM-10 Bomarc ("Boeing Michigan Aeronautical Research Center") (IM-99 Weapon System prior to September 1962) was a supersonic ramjet powered long-range surface-to-air missile (SAM) used during the Cold War for the air defense of North America. In addition to being the first operational long-range SAM and the first operational pulse doppler aviation radar, it was the only SAM deployed by the United States Air Force.
Stored horizontally in a launcher shelter with a movable roof, the missile was erected, fired vertically using rocket boosters to high altitude, and then tipped over into a horizontal Mach 2.5 cruise powered by ramjet engines. This lofted trajectory allowed the missile to operate at a maximum range as great as 430 mi (700 km). Controlled from the ground for most of its flight, when it reached the target area it was commanded to begin a dive, activating an onboard active radar homing seeker for terminal guidance. A radar proximity fuse detonated the warhead, either a large conventional explosive or the W40 nuclear warhead.
The Air Force originally planned for a total of 52 sites covering most of the major cities and industrial regions in the US. The US Army was deploying their own systems at the same time, and the two services fought constantly both in political circles and in the press. Development dragged on, and by the time it was ready for deployment in the late 1950s, the nuclear threat had moved from manned bombers to the intercontinental ballistic missile (ICBM). By this time the Army had successfully deployed the much shorter range Nike Hercules that they claimed filled any possible need through the 1960s, in spite of Air Force claims to the contrary.
As testing continued, the Air Force reduced its plans to sixteen sites, and then again to eight with an additional two sites in Canada. The first US site was declared operational in 1959, but with only a single working missile. Bringing the rest of the missiles into service took years, by which time the system was obsolete. Deactivations began in 1969 and by 1972 all Bomarc sites had been shut down. A small number were used as target drones, and only a few remain on display today.
During World War II, the US Army Air Force (USAAF) concluded that existing anti-aircraft guns, only marginally effective against existing generations of propeller-driven aircraft, would not be effective at all against the emerging jet-powered designs. Like the Germans and British before them, they concluded the only successful defence would be to use guided weapons.
As early as 1944 the US Army started exploring anti-aircraft missiles, examining a variety of concepts. At the time, two basic concepts appeared possible; one would use a short-range rocket that flew directly at the target from below following a course close to the line-of-sight, and the other would fly up to the target's altitude and then tip over and fly horizontally towards the target like a fighter aircraft. As both concepts seemed promising, the Army Air Force was given the task of developing the airplane-like design, while the Army Ordnance Department was given the more ballistic collision-course concept. Official requirements were published in 1945.
Official requirements were published in 1945; Bell Laboratories won the Ordnance contract for a short-range line-of-sight weapon under Project Nike, while a team of players led by Boeing won the contract for a long-range design known as Ground-to-Air Pilotless Aircraft, or GAPA. GAPA moved to the US Air Force when that branch was formed in 1947. In 1946, the USAAF also started two early research projects into anti-missile systems in Project Thumper (MX-795) and Project Wizard (MX-794).
Formally organized in 1946 under USAAF project MX-606, by 1950 Boeing had launched more than 100 test rockets in various configurations, all under the designator XSAM-A-1 GAPA. The tests were very promising, and Boeing received a USAF contract in 1949 to develop a production design under project MX-1599.
The MX-1599 missile was to be a ramjet-powered, nuclear-armed long-range surface-to-air missile to defend the Continental United States from high-flying bombers. The Michigan Aeronautical Research Center (MARC) was added to the project soon afterward, and this gave the new missile its name Bomarc (for Boeing and MARC). In 1951, the USAF decided to emphasize its point of view that missiles were nothing more than pilotless aircraft by assigning aircraft designators to its missile projects, and anti-aircraft missiles received F-for-Fighter designations. The Bomarc became the F-99.
By this time, the Army's Nike project was progressing well, and would enter operational service in 1953. This led the Air Force to begin a lengthy series of attacks on the Army in the press, a common occurrence at the time known as "policy by press release". When the Army released its first official information on Ajax to the press, the Air Force responded by leaking information on BOMARC to Aviation Week, and continued to denigrate Nike in the press over the next few years, in one case showing a graphic of Washington being destroyed by nuclear bombs that Ajax failed to stop.
Tests of the XF-99 test vehicles began in September 1952 and continued through early 1955. The XF-99 tested only the liquid-fueled booster rocket, which would accelerate the missile to ramjet ignition speed. In February 1955, tests of the XF-99A propulsion test vehicles began. These included live ramjets, but still had no guidance system or warhead. The designation YF-99A had been reserved for the operational test vehicles. In August 1955, the USAF discontinued the use of aircraft-like type designators for missiles, and the XF-99A and YF-99A became XIM-99A and YIM-99A, respectively. Originally the USAF had allocated the designation IM-69, but this was changed (possibly at Boeing's request to keep number 99) to IM-99 in October 1955.
By this time, Ajax was widely deployed around the United States and some overseas locations, and the Army was beginning to develop its much more powerful successor, Nike Hercules. Hercules was an existential threat to BOMARC, as its much greater range and nuclear warhead filled many of the roles that BOMARC was designed for. A new round of fighting in the press broke out, capped by an article in the New York Times entitled "Air Force Calls Army Nike Unfit To Guard Nation".
In October 1957, the first YIM-99A production-representative prototype flew with full guidance, and succeeded in passing within destructive range of the target. In late 1957, Boeing received the production contract for the IM-99A Bomarc A, and in September 1959, the first IM-99A squadron became operational.
The IM-99A had an operational radius of 200 miles (320 km) and was designed to fly at Mach 2.5–2.8 at a cruising altitude of 60,000 feet (18,000 m). It was 46.6 ft (14.2 m) long and weighed 15,500 pounds (7,000 kg). Its armament was either a 1,000-pound (450 kg) conventional warhead or a W40 nuclear warhead (7–10 kiloton yield). A liquid-fuel rocket engine boosted the Bomarc to Mach 2, when its Marquardt RJ43-MA-3 ramjet engines, fueled by 80-octane gasoline, would take over for the remainder of the flight. This was the same model of engine used to power the Lockheed X-7, the Lockheed AQM-60 Kingfisher drone used to test air defenses, and the Lockheed D-21 launched from the back of an M-21, although the Bomarc and Kingfisher engines used different materials due to the longer duration of their flights.
The operational IM-99A missiles were based horizontally in semi-hardened shelters, nicknamed "coffins". After the launch order, the shelter's roof would slide open, and the missile would be raised to the vertical. After the missile was supplied with fuel for the booster rocket, it would be launched by the Aerojet General LR59-AJ-13 booster. After sufficient speed was reached, the Marquardt RJ43-MA-3 ramjets would ignite and propel the missile to its cruise speed of Mach 2.8 at an altitude of 66,000 ft (20,000 m).
When the Bomarc was within 10 mi (16 km) of the target, its own Westinghouse AN/DPN-34 radar guided the missile to the interception point. The maximum range of the IM-99A was 250 mi (400 km), and it was fitted with either a conventional high-explosive or a 10 kiloton W-40 nuclear fission warhead.
The Bomarc relied on the Semi-Automatic Ground Environment (SAGE), an automated control system used by NORAD for detecting, tracking and intercepting enemy bomber aircraft. SAGE allowed for remote launching of the Bomarc missiles, which were housed on a constant combat-ready basis in individual launch shelters in remote areas. At the height of the program, there were 14 Bomarc sites located in the US and two in Canada.
The liquid-fuel booster of the Bomarc A had several drawbacks. It took two minutes to fuel before launch, which could be a long time in high-speed intercepts, and its hypergolic propellants (hydrazine and nitric acid) were very dangerous to handle, leading to several serious accidents.
As soon as high-thrust solid-fuel rockets became a reality in the mid-1950s, the USAF began to develop a new solid-fueled Bomarc variant, the IM-99B Bomarc B. It used a Thiokol XM51 booster, and also had improved Marquardt RJ43-MA-7 (and finally the RJ43-MA-11) ramjets. The first IM-99B was launched in May 1959, but problems with the new propulsion system delayed the first fully successful flight until July 1960, when a supersonic MQM-15A Regulus II drone was intercepted. Because the new booster required less space in the missile, more ramjet fuel could be carried, thus increasing the range to 430 mi (700 km). The terminal homing system was also improved, using the world's first pulse Doppler search radar, the Westinghouse AN/DPN-53. All Bomarc Bs were equipped with the W-40 nuclear warhead. In June 1961, the first IM-99B squadron became operational, and Bomarc B quickly replaced most Bomarc A missiles. On 23 March 1961, a Bomarc B successfully intercepted a Regulus II cruise missile flying at 100,000 ft (30,000 m), thus achieving the highest interception in the world up to that date.
Boeing built 570 Bomarc missiles between 1957 and 1964: 269 CIM-10As and 301 CIM-10Bs.
In September 1958 Air Research & Development Command decided to transfer the Bomarc program from its testing at Cape Canaveral Air Force Station to a new facility on Santa Rosa Island, south of Eglin AFB Hurlburt Field on the Gulf of Mexico. To operate the facility and to provide training and operational evaluation in the missile program, Air Defense Command established the 4751st Air Defense Wing (Missile) (4751st ADW) on 15 January 1958. The first launch from Santa Rosa took place on 15 January 1959.
In 1955, to support a program which called for 40 squadrons of BOMARC (120 missiles to a squadron for a total of 4,800 missiles), ADC reached a decision on the location of these 40 squadrons and suggested operational dates for each. The sequence was as follows: ... 1. McGuire 1/60 2. Suffolk 2/60 3. Otis 3/60 4. Dow 4/60 5. Niagara Falls 1/61 6. Plattsburgh 1/61 7. Kinross 2/61 8. K.I. Sawyer 2/61 9. Langley 2/61 10. Truax 3/61 11. Paine 3/61 12. Portland 3/61 ... At the end of 1958, ADC plans called for construction of the following BOMARC bases in the following order: 1. McGuire 2. Suffolk 3. Otis 4. Dow 5. Langley 6. Truax 7. Kinross 8. Duluth 9. Ethan Allen 10. Niagara Falls 11. Paine 12. Adair 13. Travis 14. Vandenberg 15. San Diego 16. Malmstrom 17. Grand Forks 18. Minot 19. Youngstown 20. Seymour-Johnson 21. Bunker Hill 22. Sioux Falls 23. Charleston 24. McConnell 25. Holloman 26. McCoy 27. Amarillo 28. Barksdale 29. Williams.
The first USAF operational Bomarc squadron was the 46th Air Defense Missile Squadron (ADMS), organized on 1 January 1959 and activated on 25 March. The 46th ADMS was assigned to the New York Air Defense Sector at McGuire Air Force Base, New Jersey. The training program, under the 4751st Air Defense Wing, used technicians acting as instructors and was established for a four-month duration. Training included missile maintenance, SAGE operations, and launch procedures, including the launch of an unarmed missile at Eglin. In September 1959 the squadron assembled at their permanent station, the Bomarc site near McGuire AFB, and trained for operational readiness. The first Bomarc-As were deployed at McGuire on 19 September 1959, with Kincheloe AFB getting the first operational IM-99Bs. While several of the squadrons replicated earlier fighter interceptor unit numbers, they were all new organizations with no previous historical counterpart.
ADC's initial plans called for some 52 Bomarc sites around the United States with 120 missiles each, but as defense budgets decreased during the 1950s the number of sites dropped substantially. Ongoing development and reliability problems did not help, nor did Congressional debate over the missile's usefulness and necessity. In June 1959, the Air Force authorized 16 Bomarc sites with 56 missiles each; the initial five would get the IM-99A with the remainder getting the IM-99B. However, in March 1960, HQ USAF cut deployment to eight sites in the United States and two in Canada.
Within a year of operations, a Bomarc A with a nuclear warhead caught fire at McGuire AFB on 7 June 1960 after its on-board helium tank exploded. While the missile's explosives did not detonate, the heat melted the warhead and released plutonium, which the fire crews spread. The Air Force and the Atomic Energy Commission cleaned up the site and covered it with concrete. This was the only major incident involving the weapon system. The site remained in operation for several years following the fire. Since its closure in 1972, the area has remained off limits, primarily due to low levels of plutonium contamination. Between 2002 and 2004, 21,998 cubic yards of contaminated debris and soils were shipped to what was then known as Envirocare, located in Utah.
In 1962, the US Air Force started using modified A-models as drones; following the October 1962 tri-service redesignation of aircraft and weapons systems they became CQM-10As. Otherwise the air defense missile squadrons maintained alert while making regular trips to Santa Rosa Island for training and firing practice. After the inactivation of the 4751st ADW(M) on 1 July 1962 and transfer of Hurlburt to Tactical Air Command for air commando operations the 4751st Air Defense Squadron (Missile) remained at Hurlburt and Santa Rosa Island for training purposes.
In 1964, the liquid-fueled Bomarc-A sites and squadrons began to be deactivated. The sites at Dow and Suffolk County closed first. The remainder continued to be operational for several more years while the government started dismantling the air defense missile network. Niagara Falls was the first BOMARC B installation to close, in December 1969; the others remained on alert through 1972. In April 1972, the last Bomarc B in U.S. Air Force service was retired at McGuire, the 46th ADMS was inactivated, and the base was deactivated.
In the era of the intercontinental ballistic missile, the Bomarc, designed to intercept relatively slow manned bombers, had become a useless asset. The remaining Bomarc missiles were used by all armed services as high-speed target drones for tests of other air-defense missiles. The Bomarc A and Bomarc B targets were designated as CQM-10A and CQM-10B, respectively.
Following the accident, the McGuire complex has never been sold or converted to other uses and remains in Air Force ownership, making it the most intact site of the eight in the US. It has been nominated to the National Register of Historic Places. Although a number of IM-99/CIM-10 Bomarcs have been placed on public display, because of concerns about the possible environmental hazards of the thoriated magnesium structure of the airframe several have been removed from public view.
Russ Sneddon, director of the Air Force Armament Museum, Eglin Air Force Base, Florida, provided information about missing CIM-10 exhibit airframe serial 59–2016, one of the museum's original artifacts from its founding in 1975, donated by the 4751st Air Defense Squadron at Hurlburt Field, Eglin Auxiliary Field 9, Eglin AFB. As of December 2006, the missile in question was stored in a secure compound behind the Armaments Museum. In December 2010, the airframe was still on premises, but partly dismantled.
The Bomarc Missile Program was highly controversial in Canada. The Progressive Conservative government of Prime Minister John Diefenbaker initially agreed to deploy the missiles, and shortly thereafter controversially scrapped the Avro Arrow, a supersonic manned interceptor aircraft, arguing that the missile program made the Arrow unnecessary.
Initially, it was unclear whether the missiles would be equipped with nuclear warheads. By 1960 it became known that the missiles were to have a nuclear payload, and a debate ensued about whether Canada should accept nuclear weapons. Ultimately, the Diefenbaker government decided that the Bomarcs should not be equipped with nuclear warheads. The dispute split the Diefenbaker Cabinet, and led to the collapse of the government in 1963. The Official Opposition and Liberal Party leader Lester B. Pearson originally was against nuclear missiles, but reversed his personal position and argued in favour of accepting nuclear warheads. He won the 1963 election, largely on the basis of this issue, and his new Liberal government proceeded to accept nuclear-armed Bomarcs, with the first being deployed on 31 December 1963. When the nuclear warheads were deployed, Pearson's wife, Maryon, resigned her honorary membership in the anti-nuclear weapons group, Voice of Women.
Canadian operational deployment of the Bomarc involved the formation of two specialized Surface/Air Missile squadrons. The first to begin operations was No. 446 SAM Squadron at RCAF Station North Bay, which was the command and control center for both squadrons. With construction of the compound and related facilities completed in 1961, the squadron received its Bomarcs that year, without nuclear warheads. The squadron was fully operational from 31 December 1963, when the nuclear warheads arrived, until it disbanded on 31 March 1972. All the warheads were stored separately, under the control of Detachment 1 of the USAF 425th Munitions Maintenance Squadron at Stewart Air Force Base. During operational service, the Bomarcs were maintained on stand-by on a 24-hour basis, but were never fired, although the squadron test-fired the missiles at Eglin AFB, Florida, on annual winter retreats.
No. 447 SAM Squadron operating out of RCAF Station La Macaza, Quebec, was activated on 15 September 1962 although warheads were not delivered until late 1963. The squadron followed the same operational procedures as No. 446, its sister squadron. With the passage of time the operational capability of the 1950s-era Bomarc system no longer met modern requirements; the Department of National Defence deemed that the Bomarc missile defense was no longer a viable system, and ordered both squadrons to be stood down in 1972. The bunkers and ancillary facilities remain at both former sites.
Locations under construction but not activated. Each site was programmed for 28 IM-99B missiles:
Reference for BOMARC units and locations:
Below is a list of museums or sites which have a Bomarc missile on display:
The Bomarc missile captured the imagination of the American and Canadian popular music industry, giving rise to a pop music group, the Bomarcs (composed mainly of servicemen stationed on a Florida radar site that tracked Bomarcs), a record label, Bomarc Records, and a moderately successful Canadian pop group, The Beau Marks.
Aircraft of comparable role, configuration, and era | [
{
"paragraph_id": 0,
"text": "The Boeing CIM-10 Bomarc (\"Boeing Michigan Aeronautical Research Center\") (IM-99 Weapon System prior to September 1962) was a supersonic ramjet powered long-range surface-to-air missile (SAM) used during the Cold War for the air defense of North America. In addition to being the first operational long-range SAM and the first operational pulse doppler aviation radar, it was the only SAM deployed by the United States Air Force.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Stored horizontally in a launcher shelter with a movable roof, the missile was erected, fired vertically using rocket boosters to high altitude, and then tipped over into a horizontal Mach 2.5 cruise powered by ramjet engines. This lofted trajectory allowed the missile to operate at a maximum range as great as 430 mi (700 km). Controlled from the ground for most of its flight, when it reached the target area it was commanded to begin a dive, activating an onboard active radar homing seeker for terminal guidance. A radar proximity fuse detonated the warhead, either a large conventional explosive or the W40 nuclear warhead.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Air Force originally planned for a total of 52 sites covering most of the major cities and industrial regions in the US. The US Army was deploying their own systems at the same time, and the two services fought constantly both in political circles and in the press. Development dragged on, and by the time it was ready for deployment in the late 1950s, the nuclear threat had moved from manned bombers to the intercontinental ballistic missile (ICBM). By this time the Army had successfully deployed the much shorter range Nike Hercules that they claimed filled any possible need through the 1960s, in spite of Air Force claims to the contrary.",
"title": ""
},
{
"paragraph_id": 3,
"text": "As testing continued, the Air Force reduced its plans to sixteen sites, and then again to eight with an additional two sites in Canada. The first US site was declared operational in 1959, but with only a single working missile. Bringing the rest of the missiles into service took years, by which time the system was obsolete. Deactivations began in 1969 and by 1972 all Bomarc sites had been shut down. A small number were used as target drones, and only a few remain on display today.",
"title": ""
},
{
"paragraph_id": 4,
"text": "During World War II, the US Army Air Force (USAAF) concluded that existing anti-aircraft guns, only marginally effective against existing generations of propeller-driven aircraft, would not be effective at all against the emerging jet-powered designs. Like the Germans and British before them, they concluded the only successful defence would be to use guided weapons.",
"title": "Design and development"
},
{
"paragraph_id": 5,
"text": "As early as 1944 the US Army started exploring anti-aircraft missiles, examining a variety of concepts. At the time, two basic concepts appeared possible; one would use a short-range rocket that flew directly at the target from below following a course close to the line-of-sight, and the other would fly up to the target's altitude and then tip over and fly horizontally towards the target like a fighter aircraft. As both concepts seemed promising, the Army Air Force was given the task of developing the airplane-like design, while the Army Ordnance Department was given the more ballistic collision-course concept. Official requirements were published in 1945.",
"title": "Design and development"
},
{
"paragraph_id": 6,
"text": "Official requirements were published in 1945; Bell Laboratories won the Ordnance contract for a short-range line-of-sight weapon under Project Nike, while a team of players led by Boeing won the contract for a long-range design known as Ground-to-Air Pilotless Aircraft, or GAPA. GAPA moved to the US Air Force when that branch was formed in 1947. In 1946, the USAAF also started two early research projects into anti-missile systems in Project Thumper (MX-795) and Project Wizard (MX-794).",
"title": "Design and development"
},
{
"paragraph_id": 7,
"text": "Formally organized in 1946 under USAAF project MX-606, by 1950 Boeing had launched more than 100 test rockets in various configurations, all under the designator XSAM-A-1 GAPA. The tests were very promising, and Boeing received a USAF contract in 1949 to develop a production design under project MX-1599.",
"title": "Design and development"
},
{
"paragraph_id": 8,
"text": "The MX-1599 missile was to be a ramjet-powered, nuclear-armed long-range surface-to-air missile to defend the Continental United States from high-flying bombers. The Michigan Aerospace Research Center (MARC) was added to the project soon afterward, and this gave the new missile its name Bomarc (for Boeing and MARC). In 1951, the USAF decided to emphasize its point of view that missiles were nothing else than pilotless aircraft by assigning aircraft designators to its missile projects, and anti-aircraft missiles received F-for-Fighter designations. The Bomarc became the F-99.",
"title": "Design and development"
},
{
"paragraph_id": 9,
"text": "By this time, the Army's Nike project was progressing well, and would enter operational service in 1953. This led the Air Force to begin a lengthy series of attacks on the Army in the press, a common occurrence at the time known as \"policy by press release\". When the Army released its first official information on Ajax to the press, the Air Force responded by leaking information on BOMARC to Aviation Week, and continued to denigrate Nike in the press over the next few years, in one case showing a graphic of Washington being destroyed by nuclear bombs that Ajax failed to stop.",
"title": "Design and development"
},
{
"paragraph_id": 10,
"text": "Tests of the XF-99 test vehicles began in September 1952 and continued through early 1955. The XF-99 tested only the liquid-fueled booster rocket, which would accelerate the missile to ramjet ignition speed. In February 1955, tests of the XF-99A propulsion test vehicles began. These included live ramjets, but still had no guidance system or warhead. The designation YF-99A had been reserved for the operational test vehicles. In August 1955, the USAF discontinued the use of aircraft-like type designators for missiles, and the XF-99A and YF-99A became XIM-99A and YIM-99A, respectively. Originally the USAF had allocated the designation IM-69, but this was changed (possibly at Boeing's request to keep number 99) to IM-99 in October 1955.",
"title": "Design and development"
},
{
"paragraph_id": 11,
"text": "By this time, Ajax was widely deployed around the United States and some overseas locations, and the Army was beginning to develop its much more powerful successor, Nike Hercules. Hercules was an existential threat to BOMARC, as its much greater range and nuclear warhead filled many of the roles that BOMARC was designed for. A new round of fighting in the press broke out, capped by an article in the New York Times entitled \"Air Force Calls Army Nike Unfit To Guard Nation\".",
"title": "Design and development"
},
{
"paragraph_id": 12,
"text": "In October 1957, the first YIM-99A production-representative prototype flew with full guidance, and succeeded to pass the target within destructive range. In late 1957, Boeing received the production contract for the IM-99A Bomarc A, and in September 1959, the first IM-99A squadron became operational.",
"title": "Design and development"
},
{
"paragraph_id": 13,
"text": "The IM-99A had an operational radius of 200 miles (320 km) and was designed to fly at Mach 2.5–2.8 at a cruising altitude of 60,000 feet (18,000 m). It was 46.6 ft (14.2 m) long and weighed 15,500 pounds (7,000 kg). Its armament was either a 1,000-pound (450 kg) conventional warhead or a W40 nuclear warhead (7–10 kiloton yield). A liquid-fuel rocket engine boosted the Bomarc to Mach 2, when its Marquardt RJ43-MA-3 ramjet engines, fueled by 80-octane gasoline, would take over for the remainder of the flight. This was the same model of engine used to power the Lockheed X-7, the Lockheed AQM-60 Kingfisher drone used to test air defenses, and the Lockheed D-21 launched from the back of an M-21, although the Bomarc and Kingfisher engines used different materials due to the longer duration of their flights.",
"title": "Design and development"
},
{
"paragraph_id": 14,
"text": "The operational IM-99A missiles were based horizontally in semi-hardened shelters, nicknamed \"coffins\". After the launch order, the shelter's roof would slide open, and the missile raised to the vertical. After the missile was supplied with fuel for the booster rocket, it would be launched by the Aerojet General LR59-AJ-13 booster. After sufficient speed was reached, the Marquardt RJ43-MA-3 ramjets would ignite and propel the missile to its cruise speed of Mach 2.8 at an altitude of 66,000 ft (20,000 m).",
"title": "Design and development"
},
{
"paragraph_id": 15,
"text": "When the Bomarc was within 10 mi (16 km) of the target, its own Westinghouse AN/DPN-34 radar guided the missile to the interception point. The maximum range of the IM-99A was 250 mi (400 km), and it was fitted with either a conventional high-explosive or a 10 kiloton W-40 nuclear fission warhead.",
"title": "Design and development"
},
{
"paragraph_id": 16,
"text": "The Bomarc relied on the Semi-Automatic Ground Environment (SAGE), an automated control system used by NORAD for detecting, tracking and intercepting enemy bomber aircraft. SAGE allowed for remote launching of the Bomarc missiles, which were housed in a constant combat-ready basis in individual launch shelters in remote areas. At the height of the program, there were 14 Bomarc sites located in the US and two in Canada.",
"title": "Design and development"
},
{
"paragraph_id": 17,
"text": "The liquid-fuel booster of the Bomarc A had several drawbacks. It took two minutes to fuel before launch, which could be a long time in high-speed intercepts, and its hypergolic propellants (hydrazine and nitric acid) were very dangerous to handle, leading to several serious accidents.",
"title": "Design and development"
},
{
"paragraph_id": 18,
"text": "As soon as high-thrust solid-fuel rockets became a reality in the mid-1950s, the USAF began to develop a new solid-fueled Bomarc variant, the IM-99B Bomarc B. It used a Thiokol XM51 booster, and also had improved Marquardt RJ43-MA-7 (and finally the RJ43-MA-11) ramjets. The first IM-99B was launched in May 1959, but problems with the new propulsion system delayed the first fully successful flight until July 1960, when a supersonic MQM-15A Regulus II drone was intercepted. Because the new booster required less space in the missile, more ramjet fuel could be carried, thus increasing the range to 430 mi (700 km). The terminal homing system was also improved, using the world's first pulse Doppler search radar, the Westinghouse AN/DPN-53. All Bomarc Bs were equipped with the W-40 nuclear warhead. In June 1961, the first IM-99B squadron became operational, and Bomarc B quickly replaced most Bomarc A missiles. On 23 March 1961, a Bomarc B successfully intercepted a Regulus II cruise missile flying at 100,000 ft (30,000 m), thus achieving the highest interception in the world up to that date.",
"title": "Design and development"
},
{
"paragraph_id": 19,
"text": "Boeing built 570 Bomarc missiles between 1957 and 1964, 269 CIM-10A, 301 CIM-10B.",
"title": "Design and development"
},
{
"paragraph_id": 20,
"text": "In September 1958 Air Research & Development Command decided to transfer the Bomarc program from its testing at Cape Canaveral Air Force Station to a new facility on Santa Rosa Island, south of Eglin AFB Hurlburt Field on the Gulf of Mexico. To operate the facility and to provide training and operational evaluation in the missile program, Air Defense Command established the 4751st Air Defense Wing (Missile) (4751st ADW) on 15 January 1958. The first launch from Santa Rosa took place on 15 January 1959.",
"title": "Design and development"
},
{
"paragraph_id": 21,
"text": "In 1955, to support a program which called for 40 squadrons of BOMARC (120 missiles to a squadron for a total of 4,800 missiles), ADC reached a decision on the location of these 40 squadrons and suggested operational dates for each. The sequence was as follows: ... l. McGuire 1/60 2. Suffolk 2/60 3. Otis 3/60 4. Dow 4/60 5. Niagara Falls 1/61 6. Plattsburgh 1/61 7. Kinross 2/61 8. K.I. Sawyer 2/61 9. Langley 2/61 10. Truax 3/61 11. Paine 3/61 12. Portland 3/61 ... At the end of 1958, ADC plans called for construction of the following BOMARC bases in the following order: l. McGuire 2. Suffolk 3. Otis 4. Dow 5. Langley 6. Truax 7. Kinross 8. Duluth 9. Ethan Allen 10. Niagara Falls 11. Paine 12. Adair 13. Travis 14. Vandenberg 15. San Diego 16. Malmstrom 17. Grand Forks 18. Minot 19. Youngstown 20. Seymour-Johnson 21. Bunker Hill 22. Sioux Falls 23. Charleston 24. McConnell 25. Holloman 26. McCoy 27. Amarillo 28. Barksdale 29. Williams.",
"title": "Operational history"
},
{
"paragraph_id": 22,
"text": "The first USAF operational Bomarc squadron was the 46th Air Defense Missile Squadron (ADMS), organized on 1 January 1959 and activated on 25 March. The 46th ADMS was assigned to the New York Air Defense Sector at McGuire Air Force Base, New Jersey. The training program, under the 4751st Air Defense Wing used technicians acting as instructors and was established for a four-month duration. Training included missile maintenance; SAGE operations and launch procedures, including the launch of an unarmed missile at Eglin. In September 1959 the squadron assembled at their permanent station, the Bomarc site near McGuire AFB, and trained for operational readiness. The first Bomarc-A were used at McGuire on 19 September 1959 with Kincheloe AFB getting the first operational IM-99Bs. While several of the squadrons replicated earlier fighter interceptor unit numbers, they were all new organizations with no previous historical counterpart.",
"title": "Operational history"
},
{
"paragraph_id": 23,
"text": "ADC's initial plans called for some 52 Bomarc sites around the United States with 120 missiles each but as defense budgets decreased during the 1950s the number of sites dropped substantially. Ongoing development and reliability problems didn't help, nor did Congressional debate over the missile's usefulness and necessity. In June 1959, the Air Force authorized 16 Bomarc sites with 56 missiles each; the initial five would get the IM-99A with the remainder getting the IM-99B. However, in March 1960, HQ USAF cut deployment to eight sites in the United States and two in Canada.",
"title": "Operational history"
},
{
"paragraph_id": 24,
"text": "Within a year of operations, a Bomarc A with a nuclear warhead caught fire at McGuire AFB on 7 June 1960 after its on-board helium tank exploded. While the missile's explosives did not detonate, the heat melted the warhead and released plutonium, which the fire crews spread. The Air Force and the Atomic Energy Commission cleaned up the site and covered it with concrete. This was the only major incident involving the weapon system. The site remained in operation for several years following the fire. Since its closure in 1972, the area has remained off limits, primarily due to low levels of plutonium contamination. Between 2002 and 2004, 21,998 cubic yards of contaminated debris and soils were shipped to what was then known as Envirocare, located in Utah.",
"title": "Operational history"
},
{
"paragraph_id": 25,
"text": "In 1962, the US Air Force started using modified A-models as drones; following the October 1962 tri-service redesignation of aircraft and weapons systems they became CQM-10As. Otherwise the air defense missile squadrons maintained alert while making regular trips to Santa Rosa Island for training and firing practice. After the inactivation of the 4751st ADW(M) on 1 July 1962 and transfer of Hurlburt to Tactical Air Command for air commando operations the 4751st Air Defense Squadron (Missile) remained at Hurlburt and Santa Rosa Island for training purposes.",
"title": "Operational history"
},
{
"paragraph_id": 26,
"text": "In 1964, the liquid-fueled Bomarc-A sites and squadrons began to be deactivated. The sites at Dow and Suffolk County closed first. The remainder continued to be operational for several more years while the government started dismantling the air defense missile network. Niagara Falls was the first BOMARC B installation to close, in December 1969; the others remained on alert through 1972. In April 1972, the last Bomarc B in U.S. Air Force service was retired at McGuire and the 46th ADMS inactivated and the base was deactivated.",
"title": "Operational history"
},
{
"paragraph_id": 27,
"text": "In the era of the intercontinental ballistic missiles the Bomarc, designed to intercept relatively slow manned bombers, had become a useless asset. The remaining Bomarc missiles were used by all armed services as high-speed target drones for tests of other air-defense missiles. The Bomarc A and Bomarc B targets were designated as CQM-10A and CQM-10B, respectively.",
"title": "Operational history"
},
{
"paragraph_id": 28,
"text": "Following the accident, the McGuire complex has never been sold or converted to other uses and remains in Air Force ownership, making it the most intact site of the eight in the US. It has been nominated to the National Register of Historic Sites. Although a number of IM-99/CIM-10 Bomarcs have been placed on public display, because of concerns about the possible environmental hazards of the thoriated magnesium structure of the airframe several have been removed from public view.",
"title": "Operational history"
},
{
"paragraph_id": 29,
"text": "Russ Sneddon, director of the Air Force Armament Museum, Eglin Air Force Base, Florida provided information about missing CIM-10 exhibit airframe serial 59–2016, one of the museum's original artifacts from its founding in 1975 and donated by the 4751st Air Defense Squadron at Hurlburt Field, Eglin Auxiliary Field 9, Eglin AFB. As of December 2006, the suspect missile was stored in a secure compound behind the Armaments Museum. In December 2010, the airframe was still on premises, but partly dismantled.",
"title": "Operational history"
},
{
"paragraph_id": 30,
"text": "The Bomarc Missile Program was highly controversial in Canada. The Progressive Conservative government of Prime Minister John Diefenbaker initially agreed to deploy the missiles, and shortly thereafter controversially scrapped the Avro Arrow, a supersonic manned interceptor aircraft, arguing that the missile program made the Arrow unnecessary.",
"title": "Operational history"
},
{
"paragraph_id": 31,
"text": "Initially, it was unclear whether the missiles would be equipped with nuclear warheads. By 1960 it became known that the missiles were to have a nuclear payload, and a debate ensued about whether Canada should accept nuclear weapons. Ultimately, the Diefenbaker government decided that the Bomarcs should not be equipped with nuclear warheads. The dispute split the Diefenbaker Cabinet, and led to the collapse of the government in 1963. The Official Opposition and Liberal Party leader Lester B. Pearson originally was against nuclear missiles, but reversed his personal position and argued in favour of accepting nuclear warheads. He won the 1963 election, largely on the basis of this issue, and his new Liberal government proceeded to accept nuclear-armed Bomarcs, with the first being deployed on 31 December 1963. When the nuclear warheads were deployed, Pearson's wife, Maryon, resigned her honorary membership in the anti-nuclear weapons group, Voice of Women.",
"title": "Operational history"
},
{
"paragraph_id": 32,
"text": "Canadian operational deployment of the Bomarc involved the formation of two specialized Surface/Air Missile squadrons. The first to begin operations was No. 446 SAM Squadron at RCAF Station North Bay, which was the command and control center for both squadrons. With construction of the compound and related facilities completed in 1961, the squadron received its Bomarcs in 1961, without nuclear warheads. The squadron became fully operational from 31 December 1963, when the nuclear warheads arrived, until disbanding on 31 March 1972. All the warheads were stored separately and under control of Detachment 1 of the USAF 425th Munitions Maintenance Squadron at Stewart Air Force Base. During operational service, the Bomarcs were maintained on stand-by, on a 24-hour basis, but were never fired, although the squadron test-fired the missiles at Eglin AFB, Florida on annual winter retreats.",
"title": "Operational history"
},
{
"paragraph_id": 33,
"text": "No. 447 SAM Squadron operating out of RCAF Station La Macaza, Quebec, was activated on 15 September 1962 although warheads were not delivered until late 1963. The squadron followed the same operational procedures as No. 446, its sister squadron. With the passage of time the operational capability of the 1950s-era Bomarc system no longer met modern requirements; the Department of National Defence deemed that the Bomarc missile defense was no longer a viable system, and ordered both squadrons to be stood down in 1972. The bunkers and ancillary facilities remain at both former sites.",
"title": "Operational history"
},
{
"paragraph_id": 34,
"text": "Locations under construction but not activated. Each site was programmed for 28 IM-99B missiles:",
"title": "Operators"
},
{
"paragraph_id": 35,
"text": "Reference for BOMARC units and locations:",
"title": "Operators"
},
{
"paragraph_id": 36,
"text": "Below is a list of museums or sites which have a Bomarc missile on display:",
"title": "Surviving missiles"
},
{
"paragraph_id": 37,
"text": "The Bomarc missile captured the imagination of the American and Canadian popular music industry, giving rise to a pop music group, the Bomarcs (composed mainly of servicemen stationed on a Florida radar site that tracked Bomarcs), a record label, Bomarc Records, and a moderately successful Canadian pop group, The Beau Marks.",
"title": "Impact on popular music"
},
{
"paragraph_id": 38,
"text": "Aircraft of comparable role, configuration, and era",
"title": "See also"
},
{
"paragraph_id": 39,
"text": "",
"title": "See also"
}
] | The Boeing CIM-10 Bomarc was a supersonic ramjet powered long-range surface-to-air missile (SAM) used during the Cold War for the air defense of North America. In addition to being the first operational long-range SAM and the first operational pulse doppler aviation radar, it was the only SAM deployed by the United States Air Force. Stored horizontally in a launcher shelter with a movable roof, the missile was erected, fired vertically using rocket boosters to high altitude, and then tipped over into a horizontal Mach 2.5 cruise powered by ramjet engines. This lofted trajectory allowed the missile to operate at a maximum range as great as 430 mi (700 km). Controlled from the ground for most of its flight, when it reached the target area it was commanded to begin a dive, activating an onboard active radar homing seeker for terminal guidance. A radar proximity fuse detonated the warhead, either a large conventional explosive or the W40 nuclear warhead. The Air Force originally planned for a total of 52 sites covering most of the major cities and industrial regions in the US. The US Army was deploying their own systems at the same time, and the two services fought constantly both in political circles and in the press. Development dragged on, and by the time it was ready for deployment in the late 1950s, the nuclear threat had moved from manned bombers to the intercontinental ballistic missile (ICBM). By this time the Army had successfully deployed the much shorter range Nike Hercules that they claimed filled any possible need through the 1960s, in spite of Air Force claims to the contrary. As testing continued, the Air Force reduced its plans to sixteen sites, and then again to eight with an additional two sites in Canada. The first US site was declared operational in 1959, but with only a single working missile. Bringing the rest of the missiles into service took years, by which time the system was obsolete. Deactivations began in 1969 and by 1972 all Bomarc sites had been shut down. A small number were used as target drones, and only a few remain on display today. | 2002-02-25T15:51:15Z | 2023-12-12T01:43:43Z | [
"Template:Cvt",
"Template:CAN",
"Template:ISBN",
"Template:Refend",
"Template:Citation needed",
"Template:Coord",
"Template:Cite magazine",
"Template:Boeing model numbers",
"Template:Col-begin",
"Template:Cite news",
"Template:USAF system codes",
"Template:Col-end",
"Template:Aircontent",
"Template:Reflist",
"Template:Cite report",
"Template:About",
"Template:Sfn",
"Template:Rp",
"Template:Kml",
"Template:Cite web",
"Template:Webarchive",
"Template:Cite journal",
"Template:Use dmy dates",
"Template:Convert",
"Template:Refbegin",
"Template:US missiles",
"Template:Portal bar",
"Template:Flagicon",
"Template:Flag",
"Template:Cite encyclopedia",
"Template:Cite book",
"Template:USAF missiles",
"Template:USAF fighters",
"Template:Authority control",
"Template:Short description",
"Template:Infobox weapon",
"Template:R",
"Template:Main",
"Template:Col-break",
"Template:Clear",
"Template:Commons category"
] | https://en.wikipedia.org/wiki/CIM-10_Bomarc |
4,132 | Branco River | The Branco River (Portuguese: Rio Branco; Engl: White River) is the principal affluent of the Rio Negro from the north.
The river drains the Guayanan Highlands moist forests ecoregion. It is enriched by many streams from the Tepui highlands which separate Venezuela and Guyana from Brazil. Its two upper main tributaries are the Uraricoera and the Takutu. The latter almost links its sources with those of the Essequibo; during floods headwaters of the Branco and those of the Essequibo are connected, allowing a level of exchange in the aquatic fauna (such as fish) between the two systems.
The Branco flows nearly south, and finds its way into the Negro through several channels and a chain of lagoons similar to those of the latter river. It is 350 miles (560 km) long up to its Uraricoera confluence. It has numerous islands, and, 235 miles (378 km) above its mouth, it is broken by a series of dangerous rapids.
Average, minimum and maximum discharge of the Branco River near its mouth, for the period from 1998 to 2022.
As suggested by its name, the Branco (literally "white" in Portuguese) has whitish water that may appear almost milky due to the inorganic sediments it carries. It is traditionally considered a whitewater river, although the major seasonal fluctuations in its physico-chemical characteristics make classification difficult, and some consider it clearwater. In particular, the river's upper parts at the headwaters are clear and flow through rocky country, suggesting that the sediments mainly originate from the lower parts. Furthermore, its chemistry and color may contradict each other compared to the traditional Amazonian river classifications. The Branco River has pH 6–7 and low levels of dissolved organic carbon.
Alfred Russel Wallace mentioned the coloration in "On the Rio Negro", a paper read at the 13 June 1853 meeting of the Royal Geographical Society, in which he said: "[The Rio Branco] is white to a remarkable degree, its waters being actually milky in appearance". Alexander von Humboldt attributed the color to the presence of silicates in the water, principally mica and talc. There is a visible contrast with the waters of the Rio Negro at the confluence of the two rivers. The Rio Negro is a blackwater river with dark tea-colored acidic water (pH 3.5–4.5) that contains high levels of dissolved organic carbon.
Until approximately 20,000 years ago, the headwaters of the Branco River flowed not into the Amazon but via the Takutu Graben in the Rupununi area of Guyana towards the Caribbean. Currently, in the rainy season, much of the Rupununi area floods, with water draining both to the Amazon (via the Branco River) and to the Essequibo River. | [
{
"paragraph_id": 0,
"text": "The Branco River (Portuguese: Rio Branco; Engl: White River) is the principal affluent of the Rio Negro from the north.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The river drains the Guayanan Highlands moist forests ecoregion. It is enriched by many streams from the Tepui highlands which separate Venezuela and Guyana from Brazil. Its two upper main tributaries are the Uraricoera and the Takutu. The latter almost links its sources with those of the Essequibo; during floods headwaters of the Branco and those of the Essequibo are connected, allowing a level of exchange in the aquatic fauna (such as fish) between the two systems.",
"title": "Basin"
},
{
"paragraph_id": 2,
"text": "The Branco flows nearly south, and finds its way into the Negro through several channels and a chain of lagoons similar to those of the latter river. It is 350 miles (560 km) long, up to its Uraricoera confluence. It has numerous islands, and, 235 miles (378 km) above its mouth, it is broken by a bad series of rapids.",
"title": "Basin"
},
{
"paragraph_id": 3,
"text": "Average, minimum and maximum discharge of the Branco River at near mouth. Period from 1998 to 2022.",
"title": "Discharge"
},
{
"paragraph_id": 4,
"text": "As suggested by its name, the Branco (literally \"white\" in Portuguese) has whitish water that may appear almost milky due to the inorganic sediments it carries. It is traditionally considered a whitewater river, although the major seasonal fluctuations in its physico-chemical characteristics makes a classification difficult and some consider it clearwater. Especially the river's upper parts at the headwaters are clear and flow through rocky country, leading to the suggestion that sediments mainly originate from the lower parts. Furthermore, its chemistry and color may contradict each other compared to the traditional Amazonian river classifications. The Branco River has pH 6–7 and low levels of dissolved organic carbon.",
"title": "Water chemistry"
},
{
"paragraph_id": 5,
"text": "Alfred Russel Wallace mentioned the coloration in \"On the Rio Negro\", a paper read at the 13 June 1853 meeting of the Royal Geographical Society, in which he said: \"[The Rio Branco] is white to a remarkable degree, its waters being actually milky in appearance\". Alexander von Humboldt attributed the color to the presence of silicates in the water, principally mica and talc. There is a visible contrast with the waters of the Rio Negro at the confluence of the two rivers. The Rio Negro is a blackwater river with dark tea-colored acidic water (pH 3.5–4.5) that contains high levels of dissolved organic carbon.",
"title": "Water chemistry"
},
{
"paragraph_id": 6,
"text": "Until approximately 20,000 years ago the headwaters of the Branco River flowed not into the Amazon, but via the Takutu Graben in the Rupununi area of Guyana towards the Caribbean. Currently in the rainy season much of the Rupununi area floods, with water draining both to the Amazon (via the Branco River) and the Essequibo River.",
"title": "River capture"
}
] | The Branco River is the principal affluent of the Rio Negro from the north. | 2002-02-25T15:43:11Z | 2023-11-25T00:40:52Z | [
"Template:Citation",
"Template:Cite web",
"Template:Cite journal",
"Template:Commons category",
"Template:-",
"Template:Lang-pt",
"Template:Convert",
"Template:Reflist",
"Template:Cite book",
"Template:About",
"Template:Infobox river"
] | https://en.wikipedia.org/wiki/Branco_River |
4,146 | Bus | A bus (contracted from omnibus, with variants multibus, motorbus, autobus, etc.) is a road vehicle that carries significantly more passengers than an average car or van, but fewer than the average rail transport. It is most commonly used in public transport, but is also in use for charter purposes or through private ownership. Although the average bus carries between 30 and 100 passengers, some buses have a capacity of up to 300 passengers. The most common type is the single-deck rigid bus, with double-decker and articulated buses carrying larger loads, and midibuses and minibuses carrying smaller loads. Coaches are used for longer-distance services. Many types of buses, such as city transit buses and inter-city coaches, charge a fare. Other types, such as elementary or secondary school buses or shuttle buses within a post-secondary education campus, are free. In many jurisdictions, bus drivers require a special large vehicle licence above and beyond a regular driving licence.
Buses may be used for scheduled bus transport, scheduled coach transport, school transport, private hire, or tourism; promotional buses may be used for political campaigns and others are privately operated for a wide range of purposes, including rock and pop band tour vehicles.
Horse-drawn buses were used from the 1820s, followed by steam buses in the 1830s, and electric trolleybuses in 1882. The first internal combustion engine buses, or motor buses, were used in 1895. Recently, interest has been growing in hybrid electric buses, fuel cell buses, and electric buses, as well as buses powered by compressed natural gas or biodiesel. As of the 2010s, bus manufacturing is increasingly globalised, with the same designs appearing around the world.
The word bus is a shortened form of the Latin adjectival form omnibus ("for all"), the dative plural of omnis/omne ("all"). The theoretical full name is the French voiture omnibus ("vehicle for all"). The name originates from a mass-transport service started in 1823 by a French corn-mill owner named Stanislas Baudry in Richebourg, a suburb of Nantes. A by-product of his mill was hot water, and thus next to it he established a spa business. In order to encourage customers, he started a horse-drawn transport service from the city centre of Nantes to his establishment. The first vehicles stopped in front of the shop of a hatter named Omnés, which displayed a large sign inscribed "Omnes Omnibus", a pun on his Latin-sounding surname, omnes being the masculine and feminine nominative, vocative and accusative form of the Latin adjective omnis/-e ("all"), combined with omnibus, the dative plural form meaning "for all", thus giving his shop the name "Omnés for all", or "everything for everyone".
His transport scheme was a huge success, although not in the way he had intended, as most of his passengers did not visit his spa. He turned the transport service into his principal lucrative business venture and closed the mill and spa. Nantes citizens soon gave the nickname "omnibus" to the vehicle. Having invented the successful concept, Baudry moved to Paris and launched the first omnibus service there in April 1828. A similar service was introduced in Manchester in 1824 and in London in 1829.
Regular intercity bus services by steam-powered buses were pioneered in England in the 1830s by Walter Hancock and by associates of Sir Goldsworthy Gurney, among others, running reliable services over road conditions which were too hazardous for horse-drawn transportation.
The first mechanically propelled omnibus appeared on the streets of London on 22 April 1833. Steam carriages were much less likely to overturn; they travelled faster than horse-drawn carriages, were much cheaper to run, and caused much less damage to the road surface due to their wide tyres.
However, the heavy road tolls imposed by the turnpike trusts discouraged steam road vehicles and left the way clear for the horse bus companies, and from 1861 onwards, harsh legislation virtually eliminated mechanically propelled vehicles from the roads of Great Britain for 30 years, the Locomotive Act 1861 imposing restrictive speed limits on "road locomotives" of 5 mph (8.0 km/h) in towns and cities, and 10 mph (16 km/h) in the country.
In parallel to the development of the bus was the invention of the electric trolleybus, typically fed through trolley poles by overhead wires. The Siemens brothers, William in England and Ernst Werner in Germany, collaborated on the development of the trolleybus concept. Sir William first proposed the idea in an article to the Journal of the Society of Arts in 1881 as an "...arrangement by which an ordinary omnibus...would have a suspender thrown at intervals from one side of the street to the other, and two wires hanging from these suspenders; allowing contact rollers to run on these two wires, the current could be conveyed to the tram-car, and back again to the dynamo machine at the station, without the necessity of running upon rails at all."
The first such vehicle, the Electromote, was made by his brother Ernst Werner von Siemens and presented to the public in 1882 in Halensee, Germany. Although this experimental vehicle fulfilled all the technical criteria of a typical trolleybus, it was dismantled in the same year after the demonstration.
Max Schiemann opened a passenger-carrying trolleybus line in 1901 near Dresden, Germany. Although this system operated only until 1904, Schiemann had developed what is now the standard trolleybus current collection system. In the early days, a few other methods of current collection were used. Leeds and Bradford became the first cities to put trolleybuses into service in Great Britain on 20 June 1911.
In Siegerland, Germany, two passenger bus lines ran briefly, but unprofitably, in 1895 using a six-passenger motor carriage developed from the 1893 Benz Viktoria. Another commercial bus line using the same model Benz omnibuses ran for a short time in 1898 in the rural area around Llandudno, Wales.
Germany's Daimler Motors Corporation also produced one of the earliest motor-bus models in 1898, selling a double-decker bus to the Motor Traction Company which was first used on the streets of London on 23 April 1898. The vehicle had a maximum speed of 18 km/h (11.2 mph) and accommodated up to 20 passengers, in an enclosed area below and on an open-air platform above. With the success and popularity of this bus, DMG expanded production, selling more buses to companies in London and, in 1899, to Stockholm and Speyer. Daimler Motors Corporation also entered into a partnership with the British company Milnes and developed a new double-decker in 1902 that became the market standard.
The first mass-produced bus model was the B-type double-decker bus, designed by Frank Searle and operated by the London General Omnibus Company—it entered service in 1910, and almost 3,000 had been built by the end of the decade. Hundreds of them saw military service on the Western Front during the First World War.
The Yellow Coach Manufacturing Company, which rapidly became a major manufacturer of buses in the US, was founded in Chicago in 1923 by John D. Hertz. General Motors purchased a majority stake in 1925 and changed its name to the Yellow Truck and Coach Manufacturing Company. GM purchased the balance of the shares in 1943 to form the GM Truck and Coach Division.
Models expanded in the 20th century, leading to the widespread introduction of the contemporary recognizable form of full-sized buses from the 1950s. The AEC Routemaster, developed in the 1950s, was a pioneering design and remains an icon of London to this day. The innovative design used lightweight aluminium and techniques developed in aircraft production during World War II. As well as a novel weight-saving integral design, it also introduced, for the first time on a bus, independent front suspension, power steering, a fully automatic gearbox, and power-hydraulic braking.
Formats include the single-decker bus, double-decker bus (both usually with a rigid chassis) and articulated bus (or 'bendy-bus'), the prevalence of which varies from country to country. High-capacity bi-articulated buses are also manufactured, as are passenger-carrying trailers—either towed behind a rigid bus (a bus trailer) or hauled as a trailer by a truck (a trailer bus). Smaller midibuses have a lower capacity, and open-top buses are typically used for leisure purposes. In many new fleets, particularly in local transit systems, a shift to low-floor buses is occurring, primarily for easier accessibility. Coaches are designed for longer-distance travel and are typically fitted with individual high-backed reclining seats, seat belts, toilets, and audio-visual entertainment systems, and can operate at higher speeds with more capacity for luggage. Coaches may be single- or double-deckers, articulated, and often include a separate luggage compartment under the passenger floor. Guided buses are fitted with technology to allow them to run in designated guideways, allowing the controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes.
Bus manufacturing may be by a single company (an integral manufacturer), or by one manufacturer's building a bus body over a chassis produced by another manufacturer.
Transit buses used to be mainly high-floor vehicles. However, they are now increasingly of low-floor design, optionally featuring 'kneeling' air suspension, and have ramps to provide access for wheelchair users and people with baby carriages, sometimes as electrically or hydraulically extended under-floor constructs for level access. Prior to more general use of such technology, wheelchair users could only use specialist para-transit mobility buses.
Accessible vehicles also have wider entrances and interior gangways and space for wheelchairs. Interior fittings and destination displays may also be designed to be usable by the visually impaired. Coaches generally use wheelchair lifts instead of low-floor designs. In some countries, vehicles are required to have these features by disability discrimination laws.
Buses were initially configured with an engine in the front and an entrance at the rear. With the transition to one-man operation, many manufacturers moved to mid- or rear-engined designs, with a single door at the front or multiple doors. The move to the low-floor design has all but eliminated the mid-engined design, although some coaches still have mid-mounted engines. Front-engined buses still persist for niche markets such as American school buses, some minibuses, and buses in less developed countries, which may be derived from truck chassis, rather than purpose-built bus designs. Most buses have two axles, while articulated buses have three.
Guidance for guided buses can be mechanical, optical, or electromagnetic. Extensions of the guided technology include the Guided Light Transit and Translohr systems, although these are more often termed 'rubber-tyred trams' as they have limited or no mobility away from their guideways.
Transit buses are normally painted to identify the operator, a route, or a function, or to demarcate low-cost or premium service buses. Liveries may be painted onto the vehicle, applied using adhesive vinyl technologies, or using decals. Vehicles often also carry bus advertising on part or all of their visible surfaces (as mobile billboards). Campaign buses may be decorated with key campaign messages; these can be to promote an event or initiative.
The most common power source since the 1920s has been the diesel engine. Early buses, known as trolleybuses, were powered by electricity supplied from overhead lines. Nowadays, electric buses often carry their own battery, which is sometimes recharged at stops or stations to keep the battery small and lightweight. Currently, interest exists in hybrid electric buses, fuel cell buses, electric buses, and ones powered by compressed natural gas or biodiesel. Gyrobuses, which are powered by the momentum stored by a flywheel, were tried in the 1940s.
United Kingdom and European Union:
United States, Canada and Mexico:
Early bus manufacturing grew out of carriage coach building, and later out of automobile or truck manufacturers. Early buses were merely a bus body fitted to a truck chassis. This body+chassis approach has continued with modern specialist manufacturers, although there also exist integral designs such as the Leyland National where the two are practically inseparable. Specialist builders also exist and concentrate on building buses for special uses or modifying standard buses into specialised products.
Integral designs have the advantages that they have been well-tested for strength and stability, and also are off-the-shelf. However, two incentives cause use of the chassis+body model. First, it allows the buyer and manufacturer both to shop for the best deal for their needs, rather than having to settle on one fixed design—the buyer can choose the body and the chassis separately. Second, over the lifetime of a vehicle (in constant service and heavy traffic), it will likely get minor damage now and again, and being able easily to replace a body panel or window etc. can vastly increase its service life and save the cost and inconvenience of removing it from service.
As with the rest of the automotive industry, into the 20th century, bus manufacturing increasingly became globalized, with manufacturers producing buses far from their intended market to exploit labour and material cost advantages. A typical city bus costs almost US$450,000.
Transit buses, used on public transport bus services, have utilitarian fittings designed for efficient movement of large numbers of people, and often have multiple doors. Coaches are used for longer-distance routes. High-capacity bus rapid transit services may use the bi-articulated bus or tram-style buses such as the Wright StreetCar and the Irisbus Civis.
Bus and coach services often operate to a predetermined published public transport timetable defining the route and the timing, but smaller vehicles may be used on more flexible demand responsive transport services.
Buses play a major part in the tourism industry. Tour buses around the world allow tourists to view local attractions or scenery. These are often open-top buses, but can also be regular buses or coaches.
In local sightseeing, City Sightseeing is the largest operator of local tour buses, operating on a franchised basis all over the world. Specialist tour buses are also often owned and operated by safari parks and other theme parks or resorts. Longer-distance tours are also carried out by bus, either on a turn up and go basis or through a tour operator, and usually allow disembarkation from the bus to allow touring of sites of interest on foot. These may be day trips or longer excursions incorporating hotel stays. Tour buses often carry a tour guide, although the driver or a recorded audio commentary may also perform this function. The tour operator may be a subsidiary of a company that operates buses and coaches for other uses or an independent company that charters buses or coaches. Commuter transport operators may also use their coaches to conduct tours within the target city between the morning and evening commuter transport journey.
Buses and coaches are also a common component of the wider package holiday industry, providing private airport transfers (in addition to general airport buses) and organised tours and day trips for holidaymakers on the package.
Tour buses can also be hired as chartered buses by groups for sightseeing at popular holiday destinations. These private tour buses may offer specific stops, such as all the historical sights, or allow the customers to choose their own itineraries. Tour buses come with professional and informed staff and insurance, and maintain state governed safety standards. Some provide other facilities like entertainment units, luxurious reclining seats, large scenic windows, and even lavatories.
Public long-distance coach networks are also often used as a low-cost method of travel by students or young people travelling the world. Some companies such as Topdeck Travel were set up specifically to use buses to drive the hippie trail or travel to places such as North Africa.
In many tourist or travel destinations, a bus is part of the tourist attraction, such as the North American tourist trolleys, London's AEC Routemaster heritage routes, or the customised buses of Malta, Asia, and the Americas. Another example of tourist stops is the homes of celebrities, such as tours based near Hollywood. There are several such services between 6000 and 7000 Hollywood Boulevard in Los Angeles.
In some countries, particularly the US and Canada, buses used to transport schoolchildren have evolved into a specific design with specified mandatory features. American states have also adopted laws regarding motorist conduct around school buses, including large fines and possibly prison for passing a stopped school bus in the process of loading or offloading children passengers. These school buses may have school bus yellow livery and crossing guards. Other countries may mandate the use of seat belts. As a minimum, many countries require a bus carrying students to display a sign, and may also adopt yellow liveries. Student transport often uses older buses cascaded from service use, retrofitted with more seats or seatbelts. Student transport may be operated by local authorities or private contractors. Schools may also own and operate their own buses for other transport needs, such as class field trips or transport to associated sports, music, or other school events.
Due to the costs involved in owning, operating, and driving buses and coaches, much bus and coach use comes from the private hire of vehicles from charter bus companies, either for a day or two or on a longer contract basis, where the charter company provides the vehicles and qualified drivers.
Charter bus operators may be completely independent businesses, or charter hire may be a subsidiary business of a public transport operator that might maintain a separate fleet or use surplus buses, coaches, and dual-purpose coach-seated buses. Many private taxicab companies also operate larger minibus vehicles to cater for group fares. Companies, private groups, and social clubs may hire buses or coaches as a cost-effective method of transporting a group to an event or site, such as a group meeting, racing event, or organised recreational activity such as a summer camp. Schools often hire charter bus services on a regular basis for transportation of children to and from their homes. Chartered buses are also used by education institutes for transport to conventions, exhibitions, and field trips. Entertainment or event companies may also hire temporary shuttle buses for transport at events such as festivals or conferences. Party buses are used by companies in a similar manner to limousine hire, for luxury private transport to social events or as a touring experience. Sleeper buses are used by bands or other organisations that tour between entertainment venues and require mobile rest and recreation facilities. Some couples hire preserved buses for their wedding transport, instead of the traditional car. Buses are often hired for parades or processions. Victory parades are often held for triumphant sports teams, who often tour their home town or city in an open-top bus. Sports teams may also contract out their transport to a team bus, for travel to away games, to a competition or to a final event. These buses are often specially decorated in a livery matching the team colours. Private companies often contract out private shuttle bus services, for transport of their customers or patrons, such as hotels, amusement parks, university campuses, or private airport transfer services. This shuttle usage can be as transport between locations, or to and from parking lots. High-specification luxury coaches are often chartered by companies for executive or VIP transport. Charter buses may also be used in tourism and for promotion (See Tourism and Promotion sections).
Many organisations with a regular need for group transport, including police forces and not-for-profit, social, or charitable groups, may find it practical or cost-effective to own and operate a bus for their own needs. These are often minibuses for practical, tax and driver licensing reasons, although they can also be full-size buses. Cadet or scout groups or other youth organizations may also own buses. Companies such as railroads, construction contractors, and agricultural firms may own buses to transport employees to and from remote job sites. Specific charities may exist to fund and operate bus transport, usually using specially modified mobility buses or otherwise accessible buses (See Accessibility section). Some use their contributions to buy vehicles and provide volunteer drivers.
Airport operators make use of special airside airport buses for crew and passenger transport in the secure airside parts of an airport. Some public authorities, police forces, and military forces make use of armoured buses where there is a special need to provide increased passenger protection. The United States Secret Service acquired two in 2010 for transporting dignitaries needing special protection. Police departments make use of police buses for a variety of reasons, such as prisoner transport, officer transport, temporary detention facilities, and as command and control vehicles. Some fire departments also use a converted bus as a command post while those in cold climates might retain a bus as a heated shelter at fire scenes. Many are drawn from retired school or service buses.
Buses are often used for advertising, political campaigning, public information campaigns, public relations, or promotional purposes. These may take the form of temporary charter hire of service buses, or the temporary or permanent conversion and operation of buses, usually of second-hand buses. Extreme examples include converting the bus with displays and decorations or awnings and fittings. Interiors may be fitted out for exhibition or information purposes with special equipment or audio visual devices.
Bus advertising takes many forms, often as interior and exterior adverts and all-over advertising liveries. The practice often extends into the exclusive private hire and use of a bus to promote a brand or product, appearing at large public events, or touring busy streets. The bus is sometimes staffed by promotions personnel, giving out free gifts. Campaign buses are often specially decorated for a political campaign or other social awareness information campaign, designed to bring a specific message to different areas, or used to transport campaign personnel to local areas/meetings. Exhibition buses are often sent to public events such as fairs and festivals for purposes such as recruitment campaigns, for example by private companies or the armed forces. Complex urban planning proposals may be organised into a mobile exhibition bus for the purposes of public consultation.
In some sparsely populated areas, it is common to use brucks, buses with a cargo area to transport both passengers and cargo at the same time. They are especially common in the Nordic countries.
Historically, the types and features of buses have developed according to local needs. Buses were fitted with technology appropriate to the local climate or passenger needs, such as air conditioning in Asia, or cycle mounts on North American buses. The bus types in use around the world where there was little mass production were often sourced secondhand from other countries, such as the Malta bus, and buses in use in Africa. Other countries such as Cuba required novel solutions to import restrictions, with the creation of the "camellos" (camel bus), a specially manufactured trailer bus.
After the Second World War, manufacturers in Europe and the Far East, such as Mercedes-Benz and Mitsubishi Fuso, expanded into other continents, influencing the use of buses previously served by local types. Use of buses around the world has also been influenced by colonial associations or political alliances between countries. Several of the Commonwealth nations followed the British lead and sourced buses from British manufacturers, leading to a prevalence of double-decker buses. Several Eastern Bloc countries adopted trolleybus systems, and their manufacturers such as Trolza exported trolleybuses to other friendly states. In the 1930s, Italy designed the world's only triple-decker bus, for the busy route between Rome and Tivoli, which could carry eighty-eight passengers. It was unique not only in being a triple-decker but also in having a separate smoking compartment on the third level.
The buses to be found in countries around the world often reflect the quality of the local road network, with high-floor resilient truck-based designs prevalent in several less developed countries where buses are subject to tough operating conditions. Population density also has a major impact: dense urbanisation such as in Japan and the Far East has led to the adoption of high-capacity long multi-axle buses, often double-deckers, while South America and China are implementing large numbers of articulated buses for bus rapid transit schemes.
Euro Bus Expo is a trade show, which is held biennially at the UK's National Exhibition Centre in Birmingham. As the official show of the Confederation of Passenger Transport, the UK's trade association for the bus, coach and light rail industry, the three-day event offers visitors from Europe and beyond the chance to see and experience the very latest vehicles and product and service innovations right across the industry.
Busworld Kortrijk in Kortrijk, Belgium, is the leading bus trade fair in Europe. It is also held biennially.
Most public or private buses and coaches, once they have reached the end of their service with one or more operators, are sent to the wrecking yard for breaking up for scrap and spare parts. Some buses which are not economical to keep running as service buses are often converted for use other than revenue-earning transport. Much like old cars and trucks, buses often pass through a dealership where they can be bought privately or at auction.
Bus operators often find it economical to convert retired buses to use as permanent training buses for driver training, rather than taking a regular service bus out of use. Some large operators have also converted retired buses into tow bus vehicles, to act as tow trucks. With the outsourcing of maintenance staff and facilities, the increase in company health and safety regulations, and the increasing curb weights of buses, many operators now contract their towing needs to a professional vehicle recovery company.
Some buses that have reached the end of their service but are still in good condition are sent for export to other countries.
Some retired buses have been converted to static or mobile cafés, often using historic buses as a tourist attraction. There are also catering buses: buses converted into a mobile canteen and break room. These are commonly seen at external filming locations to feed the cast and crew, and at other large events to feed staff. Another use is as an emergency vehicle, such as high-capacity ambulance bus or mobile command centre.
Some organisations adapt and operate playbuses or learning buses to provide a playground or learning environments to children who might not have access to proper play areas. An ex-London AEC Routemaster bus has been converted to a mobile theatre and catwalk fashion show.
Some buses meet a destructive end by being entered in banger races or at demolition derbies. A larger number of old retired buses have also been converted into mobile holiday homes and campers.
Rather than being scrapped or converted for other uses, sometimes retired buses are saved for preservation. This can be done by individuals, volunteer preservation groups or charitable trusts, museums, or sometimes by the operators themselves as part of a heritage fleet. These buses often need to be restored to their original condition and will have their livery and other details such as internal notices and rollsigns restored to be authentic to a specific time in the bus's history. Some buses that undergo preservation are rescued from a state of great disrepair, but others enter preservation with very little wrong with them. As with other historic vehicles, many preserved buses, either in a working or static state, form part of the collections of transport museums. Additionally, some buses are preserved so they can appear alongside other period vehicles in television and film. Working buses will often be exhibited at rallies and events, and they are also used as charter buses. While many preserved buses are quite old or even vintage, in some cases relatively new examples of a bus type can enter restoration while in-service examples are still in use by other operators. This often happens when a change in design or operating practice, such as the switch to one-person operation or low-floor technology, renders some buses redundant while still relatively new.
{
"paragraph_id": 0,
"text": "A bus (contracted from omnibus, with variants multibus, motorbus, autobus, etc.) is a road vehicle that carries significantly more passengers than an average car or van, but less than the average rail transport. It is most commonly used in public transport, but is also in use for charter purposes, or through private ownership. Although the average bus carries between 30 and 100 passengers, some buses have a capacity of up to 300 passengers. The most common type is the single-deck rigid bus, with double-decker and articulated buses carrying larger loads, and midibuses and minibuses carrying smaller loads. Coaches are used for longer-distance services. Many types of buses, such as city transit buses and inter-city coaches, charge a fare. Other types, such as elementary or secondary school buses or shuttle buses within a post-secondary education campus, are free. In many jurisdictions, bus drivers require a special large vehicle licence above and beyond a regular driving licence.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Buses may be used for scheduled bus transport, scheduled coach transport, school transport, private hire, or tourism; promotional buses may be used for political campaigns and others are privately operated for a wide range of purposes, including rock and pop band tour vehicles.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Horse-drawn buses were used from the 1820s, followed by steam buses in the 1830s, and electric trolleybuses in 1882. The first internal combustion engine buses, or motor buses, were used in 1895. Recently, interest has been growing in hybrid electric buses, fuel cell buses, and electric buses, as well as buses powered by compressed natural gas or biodiesel. As of the 2010s, bus manufacturing is increasingly globalised, with the same designs appearing around the world.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The word bus is a shortened form of the Latin adjectival form omnibus (\"for all\"), the dative plural of omnis/omne (\"all\"). The theoretical full name is in French voiture omnibus (\"vehicle for all\"). The name originates from a mass-transport service started in 1823 by a French corn-mill owner named Stanislas Baudry [fr] in Richebourg, a suburb of Nantes. A by-product of his mill was hot water, and thus next to it he established a spa business. In order to encourage customers he started a horse-drawn transport service from the city centre of Nantes to his establishment. The first vehicles stopped in front of the shop of a hatter named Omnés, which displayed a large sign inscribed \"Omnes Omnibus\", a pun on his Latin-sounding surname, omnes being the male and female nominative, vocative and accusative form of the Latin adjective omnis/-e (\"all\"), combined with omnibus, the dative plural form meaning \"for all\", thus giving his shop the name \"Omnés for all\", or \"everything for everyone\".",
"title": "Name"
},
{
"paragraph_id": 4,
"text": "His transport scheme was a huge success, although not as he had intended as most of his passengers did not visit his spa. He turned the transport service into his principal lucrative business venture and closed the mill and spa. Nantes citizens soon gave the nickname \"omnibus\" to the vehicle. Having invented the successful concept Baudry moved to Paris and launched the first omnibus service there in April 1828. A similar service was introduced in Manchester in 1824 and in London in 1829.",
"title": "Name"
},
{
"paragraph_id": 5,
"text": "Regular intercity bus services by steam-powered buses were pioneered in England in the 1830s by Walter Hancock and by associates of Sir Goldsworthy Gurney, among others, running reliable services over road conditions which were too hazardous for horse-drawn transportation.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The first mechanically propelled omnibus appeared on the streets of London on 22 April 1833. Steam carriages were much less likely to overturn, they travelled faster than horse-drawn carriages, they were much cheaper to run, and caused much less damage to the road surface due to their wide tyres.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "However, the heavy road tolls imposed by the turnpike trusts discouraged steam road vehicles and left the way clear for the horse bus companies, and from 1861 onwards, harsh legislation virtually eliminated mechanically propelled vehicles from the roads of Great Britain for 30 years, the Locomotive Act 1861 imposing restrictive speed limits on \"road locomotives\" of 5 mph (8.0 km/h) in towns and cities, and 10 mph (16 km/h) in the country.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In parallel to the development of the bus was the invention of the electric trolleybus, typically fed through trolley poles by overhead wires. The Siemens brothers, William in England and Ernst Werner in Germany, collaborated on the development of the trolleybus concept. Sir William first proposed the idea in an article to the Journal of the Society of Arts in 1881 as an \"...arrangement by which an ordinary omnibus...would have a suspender thrown at intervals from one side of the street to the other, and two wires hanging from these suspenders; allowing contact rollers to run on these two wires, the current could be conveyed to the tram-car, and back again to the dynamo machine at the station, without the necessity of running upon rails at all.\"",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The first such vehicle, the Electromote, was made by his brother Ernst Werner von Siemens and presented to the public in 1882 in Halensee, Germany. Although this experimental vehicle fulfilled all the technical criteria of a typical trolleybus, it was dismantled in the same year after the demonstration.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Max Schiemann opened a passenger-carrying trolleybus in 1901 near Dresden, in Germany. Although this system operated only until 1904, Schiemann had developed what is now the standard trolleybus current collection system. In the early days, a few other methods of current collection were used. Leeds and Bradford became the first cities to put trolleybuses into service in Great Britain on 20 June 1911.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In Siegerland, Germany, two passenger bus lines ran briefly, but unprofitably, in 1895 using a six-passenger motor carriage developed from the 1893 Benz Viktoria. Another commercial bus line using the same model Benz omnibuses ran for a short time in 1898 in the rural area around Llandudno, Wales.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Germany's Daimler Motors Corporation also produced one of the earliest motor-bus models in 1898, selling a double-decker bus to the Motor Traction Company which was first used on the streets of London on 23 April 1898. The vehicle had a maximum speed of 18 km/h (11.2 mph) and accommodated up to 20 passengers, in an enclosed area below and on an open-air platform above. With the success and popularity of this bus, DMG expanded production, selling more buses to companies in London and, in 1899, to Stockholm and Speyer. Daimler Motors Corporation also entered into a partnership with the British company Milnes and developed a new double-decker in 1902 that became the market standard.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The first mass-produced bus model was the B-type double-decker bus, designed by Frank Searle and operated by the London General Omnibus Company—it entered service in 1910, and almost 3,000 had been built by the end of the decade. Hundreds of them saw military service on the Western Front during the First World War.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The Yellow Coach Manufacturing Company, which rapidly became a major manufacturer of buses in the US, was founded in Chicago in 1923 by John D. Hertz. General Motors purchased a majority stake in 1925 and changed its name to the Yellow Truck and Coach Manufacturing Company. GM purchased the balance of the shares in 1943 to form the GM Truck and Coach Division.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Models expanded in the 20th century, leading to the widespread introduction of the contemporary recognizable form of full-sized buses from the 1950s. The AEC Routemaster, developed in the 1950s, was a pioneering design and remains an icon of London to this day. The innovative design used lightweight aluminium and techniques developed in aircraft production during World War II. As well as a novel weight-saving integral design, it also introduced for the first time on a bus independent front suspension, power steering, a fully automatic gearbox, and power-hydraulic braking.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Formats include single-decker bus, double-decker bus (both usually with a rigid chassis) and articulated bus (or 'bendy-bus') the prevalence of which varies from country to country. High-capacity bi-articulated buses are also manufactured, and passenger-carrying trailers—either towed behind a rigid bus (a bus trailer) or hauled as a trailer by a truck (a trailer bus). Smaller midibuses have a lower capacity and open-top buses are typically used for leisure purposes. In many new fleets, particularly in local transit systems, a shift to low-floor buses is occurring, primarily for easier accessibility. Coaches are designed for longer-distance travel and are typically fitted with individual high-backed reclining seats, seat belts, toilets, and audio-visual entertainment systems, and can operate at higher speeds with more capacity for luggage. Coaches may be single- or double-deckers, articulated, and often include a separate luggage compartment under the passenger floor. Guided buses are fitted with technology to allow them to run in designated guideways, allowing the controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes.",
"title": "Types"
},
{
"paragraph_id": 17,
"text": "Bus manufacturing may be by a single company (an integral manufacturer), or by one manufacturer's building a bus body over a chassis produced by another manufacturer.",
"title": "Types"
},
{
"paragraph_id": 18,
"text": "Transit buses used to be mainly high-floor vehicles. However, they are now increasingly of low-floor design and optionally also 'kneel' air suspension and have ramps to provide access for wheelchair users and people with baby carriages, sometimes as electrically or hydraulically extended under-floor constructs for level access. Prior to more general use of such technology, these wheelchair users could only use specialist para-transit mobility buses.",
"title": "Design"
},
{
"paragraph_id": 19,
"text": "Accessible vehicles also have wider entrances and interior gangways and space for wheelchairs. Interior fittings and destination displays may also be designed to be usable by the visually impaired. Coaches generally use wheelchair lifts instead of low-floor designs. In some countries, vehicles are required to have these features by disability discrimination laws.",
"title": "Design"
},
{
"paragraph_id": 20,
"text": "Buses were initially configured with an engine in the front and an entrance at the rear. With the transition to one-man operation, many manufacturers moved to mid- or rear-engined designs, with a single door at the front or multiple doors. The move to the low-floor design has all but eliminated the mid-engined design, although some coaches still have mid-mounted engines. Front-engined buses still persist for niche markets such as American school buses, some minibuses, and buses in less developed countries, which may be derived from truck chassis, rather than purpose-built bus designs. Most buses have two axles, while articulated buses have three.",
"title": "Design"
},
{
"paragraph_id": 21,
"text": "Guided buses are fitted with technology to allow them to run in designated guideways, allowing the controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes. Guidance can be mechanical, optical, or electromagnetic. Extensions of the guided technology include the Guided Light Transit and Translohr systems, although these are more often termed 'rubber-tyred trams' as they have limited or no mobility away from their guideways.",
"title": "Design"
},
{
"paragraph_id": 22,
"text": "Transit buses are normally painted to identify the operator or a route, function, or to demarcate low-cost or premium service buses. Liveries may be painted onto the vehicle, applied using adhesive vinyl technologies, or using decals. Vehicles often also carry bus advertising or part or all of their visible surfaces (as mobile billboard). Campaign buses may be decorated with key campaign messages; these can be to promote an event or initiative.",
"title": "Design"
},
{
"paragraph_id": 23,
"text": "The most common power source since the 1920s has been the diesel engine. Early buses, known as trolleybuses, were powered by electricity supplied from overhead lines. Nowadays, electric buses often carry their own battery, which is sometimes recharged on stops/stations to keep the size of the battery small/lightweight. Currently, interest exists in hybrid electric buses, fuel cell buses, electric buses, and ones powered by compressed natural gas or biodiesel. Gyrobuses, which are powered by the momentum stored by a flywheel, were tried in the 1940s.",
"title": "Design"
},
{
"paragraph_id": 24,
"text": "United Kingdom and European Union:",
"title": "Design"
},
{
"paragraph_id": 25,
"text": "United States, Canada and Mexico:",
"title": "Design"
},
{
"paragraph_id": 26,
"text": "Early bus manufacturing grew out of carriage coach building, and later out of automobile or truck manufacturers. Early buses were merely a bus body fitted to a truck chassis. This body+chassis approach has continued with modern specialist manufacturers, although there also exist integral designs such as the Leyland National where the two are practically inseparable. Specialist builders also exist and concentrate on building buses for special uses or modifying standard buses into specialised products.",
"title": "Manufacture"
},
{
"paragraph_id": 27,
"text": "Integral designs have the advantages that they have been well-tested for strength and stability, and also are off-the-shelf. However, two incentives cause use of the chassis+body model. First, it allows the buyer and manufacturer both to shop for the best deal for their needs, rather than having to settle on one fixed design—the buyer can choose the body and the chassis separately. Second, over the lifetime of a vehicle (in constant service and heavy traffic), it will likely get minor damage now and again, and being able easily to replace a body panel or window etc. can vastly increase its service life and save the cost and inconvenience of removing it from service.",
"title": "Manufacture"
},
{
"paragraph_id": 28,
"text": "As with the rest of the automotive industry, into the 20th century, bus manufacturing increasingly became globalized, with manufacturers producing buses far from their intended market to exploit labour and material cost advantages. A typical city bus costs almost US$450,000.",
"title": "Manufacture"
},
{
"paragraph_id": 29,
"text": "Transit buses, used on public transport bus services, have utilitarian fittings designed for efficient movement of large numbers of people, and often have multiple doors. Coaches are used for longer-distance routes. High-capacity bus rapid transit services may use the bi-articulated bus or tram-style buses such as the Wright StreetCar and the Irisbus Civis.",
"title": "Uses"
},
{
"paragraph_id": 30,
"text": "Buses and coach services often operate to a predetermined published public transport timetable defining the route and the timing, but smaller vehicles may be used on more flexible demand responsive transport services.",
"title": "Uses"
},
{
"paragraph_id": 31,
"text": "Buses play a major part in the tourism industry. Tour buses around the world allow tourists to view local attractions or scenery. These are often open-top buses, but can also be regular buses or coaches.",
"title": "Uses"
},
{
"paragraph_id": 32,
"text": "In local sightseeing, City Sightseeing is the largest operator of local tour buses, operating on a franchised basis all over the world. Specialist tour buses are also often owned and operated by safari parks and other theme parks or resorts. Longer-distance tours are also carried out by bus, either on a turn up and go basis or through a tour operator, and usually allow disembarkation from the bus to allow touring of sites of interest on foot. These may be day trips or longer excursions incorporating hotel stays. Tour buses often carry a tour guide, although the driver or a recorded audio commentary may also perform this function. The tour operator may be a subsidiary of a company that operates buses and coaches for other uses or an independent company that charters buses or coaches. Commuter transport operators may also use their coaches to conduct tours within the target city between the morning and evening commuter transport journey.",
"title": "Uses"
},
{
"paragraph_id": 33,
"text": "Buses and coaches are also a common component of the wider package holiday industry, providing private airport transfers (in addition to general airport buses) and organised tours and day trips for holidaymakers on the package.",
"title": "Uses"
},
{
"paragraph_id": 34,
"text": "Tour buses can also be hired as chartered buses by groups for sightseeing at popular holiday destinations. These private tour buses may offer specific stops, such as all the historical sights, or allow the customers to choose their own itineraries. Tour buses come with professional and informed staff and insurance, and maintain state governed safety standards. Some provide other facilities like entertainment units, luxurious reclining seats, large scenic windows, and even lavatories.",
"title": "Uses"
},
{
"paragraph_id": 35,
"text": "Public long-distance coach networks are also often used as a low-cost method of travel by students or young people travelling the world. Some companies such as Topdeck Travel were set up specifically to use buses to drive the hippie trail or travel to places such as North Africa.",
"title": "Uses"
},
{
"paragraph_id": 36,
"text": "In many tourist or travel destinations, a bus is part of the tourist attraction, such as the North American tourist trolleys, London's AEC Routemaster heritage routes, or the customised buses of Malta, Asia, and the Americas. Another example of tourist stops is the homes of celebrities, such as tours based near Hollywood. There are several such services between 6000 and 7000 Hollywood Boulevard in Los Angeles.",
"title": "Uses"
},
{
"paragraph_id": 37,
"text": "In some countries, particularly the US and Canada, buses used to transport schoolchildren have evolved into a specific design with specified mandatory features. American states have also adopted laws regarding motorist conduct around school buses, including large fines and possibly prison for passing a stopped school bus in the process of loading or offloading children passengers. These school buses may have school bus yellow livery and crossing guards. Other countries may mandate the use of seat belts. As a minimum, many countries require a bus carrying students to display a sign, and may also adopt yellow liveries. Student transport often uses older buses cascaded from service use, retrofitted with more seats or seatbelts. Student transport may be operated by local authorities or private contractors. Schools may also own and operate their own buses for other transport needs, such as class field trips or transport to associated sports, music, or other school events.",
"title": "Uses"
},
{
"paragraph_id": 38,
"text": "Due to the costs involved in owning, operating, and driving buses and coaches, much bus and coach use comes from the private hire of vehicles from charter bus companies, either for a day or two or on a longer contract basis, where the charter company provides the vehicles and qualified drivers.",
"title": "Uses"
},
{
"paragraph_id": 39,
"text": "Charter bus operators may be completely independent businesses, or charter hire may be a subsidiary business of a public transport operator that might maintain a separate fleet or use surplus buses, coaches, and dual-purpose coach-seated buses. Many private taxicab companies also operate larger minibus vehicles to cater for group fares. Companies, private groups, and social clubs may hire buses or coaches as a cost-effective method of transporting a group to an event or site, such as a group meeting, racing event, or organised recreational activity such as a summer camp. Schools often hire charter bus services on regular basis for transportation of children to and from their homes. Chartered buses are also used by education institutes for transport to conventions, exhibitions, and field trips. Entertainment or event companies may also hire temporary shuttles buses for transport at events such as festivals or conferences. Party buses are used by companies in a similar manner to limousine hire, for luxury private transport to social events or as a touring experience. Sleeper buses are used by bands or other organisations that tour between entertainment venues and require mobile rest and recreation facilities. Some couples hire preserved buses for their wedding transport, instead of the traditional car. Buses are often hired for parades or processions. Victory parades are often held for triumphant sports teams, who often tour their home town or city in an open-top bus. Sports teams may also contract out their transport to a team bus, for travel to away games, to a competition or to a final event. These buses are often specially decorated in a livery matching the team colours. Private companies often contract out private shuttle bus services, for transport of their customers or patrons, such as hotels, amusement parks, university campuses, or private airport transfer services. This shuttle usage can be as transport between locations, or to and from parking lots. High specification luxury coaches are often chartered by companies for executive or VIP transport. Charter buses may also be used in tourism and for promotion (See Tourism and Promotion sections).",
"title": "Uses"
},
{
"paragraph_id": 40,
"text": "Many organisations, including the police, not for profit, social or charitable groups with a regular need for group transport may find it practical or cost-effective to own and operate a bus for their own needs. These are often minibuses for practical, tax and driver licensing reasons, although they can also be full-size buses. Cadet or scout groups or other youth organizations may also own buses. Companies such as railroads, construction contractors, and agricultural firms may own buses to transport employees to and from remote job sites. Specific charities may exist to fund and operate bus transport, usually using specially modified mobility buses or otherwise accessible buses (See Accessibility section). Some use their contributions to buy vehicles and provide volunteer drivers.",
"title": "Uses"
},
{
"paragraph_id": 41,
"text": "Airport operators make use of special airside airport buses for crew and passenger transport in the secure airside parts of an airport. Some public authorities, police forces, and military forces make use of armoured buses where there is a special need to provide increased passenger protection. The United States Secret Service acquired two in 2010 for transporting dignitaries needing special protection. Police departments make use of police buses for a variety of reasons, such as prisoner transport, officer transport, temporary detention facilities, and as command and control vehicles. Some fire departments also use a converted bus as a command post while those in cold climates might retain a bus as a heated shelter at fire scenes. Many are drawn from retired school or service buses.",
"title": "Uses"
},
{
"paragraph_id": 42,
"text": "Buses are often used for advertising, political campaigning, public information campaigns, public relations, or promotional purposes. These may take the form of temporary charter hire of service buses, or the temporary or permanent conversion and operation of buses, usually of second-hand buses. Extreme examples include converting the bus with displays and decorations or awnings and fittings. Interiors may be fitted out for exhibition or information purposes with special equipment or audio visual devices.",
"title": "Uses"
},
{
"paragraph_id": 43,
"text": "Bus advertising takes many forms, often as interior and exterior adverts and all-over advertising liveries. The practice often extends into the exclusive private hire and use of a bus to promote a brand or product, appearing at large public events, or touring busy streets. The bus is sometimes staffed by promotions personnel, giving out free gifts. Campaign buses are often specially decorated for a political campaign or other social awareness information campaign, designed to bring a specific message to different areas, or used to transport campaign personnel to local areas/meetings. Exhibition buses are often sent to public events such as fairs and festivals for purposes such as recruitment campaigns, for example by private companies or the armed forces. Complex urban planning proposals may be organised into a mobile exhibition bus for the purposes of public consultation.",
"title": "Uses"
},
{
"paragraph_id": 44,
"text": "In some sparsely populated areas, it is common to use brucks, buses with a cargo area to transport both passengers and cargo at the same time. They are especially common in the Nordic countries.",
"title": "Uses"
},
{
"paragraph_id": 45,
"text": "Historically, the types and features of buses have developed according to local needs. Buses were fitted with technology appropriate to the local climate or passenger needs, such as air conditioning in Asia, or cycle mounts on North American buses. The bus types in use around the world where there was little mass production were often sourced secondhand from other countries, such as the Malta bus, and buses in use in Africa. Other countries such as Cuba required novel solutions to import restrictions, with the creation of the \"camellos\" (camel bus), a specially manufactured trailer bus.",
"title": "Around the world"
},
{
"paragraph_id": 46,
"text": "After the Second World War, manufacturers in Europe and the Far East, such as Mercedes-Benz buses and Mitsubishi Fuso expanded into other continents influencing the use of buses previously served by local types. Use of buses around the world has also been influenced by colonial associations or political alliances between countries. Several of the Commonwealth nations followed the British lead and sourced buses from British manufacturers, leading to a prevalence of double-decker buses. Several Eastern Bloc countries adopted trolleybus systems, and their manufacturers such as Trolza exported trolleybuses to other friendly states. In the 1930s, Italy designed the world's only triple decker bus for the busy route between Rome and Tivoli that could carry eighty-eight passengers. It was unique not only in being a triple decker but having a separate smoking compartment on the third level.",
"title": "Around the world"
},
{
"paragraph_id": 47,
"text": "The buses to be found in countries around the world often reflect the quality of the local road network, with high-floor resilient truck-based designs prevalent in several less developed countries where buses are subject to tough operating conditions. Population density also has a major impact, where dense urbanisation such as in Japan and the far east has led to the adoption of high capacity long multi-axle buses, often double-deckers while South America and China are implementing large numbers of articulated buses for bus rapid transit schemes.",
"title": "Around the world"
},
{
"paragraph_id": 48,
"text": "Euro Bus Expo is a trade show, which is held biennially at the UK's National Exhibition Centre in Birmingham. As the official show of the Confederation of Passenger Transport, the UK's trade association for the bus, coach and light rail industry, the three-day event offers visitors from Europe and beyond the chance to see and experience the very latest vehicles and product and service innovations right across the industry.",
"title": "Around the world"
},
{
"paragraph_id": 49,
"text": "Busworld Kortrijk in Kortrijk, Belgium, is the leading bus trade fair in Europe. It is also held biennially.",
"title": "Around the world"
},
{
"paragraph_id": 50,
"text": "Most public or private buses and coaches, once they have reached the end of their service with one or more operators, are sent to the wrecking yard for breaking up for scrap and spare parts. Some buses which are not economical to keep running as service buses are often converted for use other than revenue-earning transport. Much like old cars and trucks, buses often pass through a dealership where they can be bought privately or at auction.",
"title": "Use of retired buses"
},
{
"paragraph_id": 51,
"text": "Bus operators often find it economical to convert retired buses to use as permanent training buses for driver training, rather than taking a regular service bus out of use. Some large operators have also converted retired buses into tow bus vehicles, to act as tow trucks. With the outsourcing of maintenance staff and facilities, the increase in company health and safety regulations, and the increasing curb weights of buses, many operators now contract their towing needs to a professional vehicle recovery company.",
"title": "Use of retired buses"
},
{
"paragraph_id": 52,
"text": "Some buses that have reached the end of their service that are still in good condition are sent for export to other countries.",
"title": "Use of retired buses"
},
{
"paragraph_id": 53,
"text": "Some retired buses have been converted to static or mobile cafés, often using historic buses as a tourist attraction. There are also catering buses: buses converted into a mobile canteen and break room. These are commonly seen at external filming locations to feed the cast and crew, and at other large events to feed staff. Another use is as an emergency vehicle, such as high-capacity ambulance bus or mobile command centre.",
"title": "Use of retired buses"
},
{
"paragraph_id": 54,
"text": "Some organisations adapt and operate playbuses or learning buses to provide a playground or learning environments to children who might not have access to proper play areas. An ex-London AEC Routemaster bus has been converted to a mobile theatre and catwalk fashion show.",
"title": "Use of retired buses"
},
{
"paragraph_id": 55,
"text": "Some buses meet a destructive end by being entered in banger races or at demolition derbys. A larger number of old retired buses have also been converted into mobile holiday homes and campers.",
"title": "Use of retired buses"
},
{
"paragraph_id": 56,
"text": "Rather than being scrapped or converted for other uses, sometimes retired buses are saved for preservation. This can be done by individuals, volunteer preservation groups or charitable trusts, museums, or sometimes by the operators themselves as part of a heritage fleet. These buses often need to be restored to their original condition and will have their livery and other details such as internal notices and rollsigns restored to be authentic to a specific time in the bus's history. Some buses that undergo preservation are rescued from a state of great disrepair, but others enter preservation with very little wrong with them. As with other historic vehicles, many preserved buses either in a working or static state form part of the collections of transport museums. Additionally, some buses are preserved so they can appear alongside other period vehicles in television and film. Working buses will often be exhibited at rallies and events, and they are also used as charter buses. While many preserved buses are quite old or even vintage, in some cases relatively new examples of a bus type can enter restoration. In-service examples are still in use by other operators. This often happens when a change in design or operating practice, such as the switch to one person operation or low floor technology, renders some buses redundant while still relatively new.",
"title": "Use of retired buses"
}
] | A bus is a road vehicle that carries significantly more passengers than an average car or van, but less than the average rail transport. It is most commonly used in public transport, but is also in use for charter purposes, or through private ownership. Although the average bus carries between 30 and 100 passengers, some buses have a capacity of up to 300 passengers. The most common type is the single-deck rigid bus, with double-decker and articulated buses carrying larger loads, and midibuses and minibuses carrying smaller loads. Coaches are used for longer-distance services. Many types of buses, such as city transit buses and inter-city coaches, charge a fare. Other types, such as elementary or secondary school buses or shuttle buses within a post-secondary education campus, are free. In many jurisdictions, bus drivers require a special large vehicle licence above and beyond a regular driving licence. Buses may be used for scheduled bus transport, scheduled coach transport, school transport, private hire, or tourism; promotional buses may be used for political campaigns and others are privately operated for a wide range of purposes, including rock and pop band tour vehicles. Horse-drawn buses were used from the 1820s, followed by steam buses in the 1830s, and electric trolleybuses in 1882. The first internal combustion engine buses, or motor buses, were used in 1895. Recently, interest has been growing in hybrid electric buses, fuel cell buses, and electric buses, as well as buses powered by compressed natural gas or biodiesel. As of the 2010s, bus manufacturing is increasingly globalised, with the same designs appearing around the world. | 2001-09-07T13:43:53Z | 2023-12-20T14:01:39Z | [
"Template:Ill",
"Template:Cite EB1911",
"Template:Webarchive",
"Template:Wiktionary",
"Template:Pp-vandalism",
"Template:Use dmy dates",
"Template:Cvt",
"Template:Cite dictionary",
"Template:Cite magazine",
"Template:Short description",
"Template:Convert",
"Template:Cite book",
"Template:Commons and category",
"Template:Public transport",
"Template:Other uses",
"Template:Lang",
"Template:Cite news",
"Template:Bus rapid transit",
"Template:Sfn",
"Template:Main",
"Template:Citation needed",
"Template:Clarify",
"Template:Dubious",
"Template:Div col",
"Template:Clear",
"Template:Cite web",
"Template:See also",
"Template:Portal",
"Template:Div col end",
"Template:Reflist",
"Template:Buses",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Bus |
4,147 | Bali | Bali (/ˈbɑːli/; Balinese: ᬩᬮᬶ) is a province of Indonesia and the westernmost of the Lesser Sunda Islands. East of Java and west of Lombok, the province includes the island of Bali and a few smaller offshore islands, notably Nusa Penida, Nusa Lembongan, and Nusa Ceningan to the southeast. The provincial capital, Denpasar, is the most populous city in the Lesser Sunda Islands and the second-largest, after Makassar, in Eastern Indonesia. The upland town of Ubud in Greater Denpasar is considered Bali's cultural centre. The province is Indonesia's main tourist destination, with a significant rise in tourism since the 1980s. Tourism-related business makes up 80% of its economy.
Bali is the only Hindu-majority province in Indonesia, with 86.9% of the population adhering to Balinese Hinduism. It is renowned for its highly developed arts, including traditional and modern dance, sculpture, painting, leather, metalworking, and music. The Indonesian International Film Festival is held every year in Bali. Other international events that have been held in Bali include Miss World 2013, the 2018 Annual Meetings of the International Monetary Fund and the World Bank Group and the 2022 G20 summit. In March 2017, TripAdvisor named Bali as the world's top destination in its Traveller's Choice award, which it also earned in January 2021.
Bali is part of the Coral Triangle, the area with the highest biodiversity of marine species, especially fish and turtles. In this area alone, over 500 reef-building coral species can be found. For comparison, this is about seven times as many as in the entire Caribbean. Bali is the home of the Subak irrigation system, a UNESCO World Heritage Site. It is also home to a unified confederation of kingdoms composed of 10 traditional royal Balinese houses, each house ruling a specific geographic area. The confederation is the successor of the Bali Kingdom. The royal houses are not recognised by the government of Indonesia; however, they originated before Dutch colonisation.
Bali was inhabited around 2000 BC by Austronesian people who migrated originally from the island of Taiwan to Southeast Asia and Oceania through Maritime Southeast Asia. Culturally and linguistically, the Balinese are closely related to the people of the Indonesian archipelago, Malaysia, Brunei, the Philippines, and Oceania. Stone tools dating from this time have been found near the village of Cekik in the island's west.
In ancient Bali, nine Hindu sects existed, the Pasupata, Bhairawa, Siwa Shidanta, Vaishnava, Bodha, Brahma, Resi, Sora and Ganapatya. Each sect revered a specific deity as its personal Godhead.
Inscriptions from 896 and 911 do not mention a king; the first to do so, from 914, names Sri Kesarivarma. They also reveal an independent Bali, with a distinct dialect, where Buddhism and Shaivism were practised simultaneously. Mpu Sindok's great-granddaughter, Mahendradatta (Gunapriyadharmapatni), married the Bali king Udayana Warmadewa (Dharmodayanavarmadeva) around 989, giving birth to Airlangga around 1001. This marriage also brought more Hinduism and Javanese culture to Bali. Princess Sakalendukirana appeared in 1098. Suradhipa reigned from 1115 to 1119, and Jayasakti from 1146 until 1150. Jayapangus appears on inscriptions between 1178 and 1181, while Adikuntiketana and his son Paramesvara appear in 1204.
Balinese culture was strongly influenced by Indian, Chinese, and particularly Hindu culture, beginning around the 1st century AD. The name Bali dwipa ("Bali island") has been discovered from various inscriptions, including the Blanjong pillar inscription written by Sri Kesari Warmadewa in 914 AD and mentioning Walidwipa. It was during this time that the people developed their complex irrigation system subak to grow rice in wet-field cultivation. Some religious and cultural traditions still practised today can be traced to this period.
The Hindu-Buddhist Majapahit Empire (1293–1520 AD) on eastern Java founded a Balinese colony in 1343. The uncle of Hayam Wuruk is mentioned in the charters of 1384–86. Mass Javanese immigration to Bali occurred in the next century when the Majapahit Empire fell in 1520. Bali's government then became an independent collection of Hindu kingdoms, which led to a Balinese national identity and major enhancements in culture, arts, and economy. These kingdoms remained independent for some 386 years, until 1906, when the Dutch subjugated the native population and took control of the island for economic gain.
The first known European contact with Bali is thought to have been made in 1512, when a Portuguese expedition led by Antonio Abreu and Francisco Serrão sighted its northern shores. It was the first of a series of bi-annual fleets to the Moluccas that, throughout the 16th century, travelled along the coasts of the Sunda Islands. Bali was also mapped in 1512, in the chart of Francisco Rodrigues, aboard the expedition. In 1585, a ship foundered off the Bukit Peninsula and left a few Portuguese in the service of Dewa Agung.
In 1597, the Dutch explorer Cornelis de Houtman arrived at Bali, and the Dutch East India Company was established in 1602. The Dutch government expanded its control across the Indonesian archipelago during the second half of the 19th century. Dutch political and economic control over Bali began in the 1840s on the island's north coast when the Dutch pitted various competing Balinese realms against each other. In the late 1890s, struggles between Balinese kingdoms on the island's south were exploited by the Dutch to increase their control.
In June 1860, the Welsh naturalist Alfred Russel Wallace travelled to Bali from Singapore, landing at Buleleng on the north coast of the island. Wallace's trip to Bali was instrumental in helping him devise his Wallace Line theory. The Wallace Line is a faunal boundary that runs through the strait between Bali and Lombok, dividing the fauna of Asia from that of Australasia. In his travel memoir The Malay Archipelago, Wallace wrote of his experience in Bali, with a strong mention of the island's unique irrigation methods:
I was astonished and delighted; as my visit to Java was some years later, I had never beheld so beautiful and well-cultivated a district out of Europe. A slightly undulating plain extends from the seacoast about ten or twelve miles (16 or 19 kilometres) inland, where it is bounded by a fine range of wooded and cultivated hills. Houses and villages, marked out by dense clumps of coconut palms, tamarind and other fruit trees, are dotted about in every direction; while between them extend luxurious rice grounds, watered by an elaborate system of irrigation that would be the pride of the best-cultivated parts of Europe.
The Dutch mounted large naval and ground assaults at the Sanur region in 1906 and were met by thousands of members of the royal family and their followers who, rather than yield to the superior Dutch force, committed ritual suicide (puputan) to avoid the humiliation of surrender. Despite Dutch demands for capitulation, an estimated 200 Balinese killed themselves rather than surrender. In the Dutch intervention in Bali, a similar mass suicide occurred in the face of a Dutch assault in Klungkung. Afterwards, the Dutch governors exercised administrative control over the island, but local control over religion and culture generally remained intact. Dutch rule over Bali came later and was never as well established as in other parts of Indonesia such as Java and Maluku.
In the 1930s, anthropologists Margaret Mead and Gregory Bateson, artists Miguel Covarrubias and Walter Spies, and musicologist Colin McPhee all spent time on the island. Their accounts of the island and its peoples created a western image of Bali as "an enchanted land of aesthetes at peace with themselves and nature". Western tourists began to visit the island. The sensuous image of Bali was enhanced in the West by the quasi-pornographic 1932 documentary Virgins of Bali, about a day in the lives of two teenage Balinese girls whom the film's narrator Deane Dickason notes in the first scene "bathe their shamelessly nude bronze bodies". Under the looser version of the Hays code that existed up to 1934, nudity involving "civilised" (i.e. white) women was banned, but permitted with "uncivilised" (i.e. all non-white) women, a loophole that was exploited by the producers of Virgins of Bali. The film, which mostly consisted of scenes of topless Balinese women, was a great success in 1932 and almost single-handedly made Bali into a popular spot for tourists.
Imperial Japan occupied Bali during World War II. It was not originally a target in their Netherlands East Indies Campaign, but as the airfields on Borneo were inoperative due to heavy rains, the Imperial Japanese Army decided to occupy Bali, which did not suffer from comparable weather. The island had no regular Royal Netherlands East Indies Army (KNIL) troops. There was only a Native Auxiliary Corps Prajoda (Korps Prajoda) consisting of about 600 native soldiers and several Dutch KNIL officers under the command of KNIL Lieutenant Colonel W.P. Roodenburg. On 19 February 1942, the Japanese forces landed near the town of Sanoer (Sanur). The island was quickly captured.
During the Japanese occupation, a Balinese military officer, I Gusti Ngurah Rai, formed a Balinese 'freedom army'. The harshness of Japanese occupation forces made them more resented than the Dutch colonial rulers.
In 1945, Bali was liberated by the British 5th Infantry Division under the command of Major-General Robert Mansergh, who took the Japanese surrender. Once Japanese forces had been repatriated, the island was handed over to the Dutch the following year.
In 1946, the Dutch constituted Bali as one of the 13 administrative districts of the newly proclaimed State of East Indonesia, a rival state to the Republic of Indonesia, which was proclaimed and headed by Sukarno and Hatta. Bali was included in the "Republic of the United States of Indonesia" when the Netherlands recognised Indonesian independence on 29 December 1949. The first governor of Bali, Anak Agung Bagus Suteja, was appointed by President Sukarno in 1958, when Bali became a province.
The 1963 eruption of Mount Agung killed thousands, created economic havoc, and forced the transmigration of many displaced Balinese to other parts of Indonesia. Mirroring the widening of social divisions across Indonesia in the 1950s and early 1960s, Bali saw conflict between supporters of the traditional caste system, and those rejecting this system. Politically, the opposition was represented by supporters of the Indonesian Communist Party (PKI) and the Indonesian Nationalist Party (PNI), with tensions and ill-feeling further increased by the PKI's land reform programmes. A purported coup attempt in Jakarta was averted by forces led by General Suharto.
The army became the dominant power as it instigated a violent anti-communist purge, in which the army blamed the PKI for the coup. Most estimates suggest that at least 500,000 people were killed across Indonesia, with an estimated 80,000 killed in Bali, equivalent to 5% of the island's population. With no Islamic forces involved as in Java and Sumatra, upper-caste PNI landlords led the extermination of PKI members.
As a result of the 1965–66 upheavals, Suharto was able to manoeuvre Sukarno out of the presidency. His "New Order" government re-established relations with Western countries. The pre-war image of Bali as "paradise" was revived in a modern form. The resulting large growth in tourism has led to a dramatic increase in Balinese standards of living and significant foreign exchange earned for the country.
A bombing in 2002 by militant Islamists in the tourist area of Kuta killed 202 people, mostly foreigners. This attack, and another in 2005, severely reduced tourism, producing much economic hardship on the island.
On 27 November 2017, Mount Agung erupted five times, forcing the evacuation of thousands, disrupting air travel and causing extensive environmental damage. Further eruptions occurred between 2018 and 2019.
On 15–16 November 2022, the 2022 G20 Bali summit, the seventeenth meeting of the Group of Twenty (G20), was held in Nusa Dua.
The island of Bali lies 3.2 km (2.0 mi) east of Java, and is approximately 8 degrees south of the equator. Bali and Java are separated by the Bali Strait. East to west, the island is approximately 153 km (95 mi) wide and spans approximately 112 km (70 mi) north to south; administratively it covers 5,780 km² (2,230 sq mi), or 5,577 km² (2,153 sq mi) without Nusa Penida District, which comprises three small islands off the southeast coast of Bali. Its population density was roughly 747 people/km² (1,930 people/sq mi) in 2020.
Bali's central mountains include several peaks over 2,000 metres (6,600 feet) in elevation and active volcanoes such as Mount Batur. The highest is Mount Agung (3,031 m; 9,944 ft), known as the "mother mountain", which is an active volcano rated as one of the world's most likely sites for a massive eruption within the next 100 years. In late 2017 Mount Agung started erupting and large numbers of people were evacuated, temporarily closing the island's airport. The mountains range from the centre to the eastern side of the island, with Mount Agung the easternmost peak. Bali's volcanic nature has contributed to its exceptional fertility and its tall mountain ranges provide the high rainfall that supports the highly productive agriculture sector. South of the mountains is a broad, steadily descending area where most of Bali's large rice crop is grown. The northern side of the mountains slopes more steeply to the sea and is the main coffee-producing area of the island, along with rice, vegetables, and cattle. The longest river, the Ayung, flows approximately 75 km (47 mi) (see List of rivers of Bali).
The island is surrounded by coral reefs. Beaches in the south tend to have white sand while those in the north and west have black sand. Bali has no major waterways, although the Ho River is navigable by small sampan boats. Black sand beaches between Pasut and Klatingdukuh are being developed for tourism, but apart from the seaside temple of Tanah Lot, they are not yet used for significant tourism.
The largest city is the provincial capital, Denpasar, near the southern coast. Its population is around 726,800 (mid 2022). Bali's second-largest city is the old colonial capital, Singaraja, which is located on the north coast and was home to around 150,000 people in 2020. Other important cities include the beach resort Kuta, which is practically part of Denpasar's urban area, and Ubud, the island's cultural centre, situated to the north of Denpasar.
Three small islands lie to the immediate south-east and all are administratively part of the Klungkung regency of Bali: Nusa Penida, Nusa Lembongan and Nusa Ceningan. These islands are separated from Bali by the Badung Strait.
To the east, the Lombok Strait separates Bali from Lombok and marks the biogeographical division between the fauna of the Indomalayan realm and the distinctly different fauna of Australasia. The transition is known as the Wallace Line, named after Alfred Russel Wallace, who first proposed a transition zone between these two major biomes. When sea levels dropped during the Pleistocene ice age, Bali was connected to Java and Sumatra and to the mainland of Asia and shared the Asian fauna, but the deep water of the Lombok Strait continued to keep Lombok Island and the Lesser Sunda archipelago isolated.
Being just 8 degrees south of the equator, Bali has a fairly even climate all year round. Average year-round temperature stands at around 30 °C (86 °F) with a humidity level of about 85%.
Daytime temperatures at low elevations vary between 20 and 33 °C (68 and 91 °F), but the temperatures decrease significantly with increasing elevation.
The west monsoon is in place from approximately October to April, and this can bring significant rain, particularly from December to March. During the rainy season, there are comparatively fewer tourists seen in Bali. During the Easter and Christmas holidays, the weather is very unpredictable. Outside of the monsoon period, humidity is relatively low and any rain is unlikely in lowland areas.
Bali lies just to the west of the Wallace Line, and thus has a fauna that is Asian in character, with very little Australasian influence, and has more in common with Java than with Lombok. An exception is the yellow-crested cockatoo, a member of a primarily Australasian family. There are around 280 species of birds, including the critically endangered Bali myna, which is endemic. Others include barn swallow, black-naped oriole, black racket-tailed treepie, crested serpent-eagle, crested treeswift, dollarbird, Java sparrow, lesser adjutant, long-tailed shrike, milky stork, Pacific swallow, red-rumped swallow, sacred kingfisher, sea eagle, woodswallow, savanna nightjar, stork-billed kingfisher, yellow-vented bulbul and great egret.
Until the early 20th century, Bali was possibly home to several large mammals: banteng, leopard and the endemic Bali tiger. The banteng still occurs in its domestic form, whereas leopards are found only in neighbouring Java, and the Bali tiger is extinct. The last definite record of a tiger on Bali dates from 1937, when one was shot, though the subspecies may have survived until the 1940s or 1950s. Pleistocene and Holocene megafauna included the banteng, rhinoceros, and, based on speculation that its range may have extended to the Wallace Line, the giant tapir.
Squirrels are quite commonly encountered; less often seen is the Asian palm civet, which is also kept in coffee farms to produce kopi luwak. Bats are well represented; perhaps the most famous place to encounter them remains the Goa Lawah (Temple of the Bats), where they are worshipped by the locals and also constitute a tourist attraction. They also occur in other cave temples, for instance at Gangga Beach. Two species of monkey occur. The crab-eating macaque, known locally as "kera", is quite common around human settlements and temples, where it becomes accustomed to being fed by humans, particularly in any of the three "monkey forest" temples, such as the popular one in the Ubud area. They are also quite often kept as pets by locals. The second species, the Javan langur, locally known as "lutung", is endemic to Java and some surrounding islands such as Bali, and is far rarer and more elusive. It occurs in a few places apart from the West Bali National Park. Langurs are born an orange colour, though by their first year they will usually have changed to a more blackish colouration. In Java, however, there is more of a tendency for this species to retain its juvenile orange colour into adulthood, and a mixture of black and orange monkeys can be seen together as a family. Other rarer mammals include the leopard cat, Sunda pangolin and black giant squirrel.
Snakes include the king cobra and reticulated python. The water monitor can grow to at least 1.5 m (4.9 ft) in length and 50 kg (110 lb) and can move quickly.
The rich coral reefs around the coast, particularly around popular diving spots such as Tulamben, Amed, Menjangan or neighbouring Nusa Penida, host a wide range of marine life, for instance hawksbill turtle, giant sunfish, giant manta ray, giant moray eel, bumphead parrotfish, hammerhead shark, reef shark, barracuda, and sea snakes. Dolphins are commonly encountered on the north coast near Singaraja and Lovina.
A team of scientists surveyed 33 sea sites around Bali from 29 April to 11 May 2011. They discovered 952 species of reef fish, of which 8 were new discoveries (at Pemuteran, Gilimanuk, Nusa Dua, Tulamben and Candidasa), and 393 coral species, including two new ones (at Padangbai and between Padangbai and Amed). The average coverage level of healthy coral was 36% (better than in Raja Ampat and Halmahera, at 29%, or in Fakfak and Kaimana, at 25%), with the highest coverage found in Gili Selang and Gili Mimpang in Candidasa, Karangasem Regency.
Among the larger trees the most common are banyan trees, jackfruit, bamboo species and acacia trees, together with endless rows of coconut palms and banana species. Numerous flowers can be seen: hibiscus, frangipani, bougainvillea, poinsettia, oleander, jasmine, water lily, lotus, roses, begonias, orchids and hydrangeas. On higher grounds that receive more moisture, for instance around Kintamani, certain species of fern trees, mushrooms and even pine trees thrive. Rice comes in many varieties. Other plants with agricultural value include: salak, mangosteen, corn, Kintamani orange, coffee and water spinach.
Over-exploitation by the tourist industry has led to 200 out of 400 rivers on the island drying up. Research suggests that the southern part of Bali could face a water shortage. To ease the shortage, the central government planned a water catchment and processing facility at Petanu River in Gianyar, with a capacity of 300 litres of water per second to be channelled to Denpasar, Badung and Gianyar from 2013.
A 2010 Environment Ministry report on its environmental quality index gave Bali a score of 99.65, which was the highest score of Indonesia's 33 provinces. The score considers the level of total suspended solids, dissolved oxygen, and chemical oxygen demand in water.
Erosion at Lebih Beach has seen seven metres (23 feet) of land lost every year. Decades ago, this beach was used for holy pilgrimages with more than 10,000 people, but they have now moved to Masceti Beach.
In 2017, a year when Bali received nearly 5.7 million tourists, government officials declared a "garbage emergency" in response to the covering of a 3.6-mile (5.8 km) stretch of coastline in plastic waste brought in by the tide, amid concerns that the pollution could dissuade visitors from returning. Indonesia is one of the world's worst plastic polluters, with some estimates suggesting the country is the source of around 10 per cent of the world's plastic waste.
In the national legislature, Bali is represented by nine members, with a single electoral district covering the whole province. The Bali Regional People's Representative Council, the provincial legislature, has 55 members. The province's politics has historically been dominated by the Indonesian Democratic Party of Struggle (PDI-P), which has won by far the most votes in every election in Bali since the first free elections in 1999.
The province is divided into eight regencies (kabupaten) and one city (kota). These are listed below with their areas, their populations at the 2010 and 2020 censuses, the official estimates as at mid-2022, and the Human Development Index for each regency and city.
In the 1970s, the Balinese economy was largely agriculture-based in terms of both output and employment. Tourism is now the largest single industry in terms of income, and as a result, Bali is one of Indonesia's wealthiest regions. In 2003, around 80% of Bali's economy was tourism related. By the end of June 2011, the rate of non-performing loans of all banks in Bali was 2.23%, lower than the average of the Indonesian banking industry (about 5%). The economy, however, suffered significantly as a result of the terrorist bombings in 2002 and 2005. The tourism industry has since recovered from these events.
Although tourism produces the GDP's largest output, agriculture is still the island's biggest employer. Fishing also provides a significant number of jobs. Bali is also famous for its artisans who produce a vast array of handicrafts, including batik and ikat cloth and clothing, wooden carvings, stone carvings, painted art and silverware. Notably, individual villages typically adopt a single product, such as wind chimes or wooden furniture.
The Arabica coffee production region is the highland region of Kintamani near Mount Batur. Generally, Balinese coffee is processed using the wet method. This results in a sweet, soft coffee with good consistency. Typical flavours include lemon and other citrus notes. Many coffee farmers in Kintamani are members of a traditional farming system called Subak Abian, which is based on the Hindu philosophy of "Tri Hita Karana". According to this philosophy, the three causes of happiness are good relations with God, other people, and the environment. The Subak Abian system is ideally suited to the production of fair trade and organic coffee production. Arabica coffee from Kintamani is the first product in Indonesia to request a geographical indication.
In 1963 the Bali Beach Hotel in Sanur was built by Sukarno and boosted tourism in Bali. Before the Bali Beach Hotel construction, there were only three significant tourist-class hotels on the island. Construction of hotels and restaurants began to spread throughout Bali. Tourism further increased in Bali after the Ngurah Rai International Airport opened in 1970. The Buleleng regency government encouraged the tourism sector as one of the mainstays for economic progress and social welfare.
The tourism industry is primarily focused in the south, though it is also significant in other parts of the island. The prominent tourist locations are the town of Kuta (with its beach) and its outer suburbs of Legian and Seminyak (once independent townships), the east coast town of Sanur (once the only tourist hub), Ubud towards the centre of the island, and, to the south of the Ngurah Rai International Airport, Jimbaran and the newer developments of Nusa Dua and Pecatu.
The United States government lifted its travel warnings in 2008. The Australian government issued an advisory on Friday, 4 May 2012, lowering the overall level to 'Exercise a high degree of caution'. The Swedish government issued a new warning on Sunday, 10 June 2012, after a tourist died from methanol poisoning. Australia last issued an advisory on Monday, 5 January 2015, due to new terrorist threats.
An offshoot of tourism is the growing real estate industry. Bali's real estate has been rapidly developing in the main tourist areas of Kuta, Legian, Seminyak, and Oberoi. Most recently, high-end 5-star projects are under development on the Bukit Peninsula, on the island's south side. Expensive villas are being developed along the cliff sides of south Bali, with commanding panoramic ocean views. Many individuals and companies, foreign and domestic and particularly from Jakarta, are fairly active, and investment into other areas of the island also continues to grow. Land prices, despite the worldwide economic crisis, have remained stable.
In the last half of 2008, Indonesia's currency dropped approximately 30% against the US dollar, providing many overseas visitors with improved value for their currencies.
Bali's tourism economy survived the Islamist terrorist bombings of 2002 and 2005, and the tourism industry has slowly recovered and surpassed its pre-terrorist bombing levels; the long-term trend has been a steady increase in visitor arrivals. In 2010, Bali received 2.57 million foreign tourists, which surpassed the target of 2.0–2.3 million tourists. The average occupancy of starred hotels achieved 65%, so the island still should be able to accommodate tourists for some years without any addition of new rooms/hotels, although at the peak season some of them are fully booked.
Bali received the Best Island award from Travel and Leisure in 2010. Bali won because of its attractive surroundings (both mountain and coastal areas), diverse tourist attractions, excellent international and local restaurants, and the friendliness of the local people. The Balinese culture and its religion are also considered the main factor of the award. One of the most prestigious events symbolising a strong relationship between a god and its followers is the Kecak dance. According to BBC Travel in 2011, Bali is one of the world's best islands, ranking second after Santorini, Greece.
In 2006, Elizabeth Gilbert's memoir Eat, Pray, Love was published, and in August 2010 it was adapted into the film Eat Pray Love, which was shot at Ubud and Padang-Padang Beach in Bali. Both the book and the film fuelled a boom in tourism in Ubud, the hill town and cultural and tourist centre that was the focus of Gilbert's quest for balance and love through traditional spirituality and healing.
In January 2016, after musician David Bowie died, it was revealed that in his will, Bowie asked for his ashes to be scattered in Bali, conforming to Buddhist rituals. He had visited and performed in several Southeast Asian cities early in his career, including Bangkok and Singapore.
Since 2011, China has displaced Japan as the second-largest supplier of tourists to Bali, while Australia still tops the list; India has also emerged as a growing source of tourists. Chinese tourist numbers increased by 17% in 2011 from 2010 due to the impact of ACFTA and new direct flights to Bali. In January 2012, Chinese tourist numbers increased by 222.18% compared to January 2011, while Japanese tourist numbers declined by 23.54% year on year.
Bali authorities reported the island had 2.88 million foreign tourists and 5 million domestic tourists in 2012, marginally surpassing the expectations of 2.8 million foreign tourists.
Based on a Bank Indonesia survey in May 2013, 34.39 per cent of tourists are upper-middle class, spending between $1,286 and $5,592, a group dominated by visitors from Australia, India, France, China, Germany and the UK. Some Chinese tourists have increased their levels of spending from previous years. 30.26 per cent of tourists are middle class, spending between $662 and $1,285. In 2017, it was expected that Chinese tourists would outnumber Australian tourists.
In January 2020, 10,000 Chinese tourists cancelled trips to Bali due to the COVID-19 pandemic. Because of the COVID-19 pandemic travel restrictions, Bali welcomed only 1.07 million international travelers in 2020, most of them between January and March, a fall of 87% compared to 2019. In the first half of 2021, the island welcomed just 43 international travelers. The pandemic dealt a major blow to Bali's tourism-dependent economy. On 3 February 2022, Bali reopened to foreign tourists for the first time after two years of pandemic closure.
In 2022, Indonesia's Minister of Health, Budi Sadikin, stated that the tourism industry in Bali would be complemented by the medical industry.
At the beginning of 2023, after a series of accidents, the governor of Bali, Wayan Koster, demanded a ban on the use of motorcycles by tourists and proposed cancelling violators' visas. The move sparked widespread outrage on social media.
The Ngurah Rai International Airport is located near Jimbaran, on the isthmus at the southernmost part of the island. Lt. Col. Wisnu Airfield is in northwest Bali.
A coastal road circles the island, and three major two-lane arteries cross the central mountains at passes reaching 1,750 m in height (at Penelokan). The Ngurah Rai Bypass is a four-lane expressway that partly encircles Denpasar. Bali has no railway lines. There is a car ferry between Gilimanuk on the west coast of Bali to Ketapang on Java.
In December 2010, the Government of Indonesia invited investors to build a new Tanah Ampo Cruise Terminal at Karangasem, Bali, with a projected worth of $30 million. On 17 July 2011, the first cruise ship (Sun Princess) anchored about 400 metres (1,300 feet) away from the wharf of Tanah Ampo harbour. The current pier is only 154 metres (505 feet) long but will eventually be extended to 300 to 350 metres (980–1,150 feet) to accommodate international cruise ships. The harbour is safer than the existing facility at Benoa and has a scenic backdrop of east Bali mountains and green rice fields. The tender for improvement was subject to delays, and as of July 2013 the situation was unclear, with cruise line operators complaining and even refusing to use the existing facility at Tanah Ampo.
A memorandum of understanding was signed by two ministers, Bali's governor and the Indonesian Train Company to build 565 kilometres (351 miles) of railway along the coast around the island. As of July 2015, no details of these proposed railways had been released. In 2019, it was reported in Gapura Bali that Wayan Koster, governor of Bali, "is keen to improve Bali's transportation infrastructure and is considering plans to build an electric rail network across the island".
On 16 March 2011, (Tanjung) Benoa port received the "Best Port Welcome 2010" award from London's "Dream World Cruise Destination" magazine. The government plans to expand the role of Benoa port as an export-import port to boost Bali's trade and industry sector. In 2013, the Tourism and Creative Economy Ministry advised that 306 cruise liners were scheduled to visit Indonesia, an increase of 43 per cent compared to the previous year.
In May 2011, an integrated Area Traffic Control System (ATCS) was implemented to reduce traffic jams at four crossing points: Ngurah Rai statue, Dewa Ruci Kuta crossing, Jimbaran crossing and Sanur crossing. ATCS is an integrated system connecting all traffic lights, CCTVs and other traffic signals with a monitoring office at the police headquarters. It has successfully been implemented in other ASEAN countries and will be implemented at other crossings in Bali.
On 21 December 2011, construction started on the Nusa Dua–Benoa–Ngurah Rai International Airport toll road, which also provides a special lane for motorcycles. The project was undertaken by seven state-owned enterprises led by PT Jasa Marga, which holds 60% of the shares. PT Jasa Marga Bali Tol constructed the 9.91-kilometre-long (6.16-mile) toll road (12.7 kilometres (7.89 miles) in total, including the access road). The construction was estimated to cost Rp 2.49 trillion ($273.9 million). The project passes through 2 kilometres (1 mile) of mangrove forest and 2.3 kilometres (1.4 miles) of beach, both within a 5.4-hectare (13-acre) area. The elevated toll road is built over the mangrove forest on 18,000 concrete pillars that occupy two hectares of mangrove forest. This was compensated for by the planting of 300,000 mangrove trees along the road. Also on 21 December 2011, work started on the 450-metre (1,480-foot) Dewa Ruci underpass at the busy Dewa Ruci junction near Bali Kuta Galeria, with an estimated cost of Rp 136 billion ($14.9 million) from the state budget. On 23 September 2013, the Bali Mandara Toll Road was opened, the Dewa Ruci Junction (Simpang Siur) underpass having been opened previously.
To solve chronic traffic problems, the province will also build a toll road connecting Serangan with Tohpati, a toll road connecting Kuta, Denpasar, and Tohpati, and a flyover connecting Kuta and Ngurah Rai Airport.
The population of Bali was 3,890,757 as of the 2010 census, and 4,317,404 at the 2020 census; the official estimate as at mid 2022 was 4,415,100. In 2021, the Indonesian Ministry of Justice estimated that there were 109,801 foreigners living on Bali, with most originating from Russia, the USA, Australia, the UK, Germany, Japan, France, Italy, and the Netherlands.
A DNA study in 2005 by Karafet et al. found that 12% of Balinese Y-chromosomes are of likely Indian origin, while 84% are of likely Austronesian origin, and 2% of likely Melanesian origin.
Pre-modern Bali had four castes, as Jeff Lewis and Belinda Lewis state, but with a "very strong tradition of communal decision-making and interdependence". The four castes have been classified as Sudra (Shudra), Wesia (Vaishyas), Satria (Kshatriyas) and Brahmana (Brahmin).
Nineteenth-century scholars such as Crawfurd and Friederich suggested that the Balinese caste system had Indian origins, but Helen Creese states that scholars such as Brumund, who had visited and stayed on the island of Bali, suggested that their field observations conflicted with the "received understandings concerning its Indian origins". In Bali, the Shudra (locally spelt Soedra) have typically been the temple priests, though depending on the demographics, a temple priest may also be from the other three castes. In most regions, it has been the Shudra who typically make offerings to the gods on behalf of the Hindu devotees, chant prayers, recite meweda (Vedas), and set the course of Balinese temple festivals.
About 86.70% of Bali's population adheres to Balinese Hinduism, formed as a combination of existing local beliefs and Hindu influences from mainland Southeast Asia and South Asia. Minority religions include Islam (10.10%), Christianity (2.50%), and Buddhism (0.68%), as of 2018.
The general beliefs and practices of Agama Hindu Dharma mix ancient traditions and contemporary pressures placed by Indonesian laws that permit only monotheist belief under the national ideology of Pancasila. Traditionally, Hinduism in Indonesia had a pantheon of deities and that tradition of belief continues in practice; further, Hinduism in Indonesia granted freedom and flexibility to Hindus as to when, how and where to pray. However, officially, the Indonesian government considers and advertises Indonesian Hinduism as a monotheistic religion with certain officially recognised beliefs that comply with its national ideology. Indonesian school textbooks describe Hinduism as having one supreme being, Hindus offering three daily mandatory prayers, and Hinduism as having certain common beliefs that in part parallel those of Islam. Scholars contest whether these government-recognised and government-assigned beliefs reflect the traditional beliefs and practices of Hindus in Indonesia before independence from Dutch colonial rule.
Balinese Hinduism has roots in Indian Hinduism and Buddhism, which arrived through Java. Hindu influences reached the Indonesian Archipelago as early as the first century. Historical evidence is unclear about the diffusion process of cultural and spiritual ideas from India. Javanese legends refer to the Saka era, traced to 78 AD. Stories from the Mahabharata epic have been traced in the Indonesian islands to the 1st century; however, the versions mirror those found in the southeast Indian peninsular region (now Tamil Nadu and southern Karnataka and Andhra Pradesh).
The Bali tradition adopted the pre-existing animistic traditions of the indigenous people. This influence strengthened the belief that the gods and goddesses are present in all things. Every element of nature, therefore, possesses its own power, which reflects the power of the gods. A rock, tree, dagger, or woven cloth is a potential home for spirits whose energy can be directed for good or evil. Balinese Hinduism is deeply interwoven with art and ritual. Ritualising states of self-control are a notable feature of religious expression among the people, who for this reason have become famous for their graceful and decorous behaviour.
Apart from the majority of Balinese Hindus, there also exist Chinese immigrants whose traditions have melded with that of the locals. As a result, these Sino-Balinese embrace their original religion, which is a mixture of Buddhism, Christianity, Taoism, and Confucianism, and find a way to harmonise it with the local traditions. Hence, it is not uncommon to find local Sino-Balinese taking part in the local temple's odalan. Moreover, Balinese Hindu priests are invited to perform rites alongside a Chinese priest in the event of the death of a Sino-Balinese. Nevertheless, the Sino-Balinese claim to embrace Buddhism for administrative purposes, such as on their identity cards. The Roman Catholic community has a diocese, the Diocese of Denpasar, which encompasses the province of Bali and West Nusa Tenggara and has its cathedral located in Denpasar.
Balinese and Indonesian are the most widely spoken languages in Bali, and the vast majority of Balinese people are bilingual or trilingual. The most common spoken language around the tourist areas is Indonesian, as many people in the tourist sector are not solely Balinese, but migrants from Java, Lombok, Sumatra, and other parts of Indonesia. The Balinese language is heavily stratified due to the Balinese caste system. Kawi and Sanskrit are also commonly used by some Hindu priests in Bali, as Hindu literature was mostly written in Sanskrit.
English and Chinese are the next most common languages (and the primary foreign languages) of many Balinese, owing to the requirements of the tourism industry, as well as the English-speaking community and huge Chinese-Indonesian population. Other foreign languages, such as Japanese, Korean, French, Russian or German are often used in multilingual signs for foreign tourists.
Bali is renowned for its diverse and sophisticated art forms, such as painting, sculpture, woodcarving, handcrafts, and performing arts. Balinese cuisine is also distinctive, and unlike the rest of Indonesia, pork is commonly found in Balinese dishes such as Babi Guling. Balinese percussion orchestra music, known as gamelan, is highly developed and varied. Balinese performing arts often portray stories from Hindu epics such as the Ramayana but with heavy Balinese influence. Famous Balinese dances include pendet, legong, baris, topeng, barong, gong kebyar, and kecak (the monkey dance). Bali boasts one of the most diverse and innovative performing arts cultures in the world, with paid performances at thousands of temple festivals, private ceremonies, and public shows.
Kaja and kelod are the Balinese equivalents of North and South, which refer to one's orientation between the island's largest mountain Gunung Agung (kaja), and the sea (kelod). In addition to spatial orientation, kaja and kelod have the connotation of good and evil; gods and ancestors are believed to live on the mountain whereas demons live in the sea. Buildings such as temples and residential homes are spatially oriented by having the most sacred spaces closest to the mountain and the unclean places nearest to the sea.
Most temples have an inner courtyard and an outer courtyard which are arranged with the inner courtyard furthest kaja. These spaces serve as performance venues since most Balinese rituals are accompanied by any combination of music, dance, and drama. The performances that take place in the inner courtyard are classified as wali, the most sacred rituals which are offerings exclusively for the gods, while the outer courtyard is where bebali ceremonies are held, which are intended for gods and people. Lastly, performances meant solely for the entertainment of humans take place outside the temple's walls and are called bali-balihan. This three-tiered system of classification was standardised in 1971 by a committee of Balinese officials and artists to better protect the sanctity of the oldest and most sacred Balinese rituals from being performed for a paying audience.
Tourism, Bali's chief industry, has provided the island with a foreign audience that is eager to pay for entertainment, thus creating new performance opportunities and more demand for performers. The impact of tourism is controversial since before it became integrated into the economy, the Balinese performing arts did not exist as a capitalist venture, and were not performed for entertainment outside of their respective ritual context. Since the 1930s sacred rituals such as the barong dance have been performed both in their original contexts, as well as exclusively for paying tourists. This has led to new versions of many of these performances that have developed according to the preferences of foreign audiences; some villages have a barong mask specifically for non-ritual performances and an older mask that is only used for sacred performances.
Throughout the year, there are many festivals celebrated locally or island-wide according to the traditional calendars. The Hindu New Year, Nyepi, is celebrated in the spring by a day of silence. On this day everyone stays at home and tourists are encouraged (or required) to remain in their hotels. On the day before New Year, large and colourful sculptures of Ogoh-ogoh monsters are paraded and burned in the evening to drive away evil spirits. Other festivals throughout the year are specified by the Balinese pawukon calendrical system.
Celebrations are held for many occasions such as a tooth-filing (coming-of-age ritual), cremation or odalan (temple festival). One of the most important concepts that Balinese ceremonies have in common is that of désa kala patra, which refers to how ritual performances must be appropriate in both the specific and general social context. Many ceremonial art forms such as wayang kulit and topeng are highly improvisatory, providing flexibility for the performer to adapt the performance to the current situation. Many celebrations call for a loud, boisterous atmosphere with much activity, and the resulting aesthetic, ramé, is distinctively Balinese. Often two or more gamelan ensembles will be performing well within earshot, and sometimes compete with each other to be heard. Likewise, the audience members talk amongst themselves, get up and walk around, or even cheer on the performance, which adds to the many layers of activity and the liveliness typical of ramé.
Balinese society continues to revolve around each family's ancestral village, to which the cycle of life and religion is closely tied. Coercive aspects of traditional society, such as customary law sanctions imposed by traditional authorities such as village councils (including "kasepekang", or shunning), have risen in importance as a consequence of the democratisation and decentralisation of Indonesia since 1998.
Other than Balinese sacred rituals and festivals, the government presents the Bali Arts Festival to showcase Bali's performing arts and various artworks produced by local talents. It is held once a year, from the second week of June until the end of July. Southeast Asia's biggest annual festival of words and ideas, the Ubud Writers and Readers Festival, is held at Ubud in October, with participation from some of the world's most celebrated writers, artists, thinkers, and performers.
One unusual tradition is the naming of children in Bali. In general, Balinese people name their children depending on the order they are born, and the names are the same for both males and females.
Bali was the host of Miss World 2013 (63rd edition of the Miss World pageant). It was the first time Indonesia hosted an international beauty pageant. In 2022, Bali also co-hosted Miss Grand International 2022 along with Jakarta, West Java, and Banten.
Bali is a major world surfing destination with popular breaks dotted across the southern coastline and around the offshore island of Nusa Lembongan.
As part of the Coral Triangle, Bali, including Nusa Penida, offers a wide range of dive sites with varying types of reefs and tropical aquatic life.
Bali was the host of the 2008 Asian Beach Games. It was the second time Indonesia hosted an Asia-level multi-sport event, after Jakarta held the 1962 Asian Games.
In 2023, Bali was the location for a major eSports event, the Dota 2 Bali Major, the third and final Major of the Dota Pro Circuit season. The event was held at the Ayana Estate and the Champa Garden, and it was the first time that a Dota Pro Circuit Major was held in Indonesia.
In football, Bali is home to Bali United football club, which plays in Liga 1. The team was relocated from Samarinda, East Kalimantan to Gianyar, Bali. Harbiansyah Hanafiah, the main commissioner of Bali United, explained that he changed the name and moved the home base because there was no representative from Bali in the highest football tier in Indonesia. Another reason was that local fans in Samarinda preferred to support Pusamania Borneo F.C. rather than Persisam.
In June 2012, Subak, the irrigation system for paddy fields in Jatiluwih, central Bali, was listed as a UNESCO World Heritage Site. | [
{
"paragraph_id": 0,
"text": "Bali (/ˈbɑːli/; Balinese: ᬩᬮᬶ) is a province of Indonesia and the westernmost of the Lesser Sunda Islands. East of Java and west of Lombok, the province includes the island of Bali and a few smaller offshore islands, notably Nusa Penida, Nusa Lembongan, and Nusa Ceningan to the southeast. The provincial capital, Denpasar, is the most populous city in the Lesser Sunda Islands and the second-largest, after Makassar, in Eastern Indonesia. The upland town of Ubud in Greater Denpasar is considered Bali's cultural centre. The province is Indonesia's main tourist destination, with a significant rise in tourism since the 1980s. Tourism-related business makes up 80% of its economy.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Bali is the only Hindu-majority province in Indonesia, with 86.9% of the population adhering to Balinese Hinduism. It is renowned for its highly developed arts, including traditional and modern dance, sculpture, painting, leather, metalworking, and music. The Indonesian International Film Festival is held every year in Bali. Other international events that have been held in Bali include Miss World 2013, the 2018 Annual Meetings of the International Monetary Fund and the World Bank Group and the 2022 G20 summit. In March 2017, TripAdvisor named Bali as the world's top destination in its Traveller's Choice award, which it also earned in January 2021.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Bali is part of the Coral Triangle, the area with the highest biodiversity of marine species, especially fish and turtles. In this area alone, over 500 reef-building coral species can be found. For comparison, this is about seven times as many as in the entire Caribbean. Bali is the home of the Subak irrigation system, a UNESCO World Heritage Site. It is also home to a unified confederation of kingdoms composed of 10 traditional royal Balinese houses, each house ruling a specific geographic area. The confederation is the successor of the Bali Kingdom. The royal houses are not recognised by the government of Indonesia; however, they originated before Dutch colonisation.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Bali was inhabited around 2000 BC by Austronesian people who migrated originally from the island of Taiwan to Southeast Asia and Oceania through Maritime Southeast Asia. Culturally and linguistically, the Balinese are closely related to the people of the Indonesian archipelago, Malaysia, Brunei, the Philippines, and Oceania. Stone tools dating from this time have been found near the village of Cekik in the island's west.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "In ancient Bali, nine Hindu sects existed, the Pasupata, Bhairawa, Siwa Shidanta, Vaishnava, Bodha, Brahma, Resi, Sora and Ganapatya. Each sect revered a specific deity as its personal Godhead.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Inscriptions from 896 and 911 do not mention a king, until 914, when Sri Kesarivarma is mentioned. They also reveal an independent Bali, with a distinct dialect, where Buddhism and Shaivism were practised simultaneously. Mpu Sindok's great-granddaughter, Mahendradatta (Gunapriyadharmapatni), married the Bali king Udayana Warmadewa (Dharmodayanavarmadeva) around 989, giving birth to Airlangga around 1001. This marriage also brought more Hinduism and Javanese culture to Bali. Princess Sakalendukirana appeared in 1098. Suradhipa reigned from 1115 to 1119, and Jayasakti from 1146 until 1150. Jayapangus appears on inscriptions between 1178 and 1181, while Adikuntiketana and his son Paramesvara in 1204.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Balinese culture was strongly influenced by Indian, Chinese, and particularly Hindu culture, beginning around the 1st century AD. The name Bali dwipa (\"Bali island\") has been discovered from various inscriptions, including the Blanjong pillar inscription written by Sri Kesari Warmadewa in 914 AD and mentioning Walidwipa. It was during this time that the people developed their complex irrigation system subak to grow rice in wet-field cultivation. Some religious and cultural traditions still practised today can be traced to this period.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The Hindu-Buddhist Majapahit Empire (1293–1520 AD) on eastern Java founded a Balinese colony in 1343. The uncle of Hayam Wuruk is mentioned in the charters of 1384–86. Mass Javanese immigration to Bali occurred in the next century when the Majapahit Empire fell in 1520. Bali's government then became an independent collection of Hindu kingdoms which led to a Balinese national identity and major enhancements in culture, arts, and economy. The nation with various kingdoms became independent for up to 386 years until 1906 when the Dutch subjugated and repulsed the natives for economic control and took it over.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The first known European contact with Bali is thought to have been made in 1512, when a Portuguese expedition led by Antonio Abreu and Francisco Serrão sighted its northern shores. It was the first expedition of a series of bi-annual fleets to the Moluccas, that throughout the 16th century travelled along the coasts of the Sunda Islands. Bali was also mapped in 1512, in the chart of Francisco Rodrigues, aboard the expedition. In 1585, a ship foundered off the Bukit Peninsula and left a few Portuguese in the service of Dewa Agung.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In 1597, the Dutch explorer Cornelis de Houtman arrived at Bali, and the Dutch East India Company was established in 1602. The Dutch government expanded its control across the Indonesian archipelago during the second half of the 19th century. Dutch political and economic control over Bali began in the 1840s on the island's north coast when the Dutch pitted various competing Balinese realms against each other. In the late 1890s, struggles between Balinese kingdoms on the island's south were exploited by the Dutch to increase their control.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In June 1860, the famous Welsh naturalist, Alfred Russel Wallace, travelled to Bali from Singapore, landing at Buleleng on the north coast of the island. Wallace's trip to Bali was instrumental in helping him devise his Wallace Line theory. The Wallace Line is a faunal boundary that runs through the strait between Bali and Lombok. It is a boundary between species. In his travel memoir The Malay Archipelago, Wallace wrote of his experience in Bali, which has a strong mention of the unique Balinese irrigation methods:",
"title": "History"
},
{
"paragraph_id": 11,
"text": "I was astonished and delighted; as my visit to Java was some years later, I had never beheld so beautiful and well-cultivated a district out of Europe. A slightly undulating plain extends from the seacoast about ten or twelve miles (16 or 19 kilometres) inland, where it is bounded by a fine range of wooded and cultivated hills. Houses and villages, marked out by dense clumps of coconut palms, tamarind and other fruit trees, are dotted about in every direction; while between them extend luxurious rice grounds, watered by an elaborate system of irrigation that would be the pride of the best-cultivated parts of Europe.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The Dutch mounted large naval and ground assaults at the Sanur region in 1906 and were met by the thousands of members of the royal family and their followers who rather than yield to the superior Dutch force committed ritual suicide (puputan) to avoid the humiliation of surrender. Despite Dutch demands for surrender, an estimated 200 Balinese killed themselves rather than surrender. In the Dutch intervention in Bali, a similar mass suicide occurred in the face of a Dutch assault in Klungkung. Afterwards, the Dutch governours exercised administrative control over the island, but local control over religion and culture generally remained intact. Dutch rule over Bali came later and was never as well established as in other parts of Indonesia such as Java and Maluku.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In the 1930s, anthropologists Margaret Mead and Gregory Bateson, artists Miguel Covarrubias and Walter Spies, and musicologist Colin McPhee all spent time here. Their accounts of the island and its peoples created a western image of Bali as \"an enchanted land of aesthetes at peace with themselves and nature\". Western tourists began to visit the island. The sensuous image of Bali was enhanced in the West by a quasi-pornographic 1932 documentary Virgins of Bali about a day in the lives of two teenage Balinese girls whom the film's narrator Deane Dickason notes in the first scene \"bathe their shamelessly nude bronze bodies\". Under the looser version of the Hays code that existed up to 1934, nudity involving \"civilised\" (i.e. white) women was banned, but permitted with \"uncivilised\" (i.e. all non-white women), a loophole that was exploited by the producers of Virgins of Bali. The film, which mostly consisted of scenes of topless Balinese women was a great success in 1932, and almost single-handedly made Bali into a popular spot for tourists.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Imperial Japan occupied Bali during World War II. It was not originally a target in their Netherlands East Indies Campaign, but as the airfields on Borneo were inoperative due to heavy rains, the Imperial Japanese Army decided to occupy Bali, which did not suffer from comparable weather. The island had no regular Royal Netherlands East Indies Army (KNIL) troops. There was only a Native Auxiliary Corps Prajoda (Korps Prajoda) consisting of about 600 native soldiers and several Dutch KNIL officers under the command of KNIL Lieutenant Colonel W.P. Roodenburg. On 19 February 1942, the Japanese forces landed near the town of Sanoer (Sanur). The island was quickly captured.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "During the Japanese occupation, a Balinese military officer, I Gusti Ngurah Rai, formed a Balinese 'freedom army'. The harshness of Japanese occupation forces made them more resented than the Dutch colonial rulers.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In 1945, Bali was liberated by the British 5th infantry Division under the command of Major-General Robert Mansergh who took the Japanese surrender. Once Japanese forces had been repatriated the island was handed over to the Dutch the following year.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In 1946, the Dutch constituted Bali as one of the 13 administrative districts of the newly proclaimed State of East Indonesia, a rival state to the Republic of Indonesia, which was proclaimed and headed by Sukarno and Hatta. Bali was included in the \"Republic of the United States of Indonesia\" when the Netherlands recognised Indonesian independence on 29 December 1949. The first governor of Bali, Anak Agung Bagus Suteja, was appointed by President Sukarno in 1958, when Bali became a province.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The 1963 eruption of Mount Agung killed thousands, created economic havoc, and forced many displaced Balinese to be transmigrated to other parts of Indonesia. Mirroring the widening of social divisions across Indonesia in the 1950s and early 1960s, Bali saw conflict between supporters of the traditional caste system, and those rejecting this system. Politically, the opposition was represented by supporters of the Indonesian Communist Party (PKI) and the Indonesian Nationalist Party (PNI), with tensions and ill-feeling further increased by the PKI's land reform programmes. A purported coup attempt in Jakarta was averted by forces led by General Suharto.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The army became the dominant power as it instigated a violent anti-communist purge, in which the army blamed the PKI for the coup. Most estimates suggest that at least 500,000 people were killed across Indonesia, with an estimated 80,000 killed in Bali, equivalent to 5% of the island's population. With no Islamic forces involved as in Java and Sumatra, upper-caste PNI landlords led the extermination of PKI members.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "As a result of the 1965–66 upheavals, Suharto was able to manoeuvre Sukarno out of the presidency. His \"New Order\" government re-established relations with Western countries. The pre-War Bali as \"paradise\" was revived in a modern form. The resulting large growth in tourism has led to a dramatic increase in Balinese standards of living and significant foreign exchange earned for the country.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "A bombing in 2002 by militant Islamists in the tourist area of Kuta killed 202 people, mostly foreigners. This attack, and another in 2005, severely reduced tourism, producing much economic hardship on the island.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "On 27 November 2017, Mount Agung erupted five times, causing the evacuation of thousands, disrupting air travel and causing much environmental damage. Further eruptions also occurred between 2018 and 2019.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "On 15–16 November 2022, was held in Nusa Dua the 2022 G20 Bali summit, the seventeenth meeting of Group of Twenty (G20).",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The island of Bali lies 3.2 km (2.0 mi) east of Java, and is approximately 8 degrees south of the equator. Bali and Java are separated by the Bali Strait. East to west, the island is approximately 153 km (95 mi) wide and spans approximately 112 km (70 mi) north to south; administratively it covers 5,780 km (2,230 sq mi), or 5,577 km (2,153 sq mi) without Nusa Penida District, which comprises three small islands off the southeast coast of Bali. Its population density was roughly 747 people/km (1,930 people/sq mi) in 2020.",
"title": "Geography"
},
{
"paragraph_id": 25,
"text": "Bali's central mountains include several peaks over 2,000 metres (6,600 feet) in elevation and active volcanoes such as Mount Batur. The highest is Mount Agung (3,031 m; 9,944 ft), known as the \"mother mountain\", which is an active volcano rated as one of the world's most likely sites for a massive eruption within the next 100 years. In late 2017 Mount Agung started erupting and large numbers of people were evacuated, temporarily closing the island's airport. Mountains range from centre to the eastern side, with Mount Agung the easternmost peak. Bali's volcanic nature has contributed to its exceptional fertility and its tall mountain ranges provide the high rainfall that supports the highly productive agriculture sector. South of the mountains is a broad, steadily descending area where most of Bali's large rice crop is grown. The northern side of the mountains slopes more steeply to the sea and is the main coffee-producing area of the island, along with rice, vegetables, and cattle. The longest river, Ayung River, flows approximately 75 km (47 mi) (see List of rivers of Bali).",
"title": "Geography"
},
{
"paragraph_id": 26,
"text": "The island is surrounded by coral reefs. Beaches in the south tend to have white sand while those in the north and west have black sand. Bali has no major waterways, although the Ho River is navigable by small sampan boats. Black sand beaches between Pasut and Klatingdukuh are being developed for tourism, but apart from the seaside temple of Tanah Lot, they are not yet used for significant tourism.",
"title": "Geography"
},
{
"paragraph_id": 27,
"text": "The largest city is the provincial capital, Denpasar, near the southern coast. Its population is around 726,800 (mid 2022). Bali's second-largest city is the old colonial capital, Singaraja, which is located on the north coast and is home to around 150,000 people in 2020. Other important cities include the beach resort, Kuta, which is practically part of Denpasar's urban area, and Ubud, situated at the north of Denpasar, is the island's cultural centre.",
"title": "Geography"
},
{
"paragraph_id": 28,
"text": "Three small islands lie to the immediate south-east and all are administratively part of the Klungkung regency of Bali: Nusa Penida, Nusa Lembongan and Nusa Ceningan. These islands are separated from Bali by the Badung Strait.",
"title": "Geography"
},
{
"paragraph_id": 29,
"text": "To the east, the Lombok Strait separates Bali from Lombok and marks the biogeographical division between the fauna of the Indomalayan realm and the distinctly different fauna of Australasia. The transition is known as the Wallace Line, named after Alfred Russel Wallace, who first proposed a transition zone between these two major biomes. When sea levels dropped during the Pleistocene ice age, Bali was connected to Java and Sumatra and to the mainland of Asia and shared the Asian fauna, but the deep water of the Lombok Strait continued to keep Lombok Island and the Lesser Sunda archipelago isolated.",
"title": "Geography"
},
{
"paragraph_id": 30,
"text": "Being just 8 degrees south of the equator, Bali has a fairly even climate all year round. Average year-round temperature stands at around 30 °C (86 °F) with a humidity level of about 85%.",
"title": "Geography"
},
{
"paragraph_id": 31,
"text": "Daytime temperatures at low elevations vary between 20 and 33 °C (68 and 91 °F), but the temperatures decrease significantly with increasing elevation.",
"title": "Geography"
},
{
"paragraph_id": 32,
"text": "The west monsoon is in place from approximately October to April, and this can bring significant rain, particularly from December to March. During the rainy season, there are comparatively fewer tourists seen in Bali. During the Easter and Christmas holidays, the weather is very unpredictable. Outside of the monsoon period, humidity is relatively low and any rain is unlikely in lowland areas.",
"title": "Geography"
},
{
"paragraph_id": 33,
"text": "Bali lies just to the west of the Wallace Line, and thus has a fauna that is Asian in character, with very little Australasian influence, and has more in common with Java than with Lombok. An exception is the yellow-crested cockatoo, a member of a primarily Australasian family. There are around 280 species of birds, including the critically endangered Bali myna, which is endemic. Others include barn swallow, black-naped oriole, black racket-tailed treepie, crested serpent-eagle, crested treeswift, dollarbird, Java sparrow, lesser adjutant, long-tailed shrike, milky stork, Pacific swallow, red-rumped swallow, sacred kingfisher, sea eagle, woodswallow, savanna nightjar, stork-billed kingfisher, yellow-vented bulbul and great egret.",
"title": "Ecology"
},
{
"paragraph_id": 34,
"text": "Until the early 20th century, Bali was possibly home to several large mammals: banteng, leopard and the endemic Bali tiger. The banteng still occurs in its domestic form, whereas leopards are found only in neighbouring Java, and the Bali tiger is extinct. The last definite record of a tiger on Bali dates from 1937 when one was shot, though the subspecies may have survived until the 1940s or 1950s. Pleistocene and Holocene megafaunas include banteng and giant tapir (based on speculations that they might have reached up to the Wallace Line), and rhinoceros.",
"title": "Ecology"
},
{
"paragraph_id": 35,
"text": "Squirrels are quite commonly encountered, less often is the Asian palm civet, which is also kept in coffee farms to produce kopi luwak. Bats are well represented, perhaps the most famous place to encounter them remaining is the Goa Lawah (Temple of the Bats) where they are worshipped by the locals and also constitute a tourist attraction. They also occur in other cave temples, for instance at Gangga Beach. Two species of monkey occur. The crab-eating macaque, known locally as \"kera\", is quite common around human settlements and temples, where it becomes accustomed to being fed by humans, particularly in any of the three \"monkey forest\" temples, such as the popular one in the Ubud area. They are also quite often kept as pets by locals. The second monkey, endemic to Java and some surrounding islands such as Bali, is far rarer and more elusive and is the Javan langur, locally known as \"lutung\". They occur in a few places apart from the West Bali National Park. They are born an orange colour, though they would have already changed to a more blackish colouration by their first year. In Java, however, there is more of a tendency for this species to retain its juvenile orange colour into adulthood, and a mixture of black and orange monkeys can be seen together as a family. Other rarer mammals include the leopard cat, Sunda pangolin and black giant squirrel.",
"title": "Ecology"
},
{
"paragraph_id": 36,
"text": "Snakes include the king cobra and reticulated python. The water monitor can grow to at least 1.5 m (4.9 ft) in length and 50 kg (110 lb) and can move quickly.",
"title": "Ecology"
},
{
"paragraph_id": 37,
"text": "The rich coral reefs around the coast, particularly around popular diving spots such as Tulamben, Amed, Menjangan or neighbouring Nusa Penida, host a wide range of marine life, for instance hawksbill turtle, giant sunfish, giant manta ray, giant moray eel, bumphead parrotfish, hammerhead shark, reef shark, barracuda, and sea snakes. Dolphins are commonly encountered on the north coast near Singaraja and Lovina.",
"title": "Ecology"
},
{
"paragraph_id": 38,
"text": "A team of scientists surveyed from 29 April 2011, to 11 May 2011, at 33 sea sites around Bali. They discovered 952 species of reef fish of which 8 were new discoveries at Pemuteran, Gilimanuk, Nusa Dua, Tulamben and Candidasa, and 393 coral species, including two new ones at Padangbai and between Padangbai and Amed. The average coverage level of healthy coral was 36% (better than in Raja Ampat and Halmahera by 29% or in Fakfak and Kaimana by 25%) with the highest coverage found in Gili Selang and Gili Mimpang in Candidasa, Karangasem Regency.",
"title": "Ecology"
},
{
"paragraph_id": 39,
"text": "Among the larger trees the most common are: banyan trees, jackfruit, coconuts, bamboo species, acacia trees and also endless rows of coconuts and banana species. Numerous flowers can be seen: hibiscus, frangipani, bougainvillea, poinsettia, oleander, jasmine, water lily, lotus, roses, begonias, orchids and hydrangeas exist. On higher grounds that receive more moisture, for instance, around Kintamani, certain species of fern trees, mushrooms and even pine trees thrive well. Rice comes in many varieties. Other plants with agricultural value include: salak, mangosteen, corn, Kintamani orange, coffee and water spinach.",
"title": "Ecology"
},
{
"paragraph_id": 40,
"text": "Over-exploitation by the tourist industry has led to 200 out of 400 rivers on the island drying up. Research suggests that the southern part of Bali would face a water shortage. To ease the shortage, the central government plans to build a water catchment and processing facility at Petanu River in Gianyar. The 300 litres capacity of water per second will be channelled to Denpasar, Badung and Gianyar in 2013.",
"title": "Environment"
},
{
"paragraph_id": 41,
"text": "A 2010 Environment Ministry report on its environmental quality index gave Bali a score of 99.65, which was the highest score of Indonesia's 33 provinces. The score considers the level of total suspended solids, dissolved oxygen, and chemical oxygen demand in water.",
"title": "Environment"
},
{
"paragraph_id": 42,
"text": "Erosion at Lebih Beach has seen seven metres (23 feet) of land lost every year. Decades ago, this beach was used for holy pilgrimages with more than 10,000 people, but they have now moved to Masceti Beach.",
"title": "Environment"
},
{
"paragraph_id": 43,
"text": "In 2017, a year when Bali received nearly 5.7 million tourists, government officials declared a \"garbage emergency\" in response to the covering of 3.6-mile stretch of coastline in plastic waste brought in by the tide, amid concerns that the pollution could dissuade visitors from returning. Indonesia is one of the world's worst plastic polluters, with some estimates suggesting the country is the source of around 10 per cent of the world's plastic waste.",
"title": "Environment"
},
{
"paragraph_id": 44,
"text": "In the national legislature, Bali is represented by nine members, with a single electoral district covering the whole province. The Bali Regional People's Representative Council, the provincial legislature, has 55 members. The province's politics has historically been dominated by the Indonesian Democratic Party of Struggle (PDI-P), which has won by far the most votes in every election in Bali since the first free elections in 1999.",
"title": "Government"
},
{
"paragraph_id": 45,
"text": "The province is divided into eight regencies (kabupaten) and one city (kota). These are, with their areas and their populations at the 2010 census and the 2020 census, together with the official estimates as at mid 2022 and the Human Development Index for each regency and city.",
"title": "Government"
},
{
"paragraph_id": 46,
"text": "In the 1970s, the Balinese economy was largely agriculture-based in terms of both output and employment. Tourism is now the largest single industry in terms of income, and as a result, Bali is one of Indonesia's wealthiest regions. In 2003, around 80% of Bali's economy was tourism related. By the end of June 2011, the rate of non-performing loans of all banks in Bali were 2.23%, lower than the average of Indonesian banking industry non-performing loan rates (about 5%). The economy, however, suffered significantly as a result of the terrorist bombings in 2002 and 2005. The tourism industry has since recovered from these events.",
"title": "Economy"
},
{
"paragraph_id": 47,
"text": "Although tourism produces the GDP's largest output, agriculture is still the island's biggest employer. Fishing also provides a significant number of jobs. Bali is also famous for its artisans who produce a vast array of handicrafts, including batik and ikat cloth and clothing, wooden carvings, stone carvings, painted art and silverware. Notably, individual villages typically adopt a single product, such as wind chimes or wooden furniture.",
"title": "Economy"
},
{
"paragraph_id": 48,
"text": "The Arabica coffee production region is the highland region of Kintamani near Mount Batur. Generally, Balinese coffee is processed using the wet method. This results in a sweet, soft coffee with good consistency. Typical flavours include lemon and other citrus notes. Many coffee farmers in Kintamani are members of a traditional farming system called Subak Abian, which is based on the Hindu philosophy of \"Tri Hita Karana\". According to this philosophy, the three causes of happiness are good relations with God, other people, and the environment. The Subak Abian system is ideally suited to the production of fair trade and organic coffee production. Arabica coffee from Kintamani is the first product in Indonesia to request a geographical indication.",
"title": "Economy"
},
{
"paragraph_id": 49,
"text": "In 1963 the Bali Beach Hotel in Sanur was built by Sukarno and boosted tourism in Bali. Before the Bali Beach Hotel construction, there were only three significant tourist-class hotels on the island. Construction of hotels and restaurants began to spread throughout Bali. Tourism further increased in Bali after the Ngurah Rai International Airport opened in 1970. The Buleleng regency government encouraged the tourism sector as one of the mainstays for economic progress and social welfare.",
"title": "Economy"
},
{
"paragraph_id": 50,
"text": "The tourism industry is primarily focused in the south, while also significant in the other parts of the island. The prominent tourist locations are the town of Kuta (with its beach), and its outer suburbs of Legian and Seminyak (which were once independent townships), the east coast town of Sanur (once the only tourist hub), Ubud towards the centre of the island, to the south of the Ngurah Rai International Airport, Jimbaran and the newer developments of Nusa Dua and Pecatu.",
"title": "Economy"
},
{
"paragraph_id": 51,
"text": "The United States government lifted its travel warnings in 2008. The Australian government issued an advisory on Friday, 4 May 2012, with the overall level of this advisory lowered to 'Exercise a high degree of caution'. The Swedish government issued a new warning on Sunday, 10 June 2012, because of one tourist who died from methanol poisoning. Australia last issued an advisory on Monday, 5 January 2015, due to new terrorist threats.",
"title": "Economy"
},
{
"paragraph_id": 52,
"text": "An offshoot of tourism is the growing real estate industry. Bali's real estate has been rapidly developing in the main tourist areas of Kuta, Legian, Seminyak, and Oberoi. Most recently, high-end 5-star projects are under development on the Bukit peninsula, on the island's south side. Expensive villas are being developed along the cliff sides of south Bali, with commanding panoramic ocean views. Foreign and domestic, many Jakarta individuals and companies are fairly active, and investment into other areas of the island also continues to grow. Land prices, despite the worldwide economic crisis, have remained stable.",
"title": "Economy"
},
{
"paragraph_id": 53,
"text": "In the last half of 2008, Indonesia's currency had dropped approximately 30% against the US dollar, providing many overseas visitors with improved value for their currencies.",
"title": "Economy"
},
{
"paragraph_id": 54,
"text": "Bali's tourism economy survived the Islamist terrorist bombings of 2002 and 2005, and the tourism industry has slowly recovered and surpassed its pre-terrorist bombing levels; the long-term trend has been a steady increase in visitor arrivals. In 2010, Bali received 2.57 million foreign tourists, which surpassed the target of 2.0–2.3 million tourists. The average occupancy of starred hotels achieved 65%, so the island still should be able to accommodate tourists for some years without any addition of new rooms/hotels, although at the peak season some of them are fully booked.",
"title": "Economy"
},
{
"paragraph_id": 55,
"text": "Bali received the Best Island award from Travel and Leisure in 2010. Bali won because of its attractive surroundings (both mountain and coastal areas), diverse tourist attractions, excellent international and local restaurants, and the friendliness of the local people. The Balinese culture and its religion are also considered the main factor of the award. One of the most prestigious events that symbolize a strong relationship between a god and its followers is Kecak dance. According to BBC Travel released in 2011, Bali is one of the World's Best Islands, ranking second after Santorini, Greece.",
"title": "Economy"
},
{
"paragraph_id": 56,
"text": "In 2006, Elizabeth Gilbert's memoir Eat, Pray, Love was published, and in August 2010 it was adapted into the film Eat Pray Love. It took place at Ubud and Padang-Padang Beach in Bali. Both the book and the film fuelled a boom in tourism in Ubud, the hill town and cultural and tourist centre that was the focus of Gilbert's quest for balance and love through traditional spirituality and healing.",
"title": "Economy"
},
{
"paragraph_id": 57,
"text": "In January 2016, after musician David Bowie died, it was revealed that in his will, Bowie asked for his ashes to be scattered in Bali, conforming to Buddhist rituals. He had visited and performed in several Southeast Asian cities early in his career, including Bangkok and Singapore.",
"title": "Economy"
},
{
"paragraph_id": 58,
"text": "Since 2011, China has displaced Japan as the second-largest supplier of tourists to Bali, while Australia still tops the list while India has also emerged as a greater supply of tourists. Chinese tourists increased by 17% in 2011 from 2010 due to the impact of ACFTA and new direct flights to Bali. In January 2012, Chinese tourists increased by 222.18% compared to January 2011, while Japanese tourists declined by 23.54% year on year.",
"title": "Economy"
},
{
"paragraph_id": 59,
"text": "Bali authorities reported the island had 2.88 million foreign tourists and 5 million domestic tourists in 2012, marginally surpassing the expectations of 2.8 million foreign tourists.",
"title": "Economy"
},
{
"paragraph_id": 60,
"text": "Based on a Bank Indonesia survey in May 2013, 34.39 per cent of tourists are upper-middle class, spending between $1,286 and $5,592, and are dominated by Australia, India, France, China, Germany and the UK. Some Chinese tourists have increased their levels of spending from previous years. 30.26 per cent of tourists are middle class, spending between $662 and $1,285. In 2017 it was expected that Chinese tourists would outnumber Australian tourists.",
"title": "Economy"
},
{
"paragraph_id": 61,
"text": "In January 2020, 10,000 Chinese tourists cancelled trips to Bali due to the COVID-19 pandemic. Because of the COVID-19 pandemic travel restrictions, Bali welcomed 1.07 million international travelers in 2020, most of them between January and March, which is -87% compared to 2019. In the first half of 2021, they welcomed 43 international travelers. The pandemic presented a major blow on Bali's tourism-dependent economy. On 3 February 2022, Bali reopened again for the first foreign tourists after 2 years of being closed due to the pandemic.",
"title": "Economy"
},
{
"paragraph_id": 62,
"text": "In 2022 Indonesia's Minister of Health, Budi Sadikin, stated that the tourism industry in Bali will be complemented by the medical industry.",
"title": "Economy"
},
{
"paragraph_id": 63,
"text": "At the beginning of 2023, the governor of Bali demanded a ban on the use of motorcycles by tourists. This happened after a series of accidents. Wayan Koster proposed to cancel the violators' visas. The move sparked widespread outrage on social media.",
"title": "Economy"
},
{
"paragraph_id": 64,
"text": "The Ngurah Rai International Airport is located near Jimbaran, on the isthmus at the southernmost part of the island. Lt. Col. Wisnu Airfield is in northwest Bali.",
"title": "Transportation"
},
{
"paragraph_id": 65,
"text": "A coastal road circles the island, and three major two-lane arteries cross the central mountains at passes reaching 1,750 m in height (at Penelokan). The Ngurah Rai Bypass is a four-lane expressway that partly encircles Denpasar. Bali has no railway lines. There is a car ferry between Gilimanuk on the west coast of Bali to Ketapang on Java.",
"title": "Transportation"
},
{
"paragraph_id": 66,
"text": "In December 2010 the Government of Indonesia invited investors to build a new Tanah Ampo Cruise Terminal at Karangasem, Bali with a projected worth of $30 million. On 17 July 2011, the first cruise ship (Sun Princess) anchored about 400 metres (1,300 feet) away from the wharf of Tanah Ampo harbour. The current pier is only 154 metres (505 feet) but will eventually be extended to 300 to 350 metres (980–1,150 feet) to accommodate international cruise ships. The harbour is safer than the existing facility at Benoa and has a scenic backdrop of east Bali mountains and green rice fields. The tender for improvement was subject to delays, and as of July 2013 the situation was unclear with cruise line operators complaining and even refusing to use the existing facility at Tanah Ampo.",
"title": "Transportation"
},
{
"paragraph_id": 67,
"text": "A memorandum of understanding was signed by two ministers, Bali's governor and Indonesian Train Company to build 565 kilometres (351 miles) of railway along the coast around the island. As of July 2015, no details of these proposed railways have been released. In 2019 it was reported in Gapura Bali that Wayan Koster, governor of Bali, \"is keen to improve Bali's transportation infrastructure and is considering plans to build an electric rail network across the island\".",
"title": "Transportation"
},
{
"paragraph_id": 68,
"text": "On 16 March 2011 (Tanjung) Benoa port received the \"Best Port Welcome 2010\" award from London's \"Dream World Cruise Destination\" magazine. Government plans to expand the role of Benoa port as export-import port to boost Bali's trade and industry sector. In 2013, The Tourism and Creative Economy Ministry advised that 306 cruise liners were scheduled to visit Indonesia, an increase of 43 per cent compared to the previous year.",
"title": "Transportation"
},
{
"paragraph_id": 69,
"text": "In May 2011, an integrated Area Traffic Control System (ATCS) was implemented to reduce traffic jams at four crossing points: Ngurah Rai statue, Dewa Ruci Kuta crossing, Jimbaran crossing and Sanur crossing. ATCS is an integrated system connecting all traffic lights, CCTVs and other traffic signals with a monitoring office at the police headquarters. It has successfully been implemented in other ASEAN countries and will be implemented at other crossings in Bali.",
"title": "Transportation"
},
{
"paragraph_id": 70,
"text": "On 21 December 2011, construction started on the Nusa Dua-Benoa-Ngurah Rai International Airport toll road, which will also provide a special lane for motorcycles. This has been done by seven state-owned enterprises led by PT Jasa Marga with 60% of the shares. PT Jasa Marga Bali Tol will construct the 9.91-kilometre-long (6.16-mile) toll road (totally 12.7 kilometres (7.89 miles) with access road). The construction is estimated to cost Rp.2.49 trillion ($273.9 million). The project goes through 2 kilometres (1 mile) of mangrove forest and through 2.3 kilometres (1.4 miles) of beach, both within 5.4 hectares (13 acres) area. The elevated toll road is built over the mangrove forest on 18,000 concrete pillars that occupied two hectares of mangrove forest. This was compensated by the planting of 300,000 mangrove trees along the road. On 21 December 2011, the Dewa Ruci 450-metre (1,480-foot) underpass has also started on the busy Dewa Ruci junction near Bali Kuta Galeria with an estimated cost of Rp136 billion ($14.9 million) from the state budget. On 23 September 2013, the Bali Mandara Toll Road was opened, with the Dewa Ruci Junction (Simpang Siur) underpass being opened previously.",
"title": "Transportation"
},
{
"paragraph_id": 71,
"text": "To solve chronic traffic problems, the province will also build a toll road connecting Serangan with Tohpati, a toll road connecting Kuta, Denpasar, and Tohpati, and a flyover connecting Kuta and Ngurah Rai Airport.",
"title": "Transportation"
},
{
"paragraph_id": 72,
"text": "The population of Bali was 3,890,757 as of the 2010 census, and 4,317,404 at the 2020 census; the official estimate as at mid 2022 was 4,415,100. In 2021, the Indonesian Ministry of Justice estimated that there were 109,801 foreigners living on Bali, with most originating from Russia, the USA, Australia, the UK, Germany, Japan, France, Italy, and the Netherlands.",
"title": "Demographics"
},
{
"paragraph_id": 73,
"text": "A DNA study in 2005 by Karafet et al. found that 12% of Balinese Y-chromosomes are of likely Indian origin, while 84% are of likely Austronesian origin, and 2% of likely Melanesian origin.",
"title": "Demographics"
},
{
"paragraph_id": 74,
"text": "Pre-modern Bali had four castes, as Jeff Lewis and Belinda Lewis state, but with a \"very strong tradition of communal decision-making and interdependence\". The four castes have been classified as Sudra (Shudra), Wesia (Vaishyas), Satria (Kshatriyas) and Brahmana (Brahmin).",
"title": "Demographics"
},
{
"paragraph_id": 75,
"text": "The 19th-century scholars such as Crawfurd and Friederich suggested that the Balinese caste system had Indian origins, but Helen Creese states that scholars such as Brumund who had visited and stayed on the island of Bali suggested that his field observations conflicted with the \"received understandings concerning its Indian origins\". In Bali, the Shudra (locally spelt Soedra) has typically been the temple priests, though depending on the demographics, a temple priest may also be from the other three castes. In most regions, it has been the Shudra who typically make offerings to the gods on behalf of the Hindu devotees, chant prayers, recite meweda (Vedas), and set the course of Balinese temple festivals.",
"title": "Demographics"
},
{
"paragraph_id": 76,
"text": "Religion in Bali (2022)",
"title": "Demographics"
},
{
"paragraph_id": 77,
"text": "About 86.70% of Bali's population adheres to Balinese Hinduism, formed as a combination of existing local beliefs and Hindu influences from mainland Southeast Asia and South Asia. Minority religions include Islam (10.10%), Christianity (2.50%), and Buddhism (0.68%) as for 2018.",
"title": "Demographics"
},
{
"paragraph_id": 78,
"text": "The general beliefs and practices of Agama Hindu Dharma mix ancient traditions and contemporary pressures placed by Indonesian laws that permit only monotheist belief under the national ideology of Pancasila. Traditionally, Hinduism in Indonesia had a pantheon of deities and that tradition of belief continues in practice; further, Hinduism in Indonesia granted freedom and flexibility to Hindus as to when, how and where to pray. However, officially, the Indonesian government considers and advertises Indonesian Hinduism as a monotheistic religion with certain officially recognised beliefs that comply with its national ideology. Indonesian school textbooks describe Hinduism as having one supreme being, Hindus offering three daily mandatory prayers, and Hinduism as having certain common beliefs that in part parallel those of Islam. Scholars contest whether these Indonesian government recognised and assigned beliefs to reflect the traditional beliefs and practices of Hindus in Indonesia before Indonesia gained independence from Dutch colonial rule.",
"title": "Demographics"
},
{
"paragraph_id": 79,
"text": "Balinese Hinduism has roots in Indian Hinduism and Buddhism, which arrived through Java. Hindu influences reached the Indonesian Archipelago as early as the first century. Historical evidence is unclear about the diffusion process of cultural and spiritual ideas from India. Java legends refer to Saka-era, traced to 78 AD. Stories from the Mahabharata Epic have been traced in Indonesian islands to the 1st century; however, the versions mirror those found in the southeast Indian peninsular region (now Tamil Nadu and southern Karnataka and Andhra Pradesh).",
"title": "Demographics"
},
{
"paragraph_id": 80,
"text": "The Bali tradition adopted the pre-existing animistic traditions of the indigenous people. This influence strengthened the belief that the gods and goddesses are present in all things. Every element of nature, therefore, possesses its power, which reflects the power of the gods. A rock, tree, dagger, or woven cloth is a potential home for spirits whose energy can be directed for good or evil. Balinese Hinduism is deeply interwoven with art and ritual. Ritualising states of self-control are a notable feature of religious expression among the people, who for this reason have become famous for their graceful and decorous behaviour.",
"title": "Demographics"
},
{
"paragraph_id": 81,
"text": "Apart from the majority of Balinese Hindus, there also exist Chinese immigrants whose traditions have melded with that of the locals. As a result, these Sino-Balinese embrace their original religion, which is a mixture of Buddhism, Christianity, Taoism, and Confucianism, and find a way to harmonise it with the local traditions. Hence, it is not uncommon to find local Sino-Balinese during the local temple's odalan. Moreover, Balinese Hindu priests are invited to perform rites alongside a Chinese priest in the event of the death of a Sino-Balinese. Nevertheless, the Sino-Balinese claim to embrace Buddhism for administrative purposes, such as their Identity Cards. The Roman Catholic community has a diocese, the Diocese of Denpasar that encompasses the province of Bali and West Nusa Tenggara and has its cathedral located in Denpasar.",
"title": "Demographics"
},
{
"paragraph_id": 82,
"text": "Balinese and Indonesian are the most widely spoken languages in Bali, and the vast majority of Balinese people are bilingual or trilingual. The most common spoken language around the tourist areas is Indonesian, as many people in the tourist sector are not solely Balinese, but migrants from Java, Lombok, Sumatra, and other parts of Indonesia. The Balinese language is heavily stratified due to the Balinese caste system. Kawi and Sanskrit are also commonly used by some Hindu priests in Bali, as Hindu literature was mostly written in Sanskrit.",
"title": "Demographics"
},
{
"paragraph_id": 83,
"text": "English and Chinese are the next most common languages (and the primary foreign languages) of many Balinese, owing to the requirements of the tourism industry, as well as the English-speaking community and huge Chinese-Indonesian population. Other foreign languages, such as Japanese, Korean, French, Russian or German are often used in multilingual signs for foreign tourists.",
"title": "Demographics"
},
{
"paragraph_id": 84,
"text": "Bali is renowned for its diverse and sophisticated art forms, such as painting, sculpture, woodcarving, handcrafts, and performing arts. Balinese cuisine is also distinctive, and unlike the rest of Indonesia, pork is commonly found in Balinese dishes such as Babi Guling. Balinese percussion orchestra music, known as gamelan, is highly developed and varied. Balinese performing arts often portray stories from Hindu epics such as the Ramayana but with heavy Balinese influence. Famous Balinese dances include pendet, legong, baris, topeng, barong, gong keybar, and kecak (the monkey dance). Bali boasts one of the most diverse and innovative performing arts cultures in the world, with paid performances at thousands of temple festivals, private ceremonies, and public shows.",
"title": "Culture"
},
{
"paragraph_id": 85,
"text": "Kaja and kelod are the Balinese equivalents of North and South, which refer to one's orientation between the island's largest mountain Gunung Agung (kaja), and the sea (kelod). In addition to spatial orientation, kaja and kelod have the connotation of good and evil; gods and ancestors are believed to live on the mountain whereas demons live in the sea. Buildings such as temples and residential homes are spatially oriented by having the most sacred spaces closest to the mountain and the unclean places nearest to the sea.",
"title": "Culture"
},
{
"paragraph_id": 86,
"text": "Most temples have an inner courtyard and an outer courtyard which are arranged with the inner courtyard furthest kaja. These spaces serve as performance venues since most Balinese rituals are accompanied by any combination of music, dance, and drama. The performances that take place in the inner courtyard are classified as wali, the most sacred rituals which are offerings exclusively for the gods, while the outer courtyard is where bebali ceremonies are held, which are intended for gods and people. Lastly, performances meant solely for the entertainment of humans take place outside the temple's walls and are called bali-balihan. This three-tiered system of classification was standardised in 1971 by a committee of Balinese officials and artists to better protect the sanctity of the oldest and most sacred Balinese rituals from being performed for a paying audience.",
"title": "Culture"
},
{
"paragraph_id": 87,
"text": "Tourism, Bali's chief industry, has provided the island with a foreign audience that is eager to pay for entertainment, thus creating new performance opportunities and more demand for performers. The impact of tourism is controversial since before it became integrated into the economy, the Balinese performing arts did not exist as a capitalist venture, and were not performed for entertainment outside of their respective ritual context. Since the 1930s sacred rituals such as the barong dance have been performed both in their original contexts, as well as exclusively for paying tourists. This has led to new versions of many of these performances that have developed according to the preferences of foreign audiences; some villages have a barong mask specifically for non-ritual performances and an older mask that is only used for sacred performances.",
"title": "Culture"
},
{
"paragraph_id": 88,
"text": "Throughout the year, there are many festivals celebrated locally or island-wide according to the traditional calendars. The Hindu New Year, Nyepi, is celebrated in the spring by a day of silence. On this day everyone stays at home and tourists are encouraged (or required) to remain in their hotels. On the day before New Year, large and colourful sculptures of Ogoh-ogoh monsters are paraded and burned in the evening to drive away evil spirits. Other festivals throughout the year are specified by the Balinese pawukon calendrical system.",
"title": "Culture"
},
{
"paragraph_id": 89,
"text": "Celebrations are held for many occasions such as a tooth-filing (coming-of-age ritual), cremation or odalan (temple festival). One of the most important concepts that Balinese ceremonies have in common is that of désa kala patra, which refers to how ritual performances must be appropriate in both the specific and general social context. Many ceremonial art forms such as wayang kulit and topeng are highly improvisatory, providing flexibility for the performer to adapt the performance to the current situation. Many celebrations call for a loud, boisterous atmosphere with much activity, and the resulting aesthetic, ramé, is distinctively Balinese. Often two or more gamelan ensembles will be performing well within earshot, and sometimes compete with each other to be heard. Likewise, the audience members talk amongst themselves, get up and walk around, or even cheer on the performance, which adds to the many layers of activity and the liveliness typical of ramé.",
"title": "Culture"
},
{
"paragraph_id": 90,
"text": "Balinese society continues to revolve around each family's ancestral village, to which the cycle of life and religion is closely tied. Coercive aspects of traditional society, such as customary law sanctions imposed by traditional authorities such as village councils (including \"kasepekang\", or shunning) have risen in importance as a consequence of the democratisation and decentralisation of Indonesia since 1998.",
"title": "Culture"
},
{
"paragraph_id": 91,
"text": "Other than Balinese sacred rituals and festivals, the government presents Bali Arts Festival to showcase Bali's performing arts and various artworks produced by the local talents that they have. It is held once a year, from the second week of June until the end of July. Southeast Asia's biggest annual festival of words and ideas Ubud Writers and Readers Festival is held at Ubud in October, which is participated by the world's most celebrated writers, artists, thinkers, and performers.",
"title": "Culture"
},
{
"paragraph_id": 92,
"text": "One unusual tradition is the naming of children in Bali. In general, Balinese people name their children depending on the order they are born, and the names are the same for both males and females.",
"title": "Culture"
},
{
"paragraph_id": 93,
"text": "Bali was the host of Miss World 2013 (63rd edition of the Miss World pageant). It was the first time Indonesia hosted an international beauty pageant. In 2022, Bali also co-hosted Miss Grand International 2022 along with Jakarta, West Java, and Banten.",
"title": "Culture"
},
{
"paragraph_id": 94,
"text": "Bali is a major world surfing destination with popular breaks dotted across the southern coastline and around the offshore island of Nusa Lembongan.",
"title": "Sports"
},
{
"paragraph_id": 95,
"text": "As part of the Coral Triangle, Bali, including Nusa Penida, offers a wide range of dive sites with varying types of reefs, and tropical aquatic life.",
"title": "Sports"
},
{
"paragraph_id": 96,
"text": "Bali was the host of 2008 Asian Beach Games. It was the second time Indonesia hosted an Asia-level multi-sport event, after Jakarta held the 1962 Asian Games.",
"title": "Sports"
},
{
"paragraph_id": 97,
"text": "In 2023, Bali was the location for a major eSports event, the Dota 2 Bali Major, the third and final Major of the Dota Pro Circuit season. The event was held at the Ayana Estate and the Champa Garden, and it was the first time that a Dota Pro Circuit Major was held in Indonesia.",
"title": "Sports"
},
{
"paragraph_id": 98,
"text": "In football, Bali is home to Bali United football club, which plays in Liga 1. The team was relocated from Samarinda, East Kalimantan to Gianyar, Bali. Harbiansyah Hanafiah, the main commissioner of Bali United explained that he changed the name and moved the home base because there was no representative from Bali in the highest football tier in Indonesia. Another reason was due to local fans in Samarinda preferring to support Pusamania Borneo F.C. rather than Persisam.",
"title": "Sports"
},
{
"paragraph_id": 99,
"text": "In June 2012, Subak, the irrigation system for paddy fields in Jatiluwih, central Bali was listed as a Natural UNESCO World Heritage Site.",
"title": "Heritage sites"
}
] | Bali is a province of Indonesia and the westernmost of the Lesser Sunda Islands. East of Java and west of Lombok, the province includes the island of Bali and a few smaller offshore islands, notably Nusa Penida, Nusa Lembongan, and Nusa Ceningan to the southeast. The provincial capital, Denpasar, is the most populous city in the Lesser Sunda Islands and the second-largest, after Makassar, in Eastern Indonesia. The upland town of Ubud in Greater Denpasar is considered Bali's cultural centre. The province is Indonesia's main tourist destination, with a significant rise in tourism since the 1980s. Tourism-related business makes up 80% of its economy. Bali is the only Hindu-majority province in Indonesia, with 86.9% of the population adhering to Balinese Hinduism. It is renowned for its highly developed arts, including traditional and modern dance, sculpture, painting, leather, metalworking, and music. The Indonesian International Film Festival is held every year in Bali. Other international events that have been held in Bali include Miss World 2013, the 2018 Annual Meetings of the International Monetary Fund and the World Bank Group and the 2022 G20 summit. In March 2017, TripAdvisor named Bali as the world's top destination in its Traveller's Choice award, which it also earned in January 2021. Bali is part of the Coral Triangle, the area with the highest biodiversity of marine species, especially fish and turtles. In this area alone, over 500 reef-building coral species can be found. For comparison, this is about seven times as many as in the entire Caribbean. Bali is the home of the Subak irrigation system, a UNESCO World Heritage Site. It is also home to a unified confederation of kingdoms composed of 10 traditional royal Balinese houses, each house ruling a specific geographic area. The confederation is the successor of the Bali Kingdom. The royal houses are not recognised by the government of Indonesia; however, they originated before Dutch colonisation. | 2001-10-22T03:28:44Z | 2023-12-30T19:46:18Z | [
"Template:Reflist",
"Template:Cite book",
"Template:Cite EB1911",
"Template:Contains special characters",
"Template:Lang-ban",
"Template:Citation needed",
"Template:Flagcountry",
"Template:Convert",
"Template:Cn",
"Template:Cite web",
"Template:Doi",
"Template:Rp",
"Template:Portal",
"Template:Isbn",
"Template:Cbignore",
"Template:Pie chart",
"Template:Cite iucn",
"Template:Provinces of Indonesia",
"Template:Authority control",
"Template:Infobox settlement",
"Template:Main",
"Template:Cvt",
"Template:Fontcolor",
"Template:Other uses",
"Template:IPAc-en",
"Template:Historical populations",
"Template:ISBN",
"Template:Webarchive",
"Template:Prone to spam",
"Template:Osmrelation-inline",
"Template:Short description",
"Template:Use dmy dates",
"Template:Use Australian English",
"Template:Cite journal",
"Template:Official website",
"Template:See also",
"Template:Cite news",
"Template:Google books",
"Template:Sister project links"
] | https://en.wikipedia.org/wiki/Bali |
4,149 | Bulgarian language | Bulgarian (/bʌlˈɡɛəriən/ , /bʊlˈ-/ bu(u)l-GAIR-ee-ən; български език, bŭlgarski ezik, pronounced [ˈbɤɫɡɐrski] ) is an Eastern South Slavic language spoken in Southeast Europe, primarily in Bulgaria. It is the language of the Bulgarians.
Along with the closely related Macedonian language (collectively forming the East South Slavic languages), it is a member of the Balkan sprachbund and South Slavic dialect continuum of the Indo-European language family. The two languages have several characteristics that set them apart from all other Slavic languages, including the elimination of case declension, the development of a suffixed definite article, and the lack of a verb infinitive. They retain and have further developed the Proto-Slavic verb system (albeit analytically). One such major development is the innovation of evidential verb forms to encode for the source of information: witnessed, inferred, or reported.
It is the official language of Bulgaria, and since 2007 has been among the official languages of the European Union. It is also spoken by the Bulgarian historical communities in North Macedonia, Ukraine, Moldova, Serbia, Romania, Hungary, Albania and Greece.
One can divide the development of the Bulgarian language into several periods.
Bulgarian was the first Slavic language attested in writing. As Slavic linguistic unity lasted into late antiquity, the oldest manuscripts initially referred to this language as ѧзꙑкъ словѣньскъ, "the Slavic language". In the Middle Bulgarian period this name was gradually replaced by the name ѧзꙑкъ блъгарьскъ, the "Bulgarian language". In some cases, this name was used not only with regard to the contemporary Middle Bulgarian language of the copyist but also to the period of Old Bulgarian. A most notable example of anachronism is the Service of Saint Cyril from Skopje (Скопски миней), a 13th-century Middle Bulgarian manuscript from northern Macedonia according to which St. Cyril preached with "Bulgarian" books among the Moravian Slavs. The first mention of the language as the "Bulgarian language" instead of the "Slavonic language" comes in the work of the Greek clergy of the Archbishopric of Ohrid in the 11th century, for example in the Greek hagiography of Clement of Ohrid by Theophylact of Ohrid (late 11th century).
During the Middle Bulgarian period, the language underwent dramatic changes, losing the Slavonic case system, but preserving the rich verb system (while the development was exactly the opposite in other Slavic languages) and developing a definite article. It was influenced by its non-Slavic neighbors in the Balkan language area (mostly grammatically) and later also by Turkish, which was the official language of the Ottoman Empire, in the form of the Ottoman Turkish language, mostly lexically. The damaskin texts mark the transition from Middle Bulgarian to New Bulgarian, which was standardized in the 19th century.
As a national revival occurred toward the end of the period of Ottoman rule (mostly during the 19th century), a modern Bulgarian literary language gradually emerged that drew heavily on Church Slavonic/Old Bulgarian (and to some extent on literary Russian, which had preserved many lexical items from Church Slavonic) and later reduced the number of Turkish and other Balkan loans. Today one difference between Bulgarian dialects in the country and literary spoken Bulgarian is the significant presence of Old Bulgarian words and even word forms in the latter. Russian loans are distinguished from Old Bulgarian ones on the basis of the presence of specifically Russian phonetic changes, as in оборот (turnover, rev), непонятен (incomprehensible), ядро (nucleus) and others. Many other loans from French, English and the classical languages have subsequently entered the language as well.
Modern Bulgarian was based essentially on the Eastern dialects of the language, but its pronunciation is in many respects a compromise between East and West Bulgarian (see especially the phonetic sections below). Following the efforts of some figures of the National awakening of Bulgaria (most notably Neofit Rilski and Ivan Bogorov), there were many attempts to codify a standard Bulgarian language; however, there was much argument surrounding the choice of norms. Between 1835 and 1878 more than 25 proposals were put forward and "linguistic chaos" ensued. Eventually the eastern dialects prevailed, and in 1899 the Bulgarian Ministry of Education officially codified a standard Bulgarian language based on the Drinov-Ivanchev orthography.
Bulgarian is the official language of Bulgaria, where it is used in all spheres of public life. As of 2011, it is spoken as a first language by about 6 million people in the country, or about four out of every five Bulgarian citizens.
There is also a significant Bulgarian diaspora abroad. One of the main historically established communities is the Bessarabian Bulgarians, whose settlement in the Bessarabia region of present-day Moldova and Ukraine dates mostly to the early 19th century. There were 134,000 Bulgarian speakers in Ukraine at the 2001 census, 41,800 in Moldova as of the 2014 census (of which 15,300 were habitual users of the language), and presumably a significant proportion of the 13,200 ethnic Bulgarians residing in neighbouring Transnistria in 2016.
Another community abroad are the Banat Bulgarians, who migrated in the 17th century to the Banat region now split between Romania, Serbia and Hungary. They speak the Banat Bulgarian dialect, which has had its own written standard and a historically important literary tradition.
There are Bulgarian speakers in neighbouring countries as well. The regional dialects of Bulgarian and Macedonian form a dialect continuum, and there is no well-defined boundary where one language ends and the other begins. Within the limits of the Republic of North Macedonia a strong separate Macedonian identity has emerged since the Second World War, even though there still are a small number of citizens who identify their language as Bulgarian. Beyond the borders of North Macedonia, the situation is more fluid, and the pockets of speakers of the related regional dialects in Albania and in Greece variously identify their language as Macedonian or as Bulgarian. In Serbia, there were 13,300 speakers as of 2011, mainly concentrated in the so-called Western Outlands along the border with Bulgaria. Bulgarian is also spoken in Turkey: natively by Pomaks, and as a second language by many Bulgarian Turks who emigrated from Bulgaria, mostly during the "Big Excursion" of 1989.
The language is also represented among the diaspora in Western Europe and North America, which has been steadily growing since the 1990s. Countries with significant numbers of speakers include Germany, Spain, Italy, the United Kingdom (38,500 speakers in England and Wales as of 2011), France, the United States, and Canada (19,100 in 2011).
The language is mainly split into two broad dialect areas, based on the different reflexes of the Proto-Slavic yat vowel (Ѣ). This split, which occurred at some point during the Middle Ages, led to the development of Bulgaria's:
The literary language norm, which is generally based on the Eastern dialects, also has the Eastern alternating reflex of yat. However, it has not incorporated the general Eastern umlaut of all synchronic or even historic "ya" sounds into "e" before front vowels – e.g. поляна (polyana) vs. полени (poleni) "meadow – meadows" or even жаба (zhaba) vs. жеби (zhebi) "frog – frogs", even though it co-occurs with the yat alternation in almost all Eastern dialects that have it (except a few dialects along the yat border, e.g. in the Pleven region).
More examples of the yat umlaut in the literary language are:
Until 1945, Bulgarian orthography did not reveal this alternation and used the original Old Slavic Cyrillic letter yat (Ѣ), which was commonly called двойно е (dvoyno e) at the time, to express the historical yat vowel or at least root vowels displaying the ya – e alternation. The letter was used in each occurrence of such a root, regardless of the actual pronunciation of the vowel: thus, both mlyako and mlekar were spelled with (Ѣ). Among other things, this was seen as a way to "reconcile" the Western and the Eastern dialects and maintain language unity at a time when much of Bulgaria's Western dialect area was controlled by Serbia and Greece, but there were still hopes and occasional attempts to recover it. With the 1945 orthographic reform, this letter was abolished and the present spelling was introduced, reflecting the alternation in pronunciation.
This had implications for some grammatical constructions:
Sometimes, with the changes, words began to be spelled as other words with different meanings, e.g.:
In spite of the literary norm regarding the yat vowel, many people living in Western Bulgaria, including the capital Sofia, will fail to observe its rules. While the norm requires the realizations vidyal vs. videli (he has seen; they have seen), some natives of Western Bulgaria will preserve their local dialect pronunciation with "e" for all instances of "yat" (e.g. videl, videli). Others, attempting to adhere to the norm, will actually use the "ya" sound even in cases where the standard language has "e" (e.g. vidyal, vidyali). The latter hypercorrection is called свръхякане (svrah-yakane ≈"over-ya-ing").
Bulgarian is the only Slavic language whose literary standard does not naturally contain the iotated sound /jɛ/ (or its palatalized variant /ʲɛ/, except in non-Slavic loanwords). The sound is common in all modern Slavic languages (e.g. Czech medvěd /ˈmɛdvjɛt/ "bear", Polish pięć /pʲɛɲtɕ/ "five", Serbo-Croatian jelen /jělen/ "deer", Ukrainian немає /nemájɛ/ "there is not ...", Macedonian пишување /piʃuvaɲʲɛ/ "writing", etc.), as well as in some Western Bulgarian dialectal forms – e.g. ора̀н’е /oˈraɲʲɛ/ (standard Bulgarian: оране /oˈranɛ/, "ploughing") – but it is not represented in standard Bulgarian speech or writing. Even where /jɛ/ occurs in other Slavic words, in Standard Bulgarian it is usually transcribed and pronounced as pure /ɛ/ – e.g. Boris Yeltsin is "Eltsin" (Борис Елцин), Yekaterinburg is "Ekaterinburg" (Екатеринбург) and Sarajevo is "Saraevo" (Сараево), although, because the sound is contained in a stressed syllable at the beginning of the word, Jelena Janković is "Yelena" – Йелена Янкович.
Until the period immediately following the Second World War, all Bulgarian and the majority of foreign linguists referred to the South Slavic dialect continuum spanning the area of modern Bulgaria, North Macedonia and parts of Northern Greece as a group of Bulgarian dialects. In contrast, Serbian sources tended to label them "south Serbian" dialects. Some local naming conventions included bolgárski, bugárski and so forth. The codifiers of the standard Bulgarian language, however, did not wish to make any allowances for a pluricentric "Bulgaro-Macedonian" compromise. In 1870 Marin Drinov, who played a decisive role in the standardization of the Bulgarian language, rejected the proposal of Parteniy Zografski and Kuzman Shapkarev for a mixed eastern and western Bulgarian/Macedonian foundation of the standard Bulgarian language, stating in his article in the newspaper Makedoniya: "Such an artificial assembly of written language is something impossible, unattainable and never heard of."
After 1944 the People's Republic of Bulgaria and the Socialist Federal Republic of Yugoslavia began a policy of making Macedonia into the connecting link for the establishment of a new Balkan Federative Republic and stimulating the development of a distinct Macedonian consciousness there. With the proclamation of the Socialist Republic of Macedonia as part of the Yugoslav federation, the new authorities also started measures to overcome the pro-Bulgarian feeling among parts of its population, and in 1945 a separate Macedonian language was codified. After 1958, when the pressure from Moscow decreased, Sofia reverted to the view that the Macedonian language did not exist as a separate language. Nowadays, Bulgarian and Greek linguists, as well as some linguists from other countries, still consider the various Macedonian dialects as part of the broader Bulgarian pluricentric dialectal continuum. Outside Bulgaria and Greece, Macedonian is generally considered an autonomous language within the South Slavic dialect continuum. Sociolinguists agree that the question whether Macedonian is a dialect of Bulgarian or a language is a political one and cannot be resolved on a purely linguistic basis, because dialect continua do not allow for either/or judgements.
In 886 AD, the Bulgarian Empire introduced the Glagolitic alphabet which was devised by the Saints Cyril and Methodius in the 850s. The Glagolitic alphabet was gradually superseded in later centuries by the Cyrillic script, developed around the Preslav Literary School, Bulgaria in the late 9th century.
Several Cyrillic alphabets with 28 to 44 letters were used in the beginning and the middle of the 19th century during the efforts on the codification of Modern Bulgarian until an alphabet with 32 letters, proposed by Marin Drinov, gained prominence in the 1870s. The alphabet of Marin Drinov was used until the orthographic reform of 1945, when the letters yat (uppercase Ѣ, lowercase ѣ) and yus (uppercase Ѫ, lowercase ѫ) were removed from its alphabet, reducing the number of letters to 30.
With the accession of Bulgaria to the European Union on 1 January 2007, Cyrillic became the third official script of the European Union, following the Latin and Greek scripts.
Bulgarian possesses a phonology similar to that of the rest of the South Slavic languages, notably lacking Serbo-Croatian's phonemic vowel length and tones and alveo-palatal affricates. There is a general dichotomy between Eastern and Western dialects, with Eastern ones featuring consonant palatalization before front vowels (/ɛ/ and /i/) and substantial vowel reduction of the low vowels /ɛ/, /ɔ/ and /a/ in unstressed position, sometimes leading to neutralisation between /ɛ/ and /i/, /ɔ/ and /u/, and /a/ and /ɤ/. Both patterns have partial parallels in Russian, leading to partially similar sounds. In turn, the Western dialects generally do not have any allophonic palatalization and exhibit minor, if any, vowel reduction.
Standard Bulgarian occupies a middle ground between the macrodialects. It allows palatalization only before central and back vowels and only partial reduction of /a/ and /ɔ/. Reduction of /ɛ/, consonant palatalization before front vowels, and plain articulation of palatalized consonants before central and back vowels are strongly discouraged and labelled as provincial.
Bulgarian has six vowel phonemes, but at least eight distinct phones can be distinguished when reduced allophones are taken into consideration. There is currently no consensus on the number of Bulgarian consonants, with one school of thought advocating for the existence of only 22 consonant phonemes and another one claiming that there are not fewer than 39 consonant phonemes. The main bone of contention is how to treat palatalized consonants: as separate phonemes or as allophones of their respective plain counterparts.
The 22-consonant model is based on a general consensus reached by all major Bulgarian linguists in the 1930s and 1940s. In turn, the 39-consonant model was launched in the beginning of the 1950s under the influence of the ideas of Russian linguist Nikolai Trubetzkoy.
Despite frequent objections, the support of the Bulgarian Academy of Sciences has ensured Trubetzkoy's model virtual monopoly in state-issued phonologies and grammars since the 1960s. However, its reception abroad has been lukewarm, with a number of authors either calling the model into question or outright rejecting it. Thus, the Handbook of the International Phonetic Association only lists 22 consonants in Bulgarian's consonant inventory.
The parts of speech in Bulgarian are divided into ten types, which are categorized in two broad classes: mutable and immutable. The difference is that mutable parts of speech vary grammatically, whereas the immutable ones do not change, regardless of their use. The five classes of mutables are: nouns, adjectives, numerals, pronouns and verbs. Syntactically, the first four of these form the group of the noun or the nominal group. The immutables are: adverbs, prepositions, conjunctions, particles and interjections. Verbs and adverbs form the group of the verb or the verbal group.
Nouns and adjectives have the categories grammatical gender, number, case (only vocative) and definiteness in Bulgarian. Adjectives and adjectival pronouns agree with nouns in number and gender. Pronouns have gender and number and retain (as in nearly all Indo-European languages) a more significant part of the case system.
There are three grammatical genders in Bulgarian: masculine, feminine and neuter. The gender of the noun can largely be inferred from its ending: nouns ending in a consonant ("zero ending") are generally masculine (for example, град /ɡrat/ 'city', син /sin/ 'son', мъж /mɤʃ/ 'man'); those ending in –а/–я (-a/-ya) (жена /ʒɛˈna/ 'woman', дъщеря /dɐʃtɛrˈja/ 'daughter', улица /ˈulitsɐ/ 'street') are normally feminine; and nouns ending in –е, –о are almost always neuter (дете /dɛˈtɛ/ 'child', езеро /ˈɛzɛro/ 'lake'), as are those rare words (usually loanwords) that end in –и, –у, and –ю (цунами /tsuˈnami/ 'tsunami', табу /tɐˈbu/ 'taboo', меню /mɛˈnju/ 'menu'). Perhaps the most significant exception from the above are the relatively numerous nouns that end in a consonant and yet are feminine: these comprise, firstly, a large group of nouns with zero ending expressing quality, degree or an abstraction, including all nouns ending in –ост/–ест (-ost/-est) (мъдрост /ˈmɤdrost/ 'wisdom', низост /ˈnizost/ 'vileness', прелест /ˈprɛlɛst/ 'loveliness', болест /ˈbɔlɛst/ 'sickness', любов /ljuˈbɔf/ 'love'), and secondly, a much smaller group of irregular nouns with zero ending which denote tangible objects or concepts (кръв /krɤf/ 'blood', кост /kɔst/ 'bone', вечер /ˈvɛtʃɛr/ 'evening', нощ /nɔʃt/ 'night'). There are also some commonly used words that end in a vowel and yet are masculine: баща 'father', дядо 'grandfather', чичо / вуйчо 'uncle', and others.
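These ending-based rules amount to a small decision procedure, sketched below in Python. The suffix rules and exception words are the ones quoted above; the function name is illustrative, and a real morphological lexicon would need far more exceptions than this.

```python
# A sketch of the ending-based gender heuristics described above.
# The rules and exception words are those quoted in the text; everything
# else (names, structure) is illustrative.

FEMININE_SUFFIXES = ("ост", "ест")                      # abstract nouns in -ост/-ест
FEMININE_EXCEPTIONS = {"любов", "кръв", "кост", "вечер", "нощ"}
MASCULINE_EXCEPTIONS = {"баща", "дядо", "чичо", "вуйчо"}

def guess_gender(noun: str) -> str:
    """Guess the grammatical gender of a Bulgarian noun from its ending."""
    if noun in MASCULINE_EXCEPTIONS:                    # vowel-final but masculine
        return "masculine"
    if noun in FEMININE_EXCEPTIONS or noun.endswith(FEMININE_SUFFIXES):
        return "feminine"                               # consonant-final but feminine
    if noun.endswith(("а", "я")):
        return "feminine"
    if noun.endswith(("е", "о", "и", "у", "ю")):
        return "neuter"
    return "masculine"                                  # zero (consonant) ending

assert guess_gender("град") == "masculine"
assert guess_gender("жена") == "feminine"
assert guess_gender("дете") == "neuter"
assert guess_gender("мъдрост") == "feminine"
```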
The plural forms of the nouns do not express their gender as clearly as the singular ones, but may also provide some clues to it: the ending –и (-i) is more likely to be used with a masculine or feminine noun (факти /ˈfakti/ 'facts', болести /ˈbɔlɛsti/ 'sicknesses'), while one in –а/–я belongs more often to a neuter noun (езера /ɛzɛˈra/ 'lakes'). Also, the plural ending –ове /ovɛ/ occurs only in masculine nouns.
Two numbers are distinguished in Bulgarian – singular and plural. A variety of plural suffixes is used, and the choice between them is partly determined by their ending in the singular and partly influenced by gender; in addition, irregular declension and alternative plural forms are common. Words ending in –а/–я (which are usually feminine) generally have the plural ending –и, upon dropping of the singular ending. Of nouns ending in a consonant, the feminine ones also use –и, whereas the masculine ones usually have –и for polysyllables and –ове for monosyllables (however, exceptions are especially common in this group). Nouns ending in –о/–е (most of which are neuter) mostly use the suffixes –а, –я (both of which require the dropping of the singular endings) and –та.
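The default plural suffixes can be sketched the same way. Below is a minimal illustration of the choices just listed, applying only the commonest suffix per class (gender is passed in explicitly, since it is not always recoverable from the ending); the frequent irregular plurals the paragraph warns about (e.g. факт, plural факти) are deliberately left unmodelled.

```python
# A sketch of the default plural-suffix rules described above; exceptions
# (e.g. факт -> факти despite being monosyllabic) are not modelled.

VOWELS = set("аеиоуъюя")

def pluralize(noun: str, gender: str) -> str:
    if noun.endswith(("а", "я")):              # mostly feminine: drop ending, add -и
        return noun[:-1] + "и"
    if noun.endswith(("о", "е")):              # mostly neuter: drop ending, add -а
        return noun[:-1] + "а"
    if gender == "feminine":                   # consonant-final feminine: add -и
        return noun + "и"
    syllables = sum(ch in VOWELS for ch in noun)
    return noun + ("ове" if syllables == 1 else "и")   # masculine nouns

assert pluralize("жена", "feminine") == "жени"
assert pluralize("болест", "feminine") == "болести"
assert pluralize("езеро", "neuter") == "езера"
assert pluralize("град", "masculine") == "градове"
assert pluralize("учител", "masculine") == "учители"
```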
With cardinal numbers and related words such as няколко ('several'), masculine nouns use a special count form in –а/–я, which stems from the Proto-Slavonic dual: два/три стола ('two/three chairs') versus тези столове ('these chairs'); cf. feminine две/три/тези книги ('two/three/these books') and neuter две/три/тези легла ('two/three/these beds'). However, a recently developed language norm requires that count forms should only be used with masculine nouns that do not denote persons. Thus, двама/трима ученици ('two/three students') is perceived as more correct than двама/трима ученика, while the distinction is retained in cases such as два/три молива ('two/three pencils') versus тези моливи ('these pencils').
Cases exist only in the personal and some other pronouns (as they do in many other modern Indo-European languages), with nominative, accusative, dative and vocative forms. Vestiges are present in a number of phraseological units and sayings. The major exception is the vocative, which is still in use for masculine nouns (with the endings -е, -о and -ю) and feminine nouns (-[ь/й]о and -е) in the singular.
In modern Bulgarian, definiteness is expressed by a definite article which is postfixed to the noun, much like in the Scandinavian languages or Romanian (indefinite: човек, 'person'; definite: човекът, "the person") or to the first nominal constituent of definite noun phrases (indefinite: добър човек, 'a good person'; definite: добрият човек, "the good person"). There are four singular definite articles. Again, the choice between them is largely determined by the noun's ending in the singular. Nouns that end in a consonant and are masculine use –ът/–ят, when they are grammatical subjects, and –а/–я elsewhere. Nouns that end in a consonant and are feminine, as well as nouns that end in –а/–я (most of which are feminine, too) use –та. Nouns that end in –е/–о use –то.
The plural definite article is –те for all nouns except for those whose plural form ends in –а/–я; these get –та instead. When postfixed to adjectives the definite articles are –ят/–я for masculine gender (again, with the longer form being reserved for grammatical subjects), –та for feminine gender, –то for neuter gender, and –те for plural.
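Because article selection is driven entirely by ending, gender, number and syntactic role, it too fits in a few lines. The sketch below implements only the rules stated above and defaults to the hard-stem masculine endings -ът/-а; when the soft endings -ят/-я apply is a lexical matter not modelled here, and the function signature is illustrative.

```python
# A sketch of definite-article postfixing per the rules above. Only the
# hard-stem masculine forms -ът/-а are produced; -ят/-я are not modelled.

def definite(noun: str, gender: str, plural: bool = False,
             subject: bool = True) -> str:
    if plural:                                  # plural: -те, but -та after -а/-я
        return noun + ("та" if noun.endswith(("а", "я")) else "те")
    if noun.endswith(("а", "я")) or gender == "feminine":
        return noun + "та"
    if noun.endswith(("е", "о")):
        return noun + "то"
    # masculine with zero ending: full article for subjects, short otherwise
    return noun + ("ът" if subject else "а")

assert definite("човек", "masculine") == "човекът"          # 'the person' (subject)
assert definite("човек", "masculine", subject=False) == "човека"
assert definite("жена", "feminine") == "жената"
assert definite("дете", "neuter") == "детето"
assert definite("езера", "neuter", plural=True) == "езерата"
```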
Both groups agree in gender and number with the noun they are appended to. They may also take the definite article as explained above.
Pronouns may vary in gender, number, and definiteness, and are the only parts of speech that have retained case inflections. Three cases are exhibited by some groups of pronouns – nominative, accusative and dative. The distinguishable types of pronouns include the following: personal, relative, reflexive, interrogative, negative, indefinitive, summative and possessive.
A Bulgarian verb has many distinct forms, as it varies in person, number, voice, aspect, mood, tense and in some cases gender.
Finite verbal forms are simple or compound and agree with subjects in person (first, second and third) and number (singular, plural). In addition to that, past compound forms using participles vary in gender (masculine, feminine, neuter) and voice (active and passive) as well as aspect (perfective/aorist and imperfective).
Bulgarian verbs express lexical aspect: perfective verbs signify the completion of the action of the verb and form past perfective (aorist) forms; imperfective ones are neutral with regard to it and form past imperfective forms. Most Bulgarian verbs can be grouped in perfective-imperfective pairs (imperfective/perfective: идвам/дойда "come", пристигам/пристигна "arrive"). Perfective verbs can be usually formed from imperfective ones by suffixation or prefixation, but the resultant verb often deviates in meaning from the original. In the pair examples above, aspect is stem-specific and therefore there is no difference in meaning.
In Bulgarian, there is also grammatical aspect. Three grammatical aspects are distinguishable: neutral, perfect and pluperfect. The neutral aspect comprises the three simple tenses and the future tense. The pluperfect is manifest in tenses that use double or triple auxiliary "be" participles like the past pluperfect subjunctive. Perfect constructions use a single auxiliary "be".
The traditional interpretation is that in addition to the four moods (наклонения /nəkloˈnɛnijɐ/) shared by most other European languages – indicative (изявително /izʲəˈvitɛɫno/), imperative (повелително /poveˈlitelno/), subjunctive (подчинително /pottʃiˈnitɛɫno/) and conditional (условно /oˈsɫɔvno/) – in Bulgarian there is one more that describes a general category of unwitnessed events: the inferential (преизказно /prɛˈiskɐzno/) mood. However, most contemporary Bulgarian linguists usually exclude the subjunctive and the inferential from the list of Bulgarian moods (thus placing the number of Bulgarian moods at a total of three: indicative, imperative and conditional), viewing them not as moods but as verbal morphosyntactic constructs or separate grammemes of the verb class. The possible existence of a few other moods has been discussed in the literature. Most Bulgarian school grammars teach the traditional view of four Bulgarian moods (as described above, but excluding the subjunctive and including the inferential).
There are three grammatically distinctive positions in time – present, past and future – which combine with aspect and mood to produce a number of formations. Normally, in grammar books these formations are viewed as separate tenses – i.e. "past imperfect" would mean that the verb is in past tense, in the imperfective aspect, and in the indicative mood (since no other mood is shown). There are more than 40 different tenses across Bulgarian's two aspects and five moods.
In the indicative mood, there are three simple tenses:
In the indicative there are also the following compound tenses:
The four perfect constructions above can vary in aspect depending on the aspect of the main-verb participle; they are in fact pairs of imperfective and perfective aspects. Verbs in forms using past participles also vary in voice and gender.
There is only one simple tense in the imperative mood, the present, and there are simple forms only for the second-person singular, -и/-й (-i, -y), and plural, -ете/-йте (-ete, -yte), e.g. уча /ˈutʃɐ/ ('to study'): учи /oˈtʃi/, sg., учете /oˈtʃɛtɛ/, pl.; играя /ˈiɡrajɐ/ 'to play': играй /iɡˈraj/, играйте /iɡˈrajtɛ/. There are compound imperative forms for all persons and numbers in the present compound imperative (да играе, /dɐ iɡˈraɛ/), the present perfect compound imperative (да е играл, /dɐ ɛ iɡˈraɫ/) and the rarely used present pluperfect compound imperative (да е бил играл, /dɐ ɛ bil iɡˈraɫ/).
The conditional mood consists of five compound tenses, most of which are not grammatically distinguishable. The present, future and past conditional use a special past form of the stem би- (bi – "be") and the past participle (бих учил, /bix ˈutʃiɫ/, 'I would study'). The past future conditional and the past future perfect conditional coincide in form with the respective indicative tenses.
The subjunctive mood is rarely documented as a separate verb form in Bulgarian (being, morphologically, a sub-instance of the quasi-infinitive construction with the particle да and a normal finite verb form), but it is nevertheless used regularly. The most common form, often mistaken for the present tense, is the present subjunctive ([по-добре] да отида /[ˈpɔdobrɛ] dɐ oˈtidɐ/, 'I had better go'). The difference between the present indicative and the present subjunctive is that the subjunctive can be formed by both perfective and imperfective verbs. It has completely replaced the infinitive and the supine in complex expressions (see below). It is also employed to express opinion about possible future events. The past perfect subjunctive ([по-добре] да бях отишъл /[ˈpɔdobrɛ] dɐ bʲax oˈtiʃɐl/, 'I had better have gone') refers to possible events in the past which did not take place, and the present pluperfect subjunctive (да съм бил отишъл /dɐ sɐm bil oˈtiʃɐl/) may be used about both past and future events arousing feelings of doubt, suspicion, etc.
The inferential mood has five pure tenses. Two of them are simple – past aorist inferential and past imperfect inferential – and are formed by the past participles of perfective and imperfective verbs, respectively. There are also three compound tenses – past future inferential, past future perfect inferential and past perfect inferential. All these tenses' forms are gender-specific in the singular. There are also conditional and compound-imperative crossovers. The existence of inferential forms has been attributed to Turkic influences by most Bulgarian linguists. Morphologically, they are derived from the perfect.
Bulgarian has the following participles:
The participles are inflected by gender, number, and definiteness, and are coordinated with the subject when forming compound tenses (see tenses above). When used in an attributive role, the inflection attributes are coordinated with the noun that is being attributed.
Bulgarian uses reflexive verbal forms (i.e. actions which are performed by the agent onto him- or herself) which behave in a similar way as they do in many other Indo-European languages, such as French and Spanish. The reflexive is expressed by the invariable particle se, originally a clitic form of the accusative reflexive pronoun. Thus –
When the action is performed on others, other particles are used, just like in any normal verb, e.g. –
Sometimes, the reflexive verb form has a similar but not necessarily identical meaning to the non-reflexive verb –
In other cases, the reflexive verb has a completely different meaning from its non-reflexive counterpart –
When the action is performed on an indirect object, the particles change to si and its derivatives –
In some cases, the particle si is ambiguous between the indirect object and the possessive meaning –
The difference between transitive and intransitive verbs can lead to significant differences in meaning with minimal change, e.g. –
The particle si is often used to indicate a more personal relationship to the action, e.g. –
The most productive way to form adverbs is to derive them from the neuter singular form of the corresponding adjective—e.g. бързо (fast), силно (hard), странно (strange)—but adjectives ending in -ки use the masculine singular form (i.e. ending in -ки), instead—e.g. юнашки (heroically), мъжки (bravely, like a man), майсторски (skillfully). The same pattern is used to form adverbs from the (adjective-like) ordinal numerals, e.g. първо (firstly), второ (secondly), трето (thirdly), and in some cases from (adjective-like) cardinal numerals, e.g. двойно (twice as/double), тройно (three times as), петорно (five times as).
The remaining adverbs are formed in ways that are no longer productive in the language. A small number are original (not derived from other words), for example: тук (here), там (there), вътре (inside), вън (outside), много (very/much) etc. The rest are mostly fossilized case forms, such as:
Adverbs can sometimes be reduplicated to emphasize the qualitative or quantitative properties of actions, moods or relations as performed by the subject of the sentence: "бавно-бавно" ("rather slowly"), "едва-едва" ("with great difficulty"), "съвсем-съвсем" ("quite", "thoroughly").
Questions in Bulgarian which do not use a question word (such as who? what? etc.) are formed with the particle ли after the verb; a subject is not necessary, as the verbal conjugation suggests who is performing the action:
While the particle ли generally goes after the verb, it can go after a noun or adjective if a contrast is needed:
A verb is not always necessary, e.g. when presenting a choice:
Rhetorical questions can be formed by adding ли to a question word, thus forming a "double interrogative" –
The same construction +не ('no') is an emphasized positive –
The verb съм /sɤm/ – 'to be' is also used as an auxiliary for forming the perfect, the passive and the conditional:
Two alternate forms of съм exist:
The impersonal verb ще (lit. 'it wants') is used to form the (positive) future tense:
The negative future is formed with the invariable construction няма да /ˈɲamɐ dɐ/ (see няма below):
The past tense of this verb – щях /ʃtʲax/ is conjugated to form the past conditional ('would have' – again, with да, since it is irrealis):
The verbs имам /ˈimɐm/ ('to have') and нямам /ˈɲamɐm/ ('to not have'):
In Bulgarian, there are several conjunctions all translating into English as "but", each used in distinct situations. They are но (no), ама (amà), а (a), ами (amì), and ала (alà) (and обаче (obache) – "however", identical in use to но).
While there is some overlapping between their uses, in many cases they are specific. For example, ami is used for a choice – ne tova, ami onova – "not this one, but that one" (compare Spanish sino), while ama is often used to provide extra information or an opinion – kazah go, ama sgreshih – "I said it, but I was wrong". Meanwhile, a provides contrast between two situations, and in some sentences can even be translated as "although", "while" or even "and" – az rabotya, a toy blee – "I'm working, and he's daydreaming".
Very often, different words can be used to alter the emphasis of a sentence – e.g. while pusha, no ne tryabva and pusha, a ne tryabva both mean "I smoke, but I shouldn't", the first sounds more like a statement of fact ("...but I mustn't"), while the second feels more like a judgement ("...but I oughtn't"). Similarly, az ne iskam, ama toy iska and az ne iskam, a toy iska both mean "I don't want to, but he does", however the first emphasizes the fact that he wants to, while the second emphasizes the wanting rather than the person.
Ala is interesting in that, while it feels archaic, it is often used in poetry and frequently in children's stories, since it has quite a moral/ominous feel to it.
Some common expressions use these words, and some can be used alone as interjections:
Bulgarian has several abstract particles which are used to strengthen a statement. These have no precise translation in English. The particles are strictly informal and can even be considered rude by some people and in some situations. They are mostly used at the end of questions or instructions.
These are "tagged" on to the beginning or end of a sentence to express the mood of the speaker in relation to the situation. They are mostly interrogative or slightly imperative in nature. There is no change in the grammatical mood when these are used (although they may be expressed through different grammatical moods in other languages).
These express intent or desire, perhaps even pleading. They can be seen as a sort of cohortative side to the language. (Since they can be used by themselves, they could even be considered as verbs in their own right.) They are also highly informal.
These particles can be combined with the vocative particles for greater effect, e.g. ya da vidya, be (let me see), or even exclusively in combinations with them, with no other elements, e.g. hayde, de! (come on!); nedey, de! (I told you not to!).
Bulgarian has several pronouns of quality which have no direct parallels in English – kakav (what sort of); takuv (this sort of); onakuv (that sort of – colloq.); nyakakav (some sort of); nikakav (no sort of); vsyakakav (every sort of); and the relative pronoun kakavto (the sort of ... that ... ). The adjective ednakuv ("the same") derives from the same radical.
Example phrases include:
An interesting phenomenon is that these can be strung along one after another in quite long constructions, e.g.
An extreme, albeit colloquial, example with almost no intrinsic lexical meaning – yet which is meaningful to the Bulgarian ear – would be:
The subject of the sentence is simply the pronoun "taya" (lit. "this one here"; colloq. "she").
Another interesting phenomenon that is observed in colloquial speech is the use of takova (neuter of takuv) not only as a substitute for an adjective, but also as a substitute for a verb. In that case the base form takova is used as the third person singular in the present indicative and all other forms are formed by analogy to other verbs in the language. Sometimes the "verb" may even acquire a derivational prefix that changes its meaning. Examples:
Another use of takova in colloquial speech is the word takovata, which can be used as a substitution for a noun, but also, if the speaker does not remember or is not sure how to say something, they might say takovata and then pause to think about it:
As a result of this versatility, the word takova can readily be used as a euphemism for taboo subjects. It is commonly used to substitute, for example, words relating to reproductive organs or sexual acts:
Similar "meaningless" expressions are extremely common in spoken Bulgarian, especially when the speaker is finding it difficult to describe or express something.
Bulgarian employs clitic doubling, mostly for emphatic purposes. For example, the following constructions are common in colloquial Bulgarian:
The phenomenon is practically obligatory in the spoken language in the case of inversion signalling information structure (in writing, clitic doubling may be skipped in such instances, with a somewhat bookish effect):
Sometimes, the doubling signals syntactic relations, thus:
This is contrasted with:
In this case, clitic doubling can be a colloquial alternative of the more formal or bookish passive voice, which would be constructed as follows:
Clitic doubling is also fully obligatory, both in the spoken and in the written norm, in clauses including several special expressions that use the short accusative and dative pronouns such as "играе ми се" (I feel like playing), студено ми е (I am cold), and боли ме ръката (my arm hurts):
Apart from the examples above, clitic doubling is considered inappropriate in a formal context.
Most of the vocabulary of modern Bulgarian consists of terms inherited from Proto-Slavic and local Bulgarian innovations and formations of those through the mediation of Old and Middle Bulgarian. The native terms in Bulgarian account for 70% to 80% of the lexicon.
The remaining 20% to 30% are loanwords from a number of languages, as well as derivations of such words. Bulgarian also adopted a few words of Thracian and Bulgar origin. The languages which have contributed the most loanwords to Bulgarian are:
The classical languages Latin and Greek are the source of many words, used mostly in international terminology. Many Latin terms entered Bulgarian when present-day Bulgaria was part of the Roman Empire, and also in later centuries through Romanian, Aromanian, and Megleno-Romanian during the Bulgarian Empires. The loanwords of Greek origin in Bulgarian are a product of the influence of the liturgical language of the Orthodox Church. Many of the numerous loanwords from another Turkic language, Ottoman Turkish (and, via Ottoman Turkish, from Arabic), were adopted into Bulgarian during the long period of Ottoman rule, but many have since been replaced with native Bulgarian terms. Furthermore, after the independence of Bulgaria from the Ottoman Empire in 1878, Bulgarian intellectuals imported a great deal of French vocabulary. In addition, both specialized (usually coming from the field of science) and commonplace English words (notably abstract, commodity/service-related or technical terms) have penetrated Bulgarian since the second half of the 20th century, especially since 1989. A noteworthy portion of this English-derived terminology has attained some unique features in the process of its introduction to native speakers, and this has resulted in peculiar derivations that set the newly formed loanwords apart from the original words (mainly in pronunciation), although many loanwords are completely identical to the source words. A growing number of international neologisms are also being widely adopted, causing controversy between younger generations, who are in general raised in the era of digital globalization, and the older, more conservative educated purists.
Article 1 of the Universal Declaration of Human Rights in Bulgarian:
The romanization of the text into Latin alphabet:
Bulgarian pronunciation transliterated in broad IPA:
Article 1 of the Universal Declaration of Human Rights in English:
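For readers who want to reproduce such a romanization mechanically, here is a minimal sketch of the letter-for-letter mapping of the official Bulgarian Streamlined System. Note that the spelling bŭlgarski used earlier in this article follows a different, diacritic-based scheme, and the official system's word-final exception of -ия becoming -ia (as in София becoming Sofia) is omitted for brevity.

```python
# A sketch of Cyrillic-to-Latin romanization per the Bulgarian Streamlined
# System; the word-final "-ия" -> "-ia" exception (София -> Sofia) is omitted.

STREAMLINED = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "е": "e", "ж": "zh",
    "з": "z", "и": "i", "й": "y", "к": "k", "л": "l", "м": "m", "н": "n",
    "о": "o", "п": "p", "р": "r", "с": "s", "т": "t", "у": "u", "ф": "f",
    "х": "h", "ц": "ts", "ч": "ch", "ш": "sh", "щ": "sht", "ъ": "a",
    "ь": "y", "ю": "yu", "я": "ya",
}

def romanize(text: str) -> str:
    out = []
    for ch in text:
        latin = STREAMLINED.get(ch.lower(), ch)       # pass non-Cyrillic through
        out.append(latin.capitalize() if ch.isupper() else latin)
    return "".join(out)

print(romanize("български език"))   # -> balgarski ezik
```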
Linguistic reports
Dictionaries
Courses | [
{
"paragraph_id": 0,
"text": "Bulgarian (/bʌlˈɡɛəriən/ , /bʊlˈ-/ bu(u)l-GAIR-ee-ən; български език, bŭlgarski ezik, pronounced [ˈbɤɫɡɐrski] ) is an Eastern South Slavic language spoken in Southeast Europe, primarily in Bulgaria. It is the language of the Bulgarians.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Along with the closely related Macedonian language (collectively forming the East South Slavic languages), it is a member of the Balkan sprachbund and South Slavic dialect continuum of the Indo-European language family. The two languages have several characteristics that set them apart from all other Slavic languages, including the elimination of case declension, the development of a suffixed definite article, and the lack of a verb infinitive. They retain and have further developed the Proto-Slavic verb system (albeit analytically). One such major development is the innovation of evidential verb forms to encode for the source of information: witnessed, inferred, or reported.",
"title": ""
},
{
"paragraph_id": 2,
"text": "It is the official language of Bulgaria, and since 2007 has been among the official languages of the European Union. It is also spoken by the Bulgarian historical communities in North Macedonia, Ukraine, Moldova, Serbia, Romania, Hungary, Albania and Greece.",
"title": ""
},
{
"paragraph_id": 3,
"text": "One can divide the development of the Bulgarian language into several periods.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Bulgarian was the first Slavic language attested in writing. As Slavic linguistic unity lasted into late antiquity, the oldest manuscripts initially referred to this language as ѧзꙑкъ словѣньскъ, \"the Slavic language\". In the Middle Bulgarian period this name was gradually replaced by the name ѧзꙑкъ блъгарьскъ, the \"Bulgarian language\". In some cases, this name was used not only with regard to the contemporary Middle Bulgarian language of the copyist but also to the period of Old Bulgarian. A most notable example of anachronism is the Service of Saint Cyril from Skopje (Скопски миней), a 13th-century Middle Bulgarian manuscript from northern Macedonia according to which St. Cyril preached with \"Bulgarian\" books among the Moravian Slavs. The first mention of the language as the \"Bulgarian language\" instead of the \"Slavonic language\" comes in the work of the Greek clergy of the Archbishopric of Ohrid in the 11th century, for example in the Greek hagiography of Clement of Ohrid by Theophylact of Ohrid (late 11th century).",
"title": "History"
},
{
"paragraph_id": 5,
"text": "During the Middle Bulgarian period, the language underwent dramatic changes, losing the Slavonic case system, but preserving the rich verb system (while the development was exactly the opposite in other Slavic languages) and developing a definite article. It was influenced by its non-Slavic neighbors in the Balkan language area (mostly grammatically) and later also by Turkish, which was the official language of the Ottoman Empire, in the form of the Ottoman Turkish language, mostly lexically. The damaskin texts mark the transition from Middle Bulgarian to New Bulgarian, which was standardized in the 19th century.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "As a national revival occurred toward the end of the period of Ottoman rule (mostly during the 19th century), a modern Bulgarian literary language gradually emerged that drew heavily on Church Slavonic/Old Bulgarian (and to some extent on literary Russian, which had preserved many lexical items from Church Slavonic) and later reduced the number of Turkish and other Balkan loans. Today one difference between Bulgarian dialects in the country and literary spoken Bulgarian is the significant presence of Old Bulgarian words and even word forms in the latter. Russian loans are distinguished from Old Bulgarian ones on the basis of the presence of specifically Russian phonetic changes, as in оборот (turnover, rev), непонятен (incomprehensible), ядро (nucleus) and others. Many other loans from French, English and the classical languages have subsequently entered the language as well.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Modern Bulgarian was based essentially on the Eastern dialects of the language, but its pronunciation is in many respects a compromise between East and West Bulgarian (see especially the phonetic sections below). Following the efforts of some figures of the National awakening of Bulgaria (most notably Neofit Rilski and Ivan Bogorov), there had been many attempts to codify a standard Bulgarian language; however, there was much argument surrounding the choice of norms. Between 1835 and 1878 more than 25 proposals were put forward and \"linguistic chaos\" ensued. Eventually the eastern dialects prevailed, and in 1899 the Bulgarian Ministry of Education officially codified a standard Bulgarian language based on the Drinov-Ivanchev orthography.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Bulgarian is the official language of Bulgaria, where it is used in all spheres of public life. As of 2011, it is spoken as a first language by about 6 million people in the country, or about four out of every five Bulgarian citizens.",
"title": "Geographic distribution"
},
{
"paragraph_id": 9,
"text": "There is also a significant Bulgarian diaspora abroad. One of the main historically established communities are the Bessarabian Bulgarians, whose settlement in the Bessarabia region of nowadays Moldova and Ukraine dates mostly to the early 19th century. There were 134,000 Bulgarian speakers in Ukraine at the 2001 census, 41,800 in Moldova as of the 2014 census (of which 15,300 were habitual users of the language), and presumably a significant proportion of the 13,200 ethnic Bulgarians residing in neighbouring Transnistria in 2016.",
"title": "Geographic distribution"
},
{
"paragraph_id": 10,
"text": "Another community abroad are the Banat Bulgarians, who migrated in the 17th century to the Banat region now split between Romania, Serbia and Hungary. They speak the Banat Bulgarian dialect, which has had its own written standard and a historically important literary tradition.",
"title": "Geographic distribution"
},
{
"paragraph_id": 11,
"text": "There are Bulgarian speakers in neighbouring countries as well. The regional dialects of Bulgarian and Macedonian form a dialect continuum, and there is no well-defined boundary where one language ends and the other begins. Within the limits of the Republic of North Macedonia a strong separate Macedonian identity has emerged since the Second World War, even though there still are a small number of citizens who identify their language as Bulgarian. Beyond the borders of North Macedonia, the situation is more fluid, and the pockets of speakers of the related regional dialects in Albania and in Greece variously identify their language as Macedonian or as Bulgarian. In Serbia, there were 13,300 speakers as of 2011, mainly concentrated in the so-called Western Outlands along the border with Bulgaria. Bulgarian is also spoken in Turkey: natively by Pomaks, and as a second language by many Bulgarian Turks who emigrated from Bulgaria, mostly during the \"Big Excursion\" of 1989.",
"title": "Geographic distribution"
},
{
"paragraph_id": 12,
"text": "The language is also represented among the diaspora in Western Europe and North America, which has been steadily growing since the 1990s. Countries with significant numbers of speakers include Germany, Spain, Italy, the United Kingdom (38,500 speakers in England and Wales as of 2011), France, the United States, and Canada (19,100 in 2011).",
"title": "Geographic distribution"
},
{
"paragraph_id": 13,
"text": "The language is mainly split into two broad dialect areas, based on the different reflexes of the Proto-Slavic yat vowel (Ѣ). This split, which occurred at some point during the Middle Ages, led to the development of Bulgaria's:",
"title": "Dialects"
},
{
"paragraph_id": 14,
"text": "The literary language norm, which is generally based on the Eastern dialects, also has the Eastern alternating reflex of yat. However, it has not incorporated the general Eastern umlaut of all synchronic or even historic \"ya\" sounds into \"e\" before front vowels – e.g. поляна (polyana) vs. полени (poleni) \"meadow – meadows\" or even жаба (zhaba) vs. жеби (zhebi) \"frog – frogs\", even though it co-occurs with the yat alternation in almost all Eastern dialects that have it (except a few dialects along the yat border, e.g. in the Pleven region).",
"title": "Dialects"
},
{
"paragraph_id": 15,
"text": "More examples of the yat umlaut in the literary language are:",
"title": "Dialects"
},
{
"paragraph_id": 16,
"text": "Until 1945, Bulgarian orthography did not reveal this alternation and used the original Old Slavic Cyrillic letter yat (Ѣ), which was commonly called двойно е (dvoyno e) at the time, to express the historical yat vowel or at least root vowels displaying the ya – e alternation. The letter was used in each occurrence of such a root, regardless of the actual pronunciation of the vowel: thus, both mlyako and mlekar were spelled with (Ѣ). Among other things, this was seen as a way to \"reconcile\" the Western and the Eastern dialects and maintain language unity at a time when much of Bulgaria's Western dialect area was controlled by Serbia and Greece, but there were still hopes and occasional attempts to recover it. With the 1945 orthographic reform, this letter was abolished and the present spelling was introduced, reflecting the alternation in pronunciation.",
"title": "Dialects"
},
{
"paragraph_id": 17,
"text": "This had implications for some grammatical constructions:",
"title": "Dialects"
},
{
"paragraph_id": 18,
"text": "Sometimes, with the changes, words began to be spelled as other words with different meanings, e.g.:",
"title": "Dialects"
},
{
"paragraph_id": 19,
"text": "In spite of the literary norm regarding the yat vowel, many people living in Western Bulgaria, including the capital Sofia, will fail to observe its rules. While the norm requires the realizations vidyal vs. videli (he has seen; they have seen), some natives of Western Bulgaria will preserve their local dialect pronunciation with \"e\" for all instances of \"yat\" (e.g. videl, videli). Others, attempting to adhere to the norm, will actually use the \"ya\" sound even in cases where the standard language has \"e\" (e.g. vidyal, vidyali). The latter hypercorrection is called свръхякане (svrah-yakane ≈\"over-ya-ing\").",
"title": "Dialects"
},
{
"paragraph_id": 20,
"text": "Bulgarian is the only Slavic language whose literary standard does not naturally contain the iotated sound /jɛ/ (or its palatalized variant /ʲɛ/, except in non-Slavic foreign-loaned words). The sound is common in all modern Slavic languages (e.g. Czech medvěd /ˈmɛdvjɛt/ \"bear\", Polish pięć /pʲɛɲtɕ/ \"five\", Serbo-Croatian jelen /jělen/ \"deer\", Ukrainian немає /nemájɛ/ \"there is not ...\", Macedonian пишување /piʃuvaɲʲɛ/ \"writing\", etc.), as well as some Western Bulgarian dialectal forms – e.g. ора̀н’е /oˈraɲʲɛ/ (standard Bulgarian: оране /oˈranɛ/, \"ploughing\"), however it is not represented in standard Bulgarian speech or writing. Even where /jɛ/ occurs in other Slavic words, in Standard Bulgarian it is usually transcribed and pronounced as pure /ɛ/ – e.g. Boris Yeltsin is \"Eltsin\" (Борис Елцин), Yekaterinburg is \"Ekaterinburg\" (Екатеринбург) and Sarajevo is \"Saraevo\" (Сараево), although - because the sound is contained in a stressed syllable at the beginning of the word - Jelena Janković is \"Yelena\" – Йелена Янкович.",
"title": "Dialects"
},
{
"paragraph_id": 21,
"text": "Until the period immediately following the Second World War, all Bulgarian and the majority of foreign linguists referred to the South Slavic dialect continuum spanning the area of modern Bulgaria, North Macedonia and parts of Northern Greece as a group of Bulgarian dialects. In contrast, Serbian sources tended to label them \"south Serbian\" dialects. Some local naming conventions included bolgárski, bugárski and so forth. The codifiers of the standard Bulgarian language, however, did not wish to make any allowances for a pluricentric \"Bulgaro-Macedonian\" compromise. In 1870 Marin Drinov, who played a decisive role in the standardization of the Bulgarian language, rejected the proposal of Parteniy Zografski and Kuzman Shapkarev for a mixed eastern and western Bulgarian/Macedonian foundation of the standard Bulgarian language, stating in his article in the newspaper Makedoniya: \"Such an artificial assembly of written language is something impossible, unattainable and never heard of.\"",
"title": "Relationship to Macedonian"
},
{
"paragraph_id": 22,
"text": "After 1944 the People's Republic of Bulgaria and the Socialist Federal Republic of Yugoslavia began a policy of making Macedonia into the connecting link for the establishment of a new Balkan Federative Republic and stimulating here a development of distinct Macedonian consciousness. With the proclamation of the Socialist Republic of Macedonia as part of the Yugoslav federation, the new authorities also started measures that would overcome the pro-Bulgarian feeling among parts of its population and in 1945 a separate Macedonian language was codified. After 1958, when the pressure from Moscow decreased, Sofia reverted to the view that the Macedonian language did not exist as a separate language. Nowadays, Bulgarian and Greek linguists, as well as some linguists from other countries, still consider the various Macedonian dialects as part of the broader Bulgarian pluricentric dialectal continuum. Outside Bulgaria and Greece, Macedonian is generally considered an autonomous language within the South Slavic dialect continuum. Sociolinguists agree that the question whether Macedonian is a dialect of Bulgarian or a language is a political one and cannot be resolved on a purely linguistic basis, because dialect continua do not allow for either/or judgements.",
"title": "Relationship to Macedonian"
},
{
"paragraph_id": 23,
"text": "In 886 AD, the Bulgarian Empire introduced the Glagolitic alphabet which was devised by the Saints Cyril and Methodius in the 850s. The Glagolitic alphabet was gradually superseded in later centuries by the Cyrillic script, developed around the Preslav Literary School, Bulgaria in the late 9th century.",
"title": "Alphabet"
},
{
"paragraph_id": 24,
"text": "Several Cyrillic alphabets with 28 to 44 letters were used in the beginning and the middle of the 19th century during the efforts on the codification of Modern Bulgarian until an alphabet with 32 letters, proposed by Marin Drinov, gained prominence in the 1870s. The alphabet of Marin Drinov was used until the orthographic reform of 1945, when the letters yat (uppercase Ѣ, lowercase ѣ) and yus (uppercase Ѫ, lowercase ѫ) were removed from its alphabet, reducing the number of letters to 30.",
"title": "Alphabet"
},
{
"paragraph_id": 25,
"text": "With the accession of Bulgaria to the European Union on 1 January 2007, Cyrillic became the third official script of the European Union, following the Latin and Greek scripts.",
"title": "Alphabet"
},
{
"paragraph_id": 26,
"text": "Bulgarian possesses a phonology similar to that of the rest of the South Slavic languages, notably lacking Serbo-Croatian's phonemic vowel length and tones and alveo-palatal affricates. There is a general dichotomy between Eastern and Western dialects, with Eastern ones featuring consonant palatalization before front vowels (/ɛ/ and /i/) and substantial vowel reduction of the low vowels /ɛ/, /ɔ/ and /a/ in unstressed position, sometimes leading to neutralisation between /ɛ/ and /i/, /ɔ/ and /u/, and /a/ and /ɤ/. Both patterns have partial parallels in Russian, leading to partially similar sounds. In turn, the Western dialects generally do not have any allophonic palatalization and exhibit minor, if any, vowel reduction.",
"title": "Phonology"
},
{
"paragraph_id": 27,
"text": "Standard Bulgarian keeps a middle ground between the macrodialects. It allows palatalizaton only before central and back vowels and only partial reduction of /a/ and /ɔ/. Reduction of /ɛ/, consonant palatalisation before front vowels and plain articulation of palatalized consonants before central and back vowels is strongly discouraged and labelled as provincial.",
"title": "Phonology"
},
{
"paragraph_id": 28,
"text": "Bulgarian has six vowel phonemes, but at least eight distinct phones can be distinguished when reduced allophones are taken into consideration. There is currently no consensus on the number of Bulgarian consonants, with one school of thought advocating for the existence of only 22 consonant phonemes and another one claiming that there are not fewer than 39 consonant phonemes. The main bone of contention is how to treat palatalized consonants: as separate phonemes or as allophones of their respective plain counterparts.",
"title": "Phonology"
},
{
"paragraph_id": 29,
"text": "The 22-consonant model is based on a general consensus reached by all major Bulgarian linguists in the 1930s and 1940s. In turn, the 39-consonant model was launched in the beginning of the 1950s under the influence of the ideas of Russian linguist Nikolai Trubetzkoy.",
"title": "Phonology"
},
{
"paragraph_id": 30,
"text": "Despite frequent objections, the support of the Bulgarian Academy of Sciences has ensured Trubetzkoy's model virtual monopoly in state-issued phonologies and grammars since the 1960s. However, its reception abroad has been lukewarm, with a number of authors either calling the model into question or outright rejecting it. Thus, the Handbook of the International Phonetic Association only lists 22 consonants in Bulgarian's consonant inventory.",
"title": "Phonology"
},
{
"paragraph_id": 31,
"text": "The parts of speech in Bulgarian are divided in ten types, which are categorized in two broad classes: mutable and immutable. The difference is that mutable parts of speech vary grammatically, whereas the immutable ones do not change, regardless of their use. The five classes of mutables are: nouns, adjectives, numerals, pronouns and verbs. Syntactically, the first four of these form the group of the noun or the nominal group. The immutables are: adverbs, prepositions, conjunctions, particles and interjections. Verbs and adverbs form the group of the verb or the verbal group.",
"title": "Grammar"
},
{
"paragraph_id": 32,
"text": "Nouns and adjectives have the categories grammatical gender, number, case (only vocative) and definiteness in Bulgarian. Adjectives and adjectival pronouns agree with nouns in number and gender. Pronouns have gender and number and retain (as in nearly all Indo-European languages) a more significant part of the case system.",
"title": "Grammar"
},
{
"paragraph_id": 33,
"text": "There are three grammatical genders in Bulgarian: masculine, feminine and neuter. The gender of the noun can largely be inferred from its ending: nouns ending in a consonant (\"zero ending\") are generally masculine (for example, град /ɡrat/ 'city', син /sin/ 'son', мъж /mɤʃ/ 'man'; those ending in –а/–я (-a/-ya) (жена /ʒɛˈna/ 'woman', дъщеря /dɐʃtɛrˈja/ 'daughter', улица /ˈulitsɐ/ 'street') are normally feminine; and nouns ending in –е, –о are almost always neuter (дете /dɛˈtɛ/ 'child', езеро /ˈɛzɛro/ 'lake'), as are those rare words (usually loanwords) that end in –и, –у, and –ю (цунами /tsuˈnami/ 'tsunami', табу /tɐˈbu/ 'taboo', меню /mɛˈnju/ 'menu'). Perhaps the most significant exception from the above are the relatively numerous nouns that end in a consonant and yet are feminine: these comprise, firstly, a large group of nouns with zero ending expressing quality, degree or an abstraction, including all nouns ending on –ост/–ест -{ost/est} (мъдрост /ˈmɤdrost/ 'wisdom', низост /ˈnizost/ 'vileness', прелест /ˈprɛlɛst/ 'loveliness', болест /ˈbɔlɛst/ 'sickness', любов /ljuˈbɔf/ 'love'), and secondly, a much smaller group of irregular nouns with zero ending which define tangible objects or concepts (кръв /krɤf/ 'blood', кост /kɔst/ 'bone', вечер /ˈvɛtʃɛr/ 'evening', нощ /nɔʃt/ 'night'). There are also some commonly used words that end in a vowel and yet are masculine: баща 'father', дядо 'grandfather', чичо / вуйчо 'uncle', and others.",
"title": "Grammar"
},
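The ending-based rules above are regular enough to be written as a small decision procedure. The following Python sketch is an illustration only: the function name and the exception lists (drawn solely from the examples quoted in the paragraph, so far from exhaustive) are assumptions, not a complete account of Bulgarian gender.

```python
# Minimal sketch of the gender-inference heuristic described above.
# The exception lists contain only the examples given in the text.

FEMININE_SUFFIXES = ("ост", "ест")                  # мъдрост, болест, ...
FEMININE_IRREGULAR = {"любов", "кръв", "кост", "вечер", "нощ"}
MASCULINE_IRREGULAR = {"баща", "дядо", "чичо", "вуйчо"}

def infer_gender(noun):
    """Guess the grammatical gender of a Bulgarian noun from its ending."""
    if noun in MASCULINE_IRREGULAR:
        return "masculine"                          # vowel-final but masculine
    if noun in FEMININE_IRREGULAR or noun.endswith(FEMININE_SUFFIXES):
        return "feminine"                           # consonant-final but feminine
    if noun.endswith(("а", "я")):
        return "feminine"
    if noun.endswith(("е", "о", "и", "у", "ю")):
        return "neuter"                             # -и/-у/-ю: mostly loanwords
    return "masculine"                              # consonant ("zero") ending

for word in ("град", "жена", "дете", "мъдрост", "баща", "табу"):
    print(word, "->", infer_gender(word))
```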
{
"paragraph_id": 34,
"text": "The plural forms of the nouns do not express their gender as clearly as the singular ones, but may also provide some clues to it: the ending –и (-i) is more likely to be used with a masculine or feminine noun (факти /ˈfakti/ 'facts', болести /ˈbɔlɛsti/ 'sicknesses'), while one in –а/–я belongs more often to a neuter noun (езера /ɛzɛˈra/ 'lakes'). Also, the plural ending –ове /ovɛ/ occurs only in masculine nouns.",
"title": "Grammar"
},
{
"paragraph_id": 35,
"text": "Two numbers are distinguished in Bulgarian–singular and plural. A variety of plural suffixes is used, and the choice between them is partly determined by their ending in singular and partly influenced by gender; in addition, irregular declension and alternative plural forms are common. Words ending in –а/–я (which are usually feminine) generally have the plural ending –и, upon dropping of the singular ending. Of nouns ending in a consonant, the feminine ones also use –и, whereas the masculine ones usually have –и for polysyllables and –ове for monosyllables (however, exceptions are especially common in this group). Nouns ending in –о/–е (most of which are neuter) mostly use the suffixes –а, –я (both of which require the dropping of the singular endings) and –та.",
"title": "Grammar"
},
{
"paragraph_id": 36,
"text": "With cardinal numbers and related words such as няколко ('several'), masculine nouns use a special count form in –а/–я, which stems from the Proto-Slavonic dual: два/три стола ('two/three chairs') versus тези столове ('these chairs'); cf. feminine две/три/тези книги ('two/three/these books') and neuter две/три/тези легла ('two/three/these beds'). However, a recently developed language norm requires that count forms should only be used with masculine nouns that do not denote persons. Thus, двама/трима ученици ('two/three students') is perceived as more correct than двама/трима ученика, while the distinction is retained in cases such as два/три молива ('two/three pencils') versus тези моливи ('these pencils').",
"title": "Grammar"
},
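The count-form rule lends itself to a small sketch as well. Everything below is an assumption for illustration: the tiny lexicon supplies the plural and count forms, which in reality come from each noun's declension, and only the selection logic reflects the norm described above.

```python
# Sketch of the modern norm for masculine nouns after cardinal numerals:
# the count form is used only when the noun does not denote a person.

LEXICON = {
    # noun: (ordinary plural, count form, denotes a person?)
    "стол":   ("столове", "стола", False),    # 'chair'
    "молив":  ("моливи", "молива", False),    # 'pencil'
    "ученик": ("ученици", "ученика", True),   # 'student'
}

def form_after_numeral(noun):
    plural, count_form, is_person = LEXICON[noun]
    return plural if is_person else count_form

print("два", form_after_numeral("стол"))      # два стола
print("три", form_after_numeral("молив"))     # три молива
print("трима", form_after_numeral("ученик"))  # трима ученици
```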
{
"paragraph_id": 37,
"text": "Cases exist only in the personal and some other pronouns (as they do in many other modern Indo-European languages), with nominative, accusative, dative and vocative forms. Vestiges are present in a number of phraseological units and sayings. The major exception are vocative forms, which are still in use for masculine (with the endings -е, -о and -ю) and feminine nouns (-[ь/й]о and -е) in the singular.",
"title": "Grammar"
},
{
"paragraph_id": 38,
"text": "In modern Bulgarian, definiteness is expressed by a definite article which is postfixed to the noun, much like in the Scandinavian languages or Romanian (indefinite: човек, 'person'; definite: човекът, \"the person\") or to the first nominal constituent of definite noun phrases (indefinite: добър човек, 'a good person'; definite: добрият човек, \"the good person\"). There are four singular definite articles. Again, the choice between them is largely determined by the noun's ending in the singular. Nouns that end in a consonant and are masculine use –ът/–ят, when they are grammatical subjects, and –а/–я elsewhere. Nouns that end in a consonant and are feminine, as well as nouns that end in –а/–я (most of which are feminine, too) use –та. Nouns that end in –е/–о use –то.",
"title": "Grammar"
},
{
"paragraph_id": 39,
"text": "The plural definite article is –те for all nouns except for those whose plural form ends in –а/–я; these get –та instead. When postfixed to adjectives the definite articles are –ят/–я for masculine gender (again, with the longer form being reserved for grammatical subjects), –та for feminine gender, –то for neuter gender, and –те for plural.",
"title": "Grammar"
},
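The article-selection rules of the last two paragraphs can likewise be condensed into one function. This is a simplified sketch, not a full morphology: the –ът/–ят (and –а/–я) alternation actually depends on the stem and is collapsed here to the hard variant, and all names are illustrative.

```python
# Sketch of definite-article selection, per the two paragraphs above.

def definite(noun, gender, plural=None, subject=False):
    """Attach the postfixed definite article to a noun."""
    if plural is not None:                     # plural: -те, but -та after -а/-я
        return plural + ("та" if plural.endswith(("а", "я")) else "те")
    if noun.endswith(("е", "о")):
        return noun + "то"
    if noun.endswith(("а", "я")) or gender == "feminine":
        return noun + "та"
    # masculine, consonant-final: full article only for grammatical subjects
    return noun + ("ът" if subject else "а")   # soft -ят/-я variant omitted

print(definite("човек", "masculine", subject=True))  # човекът
print(definite("жена", "feminine"))                  # жената
print(definite("дете", "neuter"))                    # детето
print(definite("езеро", "neuter", plural="езера"))   # езерата
```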
{
"paragraph_id": 40,
"text": "Both groups agree in gender and number with the noun they are appended to. They may also take the definite article as explained above.",
"title": "Grammar"
},
{
"paragraph_id": 41,
"text": "Pronouns may vary in gender, number, and definiteness, and are the only parts of speech that have retained case inflections. Three cases are exhibited by some groups of pronouns – nominative, accusative and dative. The distinguishable types of pronouns include the following: personal, relative, reflexive, interrogative, negative, indefinitive, summative and possessive.",
"title": "Grammar"
},
{
"paragraph_id": 42,
"text": "A Bulgarian verb has many distinct forms, as it varies in person, number, voice, aspect, mood, tense and in some cases gender.",
"title": "Grammar"
},
{
"paragraph_id": 43,
"text": "Finite verbal forms are simple or compound and agree with subjects in person (first, second and third) and number (singular, plural). In addition to that, past compound forms using participles vary in gender (masculine, feminine, neuter) and voice (active and passive) as well as aspect (perfective/aorist and imperfective).",
"title": "Grammar"
},
{
"paragraph_id": 44,
"text": "Bulgarian verbs express lexical aspect: perfective verbs signify the completion of the action of the verb and form past perfective (aorist) forms; imperfective ones are neutral with regard to it and form past imperfective forms. Most Bulgarian verbs can be grouped in perfective-imperfective pairs (imperfective/perfective: идвам/дойда \"come\", пристигам/пристигна \"arrive\"). Perfective verbs can be usually formed from imperfective ones by suffixation or prefixation, but the resultant verb often deviates in meaning from the original. In the pair examples above, aspect is stem-specific and therefore there is no difference in meaning.",
"title": "Grammar"
},
{
"paragraph_id": 45,
"text": "In Bulgarian, there is also grammatical aspect. Three grammatical aspects are distinguishable: neutral, perfect and pluperfect. The neutral aspect comprises the three simple tenses and the future tense. The pluperfect is manifest in tenses that use double or triple auxiliary \"be\" participles like the past pluperfect subjunctive. Perfect constructions use a single auxiliary \"be\".",
"title": "Grammar"
},
{
"paragraph_id": 46,
"text": "The traditional interpretation is that in addition to the four moods (наклонения /nəkloˈnɛnijɐ/) shared by most other European languages – indicative (изявително, /izʲəˈvitɛɫno/) imperative (повелително /poveˈlitelno/), subjunctive (подчинително /pottʃiˈnitɛɫno/) and conditional (условно, /oˈsɫɔvno/) – in Bulgarian there is one more to describe a general category of unwitnessed events – the inferential (преизказно /prɛˈiskɐzno/) mood. However, most contemporary Bulgarian linguists usually exclude the subjunctive mood and the inferential mood from the list of Bulgarian moods (thus placing the number of Bulgarian moods at a total of 3: indicative, imperative and conditional) and do not consider them to be moods but view them as verbial morphosyntactic constructs or separate gramemes of the verb class. The possible existence of a few other moods has been discussed in the literature. Most Bulgarian school grammars teach the traditional view of 4 Bulgarian moods (as described above, but excluding the subjunctive and including the inferential).",
"title": "Grammar"
},
{
"paragraph_id": 47,
"text": "There are three grammatically distinctive positions in time – present, past and future – which combine with aspect and mood to produce a number of formations. Normally, in grammar books these formations are viewed as separate tenses – i. e. \"past imperfect\" would mean that the verb is in past tense, in the imperfective aspect, and in the indicative mood (since no other mood is shown). There are more than 40 different tenses across Bulgarian's two aspects and five moods.",
"title": "Grammar"
},
{
"paragraph_id": 48,
"text": "In the indicative mood, there are three simple tenses:",
"title": "Grammar"
},
{
"paragraph_id": 49,
"text": "In the indicative there are also the following compound tenses:",
"title": "Grammar"
},
{
"paragraph_id": 50,
"text": "The four perfect constructions above can vary in aspect depending on the aspect of the main-verb participle; they are in fact pairs of imperfective and perfective aspects. Verbs in forms using past participles also vary in voice and gender.",
"title": "Grammar"
},
{
"paragraph_id": 51,
"text": "There is only one simple tense in the imperative mood, the present, and there are simple forms only for the second-person singular, -и/-й (-i, -y/i), and plural, -ете/-йте (-ete, -yte), e.g. уча /ˈutʃɐ/ ('to study'): учи /oˈtʃi/, sg., учете /oˈtʃɛtɛ/, pl.; играя /ˈiɡrajɐ/ 'to play': играй /iɡˈraj/, играйте /iɡˈrajtɛ/. There are compound imperative forms for all persons and numbers in the present compound imperative (да играе, da iɡˈrae/), the present perfect compound imperative (да е играл, /dɐ ɛ iɡˈraɫ/) and the rarely used present pluperfect compound imperative (да е бил играл, /dɐ ɛ bil iɡˈraɫ/).",
"title": "Grammar"
},
{
"paragraph_id": 52,
"text": "The conditional mood consists of five compound tenses, most of which are not grammatically distinguishable. The present, future and past conditional use a special past form of the stem би- (bi – \"be\") and the past participle (бих учил, /bix ˈutʃiɫ/, 'I would study'). The past future conditional and the past future perfect conditional coincide in form with the respective indicative tenses.",
"title": "Grammar"
},
{
"paragraph_id": 53,
"text": "The subjunctive mood is rarely documented as a separate verb form in Bulgarian, (being, morphologically, a sub-instance of the quasi-infinitive construction with the particle да and a normal finite verb form), but nevertheless it is used regularly. The most common form, often mistaken for the present tense, is the present subjunctive ([по-добре] да отида (ˈpɔdobrɛ) dɐ oˈtidɐ/, 'I had better go'). The difference between the present indicative and the present subjunctive tense is that the subjunctive can be formed by both perfective and imperfective verbs. It has completely replaced the infinitive and the supine from complex expressions (see below). It is also employed to express opinion about possible future events. The past perfect subjunctive ([по добре] да бях отишъл (ˈpɔdobrɛ) dɐ bʲax oˈtiʃɐl/, 'I'd had better be gone') refers to possible events in the past, which did not take place, and the present pluperfect subjunctive (да съм бил отишъл /dɐ sɐm bil oˈtiʃɐl/), which may be used about both past and future events arousing feelings of incontinence, suspicion, etc.",
"title": "Grammar"
},
{
"paragraph_id": 54,
"text": "The inferential mood has five pure tenses. Two of them are simple – past aorist inferential and past imperfect inferential – and are formed by the past participles of perfective and imperfective verbs, respectively. There are also three compound tenses – past future inferential, past future perfect inferential and past perfect inferential. All these tenses' forms are gender-specific in the singular. There are also conditional and compound-imperative crossovers. The existence of inferential forms has been attributed to Turkic influences by most Bulgarian linguists. Morphologically, they are derived from the perfect.",
"title": "Grammar"
},
{
"paragraph_id": 55,
"text": "Bulgarian has the following participles:",
"title": "Grammar"
},
{
"paragraph_id": 56,
"text": "The participles are inflected by gender, number, and definiteness, and are coordinated with the subject when forming compound tenses (see tenses above). When used in an attributive role, the inflection attributes are coordinated with the noun that is being attributed.",
"title": "Grammar"
},
{
"paragraph_id": 57,
"text": "Bulgarian uses reflexive verbal forms (i.e. actions which are performed by the agent onto him- or herself) which behave in a similar way as they do in many other Indo-European languages, such as French and Spanish. The reflexive is expressed by the invariable particle se, originally a clitic form of the accusative reflexive pronoun. Thus –",
"title": "Grammar"
},
{
"paragraph_id": 58,
"text": "When the action is performed on others, other particles are used, just like in any normal verb, e.g. –",
"title": "Grammar"
},
{
"paragraph_id": 59,
"text": "Sometimes, the reflexive verb form has a similar but not necessarily identical meaning to the non-reflexive verb –",
"title": "Grammar"
},
{
"paragraph_id": 60,
"text": "In other cases, the reflexive verb has a completely different meaning from its non-reflexive counterpart –",
"title": "Grammar"
},
{
"paragraph_id": 61,
"text": "When the action is performed on an indirect object, the particles change to si and its derivatives –",
"title": "Grammar"
},
{
"paragraph_id": 62,
"text": "In some cases, the particle si is ambiguous between the indirect object and the possessive meaning –",
"title": "Grammar"
},
{
"paragraph_id": 63,
"text": "The difference between transitive and intransitive verbs can lead to significant differences in meaning with minimal change, e.g. –",
"title": "Grammar"
},
{
"paragraph_id": 64,
"text": "The particle si is often used to indicate a more personal relationship to the action, e.g. –",
"title": "Grammar"
},
{
"paragraph_id": 65,
"text": "The most productive way to form adverbs is to derive them from the neuter singular form of the corresponding adjective—e.g. бързо (fast), силно (hard), странно (strange)—but adjectives ending in -ки use the masculine singular form (i.e. ending in -ки), instead—e.g. юнашки (heroically), мъжки (bravely, like a man), майсторски (skillfully). The same pattern is used to form adverbs from the (adjective-like) ordinal numerals, e.g. първо (firstly), второ (secondly), трето (thirdly), and in some cases from (adjective-like) cardinal numerals, e.g. двойно (twice as/double), тройно (three times as), петорно (five times as).",
"title": "Grammar"
},
{
"paragraph_id": 66,
"text": "The remaining adverbs are formed in ways that are no longer productive in the language. A small number are original (not derived from other words), for example: тук (here), там (there), вътре (inside), вън (outside), много (very/much) etc. The rest are mostly fossilized case forms, such as:",
"title": "Grammar"
},
{
"paragraph_id": 67,
"text": "Adverbs can sometimes be reduplicated to emphasize the qualitative or quantitative properties of actions, moods or relations as performed by the subject of the sentence: \"бавно-бавно\" (\"rather slowly\"), \"едва-едва\" (\"with great difficulty\"), \"съвсем-съвсем\" (\"quite\", \"thoroughly\").",
"title": "Grammar"
},
{
"paragraph_id": 68,
"text": "Questions in Bulgarian which do not use a question word (such as who? what? etc.) are formed with the particle ли after the verb; a subject is not necessary, as the verbal conjugation suggests who is performing the action:",
"title": "Grammar"
},
{
"paragraph_id": 69,
"text": "While the particle ли generally goes after the verb, it can go after a noun or adjective if a contrast is needed:",
"title": "Grammar"
},
{
"paragraph_id": 70,
"text": "A verb is not always necessary, e.g. when presenting a choice:",
"title": "Grammar"
},
{
"paragraph_id": 71,
"text": "Rhetorical questions can be formed by adding ли to a question word, thus forming a \"double interrogative\" –",
"title": "Grammar"
},
{
"paragraph_id": 72,
"text": "The same construction +не ('no') is an emphasized positive –",
"title": "Grammar"
},
{
"paragraph_id": 73,
"text": "The verb съм /sɤm/ – 'to be' is also used as an auxiliary for forming the perfect, the passive and the conditional:",
"title": "Grammar"
},
{
"paragraph_id": 74,
"text": "Two alternate forms of съм exist:",
"title": "Grammar"
},
{
"paragraph_id": 75,
"text": "The impersonal verb ще (lit. 'it wants') is used to for forming the (positive) future tense:",
"title": "Grammar"
},
{
"paragraph_id": 76,
"text": "The negative future is formed with the invariable construction няма да /ˈɲamɐ dɐ/ (see няма below):",
"title": "Grammar"
},
{
"paragraph_id": 77,
"text": "The past tense of this verb – щях /ʃtʲax/ is conjugated to form the past conditional ('would have' – again, with да, since it is irrealis):",
"title": "Grammar"
},
{
"paragraph_id": 78,
"text": "The verbs имам /ˈimɐm/ ('to have') and нямам /ˈɲamɐm/ ('to not have'):",
"title": "Grammar"
},
{
"paragraph_id": 79,
"text": "In Bulgarian, there are several conjunctions all translating into English as \"but\", which are all used in distinct situations. They are но (no), ама (amà), а (a), ами (amì), and ала (alà) (and обаче (obache) – \"however\", identical in use to но).",
"title": "Grammar"
},
{
"paragraph_id": 80,
"text": "While there is some overlapping between their uses, in many cases they are specific. For example, ami is used for a choice – ne tova, ami onova – \"not this one, but that one\" (compare Spanish sino), while ama is often used to provide extra information or an opinion – kazah go, ama sgreshih – \"I said it, but I was wrong\". Meanwhile, a provides contrast between two situations, and in some sentences can even be translated as \"although\", \"while\" or even \"and\" – az rabotya, a toy blee – \"I'm working, and he's daydreaming\".",
"title": "Grammar"
},
{
"paragraph_id": 81,
"text": "Very often, different words can be used to alter the emphasis of a sentence – e.g. while pusha, no ne tryabva and pusha, a ne tryabva both mean \"I smoke, but I shouldn't\", the first sounds more like a statement of fact (\"...but I mustn't\"), while the second feels more like a judgement (\"...but I oughtn't\"). Similarly, az ne iskam, ama toy iska and az ne iskam, a toy iska both mean \"I don't want to, but he does\", however the first emphasizes the fact that he wants to, while the second emphasizes the wanting rather than the person.",
"title": "Grammar"
},
{
"paragraph_id": 82,
"text": "Ala is interesting in that, while it feels archaic, it is often used in poetry and frequently in children's stories, since it has quite a moral/ominous feel to it.",
"title": "Grammar"
},
{
"paragraph_id": 83,
"text": "Some common expressions use these words, and some can be used alone as interjections:",
"title": "Grammar"
},
{
"paragraph_id": 84,
"text": "Bulgarian has several abstract particles which are used to strengthen a statement. These have no precise translation in English. The particles are strictly informal and can even be considered rude by some people and in some situations. They are mostly used at the end of questions or instructions.",
"title": "Grammar"
},
{
"paragraph_id": 85,
"text": "These are \"tagged\" on to the beginning or end of a sentence to express the mood of the speaker in relation to the situation. They are mostly interrogative or slightly imperative in nature. There is no change in the grammatical mood when these are used (although they may be expressed through different grammatical moods in other languages).",
"title": "Grammar"
},
{
"paragraph_id": 86,
"text": "These express intent or desire, perhaps even pleading. They can be seen as a sort of cohortative side to the language. (Since they can be used by themselves, they could even be considered as verbs in their own right.) They are also highly informal.",
"title": "Grammar"
},
{
"paragraph_id": 87,
"text": "These particles can be combined with the vocative particles for greater effect, e.g. ya da vidya, be (let me see), or even exclusively in combinations with them, with no other elements, e.g. hayde, de! (come on!); nedey, de! (I told you not to!).",
"title": "Grammar"
},
{
"paragraph_id": 88,
"text": "Bulgarian has several pronouns of quality which have no direct parallels in English – kakav (what sort of); takuv (this sort of); onakuv (that sort of – colloq.); nyakakav (some sort of); nikakav (no sort of); vsyakakav (every sort of); and the relative pronoun kakavto (the sort of ... that ... ). The adjective ednakuv (\"the same\") derives from the same radical.",
"title": "Grammar"
},
{
"paragraph_id": 89,
"text": "Example phrases include:",
"title": "Grammar"
},
{
"paragraph_id": 90,
"text": "An interesting phenomenon is that these can be strung along one after another in quite long constructions, e.g.",
"title": "Grammar"
},
{
"paragraph_id": 91,
"text": "An extreme, albeit colloquial, example with almost no intrinsic lexical meaning – yet which is meaningful to the Bulgarian ear – would be :",
"title": "Grammar"
},
{
"paragraph_id": 92,
"text": "The subject of the sentence is simply the pronoun \"taya\" (lit. \"this one here\"; colloq. \"she\").",
"title": "Grammar"
},
{
"paragraph_id": 93,
"text": "Another interesting phenomenon that is observed in colloquial speech is the use of takova (neuter of takyv) not only as a substitute for an adjective, but also as a substitute for a verb. In that case the base form takova is used as the third person singular in the present indicative and all other forms are formed by analogy to other verbs in the language. Sometimes the \"verb\" may even acquire a derivational prefix that changes its meaning. Examples:",
"title": "Grammar"
},
{
"paragraph_id": 94,
"text": "Another use of takova in colloquial speech is the word takovata, which can be used as a substitution for a noun, but also, if the speaker does not remember or is not sure how to say something, they might say takovata and then pause to think about it:",
"title": "Grammar"
},
{
"paragraph_id": 95,
"text": "As a result of this versatility, the word takova can readily be used as a euphemism for taboo subjects. It is commonly used to substitute, for example, words relating to reproductive organs or sexual acts:",
"title": "Grammar"
},
{
"paragraph_id": 96,
"text": "Similar \"meaningless\" expressions are extremely common in spoken Bulgarian, especially when the speaker is finding it difficult to describe or express something.",
"title": "Grammar"
},
{
"paragraph_id": 97,
"text": "Bulgarian employs clitic doubling, mostly for emphatic purposes. For example, the following constructions are common in colloquial Bulgarian:",
"title": "Syntax"
},
{
"paragraph_id": 98,
"text": "The phenomenon is practically obligatory in the spoken language in the case of inversion signalling information structure (in writing, clitic doubling may be skipped in such instances, with a somewhat bookish effect):",
"title": "Syntax"
},
{
"paragraph_id": 99,
"text": "Sometimes, the doubling signals syntactic relations, thus:",
"title": "Syntax"
},
{
"paragraph_id": 100,
"text": "This is contrasted with:",
"title": "Syntax"
},
{
"paragraph_id": 101,
"text": "In this case, clitic doubling can be a colloquial alternative of the more formal or bookish passive voice, which would be constructed as follows:",
"title": "Syntax"
},
{
"paragraph_id": 102,
"text": "Clitic doubling is also fully obligatory, both in the spoken and in the written norm, in clauses including several special expressions that use the short accusative and dative pronouns such as \"играе ми се\" (I feel like playing), студено ми е (I am cold), and боли ме ръката (my arm hurts):",
"title": "Syntax"
},
{
"paragraph_id": 103,
"text": "Except the above examples, clitic doubling is considered inappropriate in a formal context.",
"title": "Syntax"
},
{
"paragraph_id": 104,
"text": "Most of the vocabulary of modern Bulgarian consists of terms inherited from Proto-Slavic and local Bulgarian innovations and formations of those through the mediation of Old and Middle Bulgarian. The native terms in Bulgarian account for 70% to 80% of the lexicon.",
"title": "Vocabulary"
},
{
"paragraph_id": 105,
"text": "The remaining 25% to 30% are loanwords from a number of languages, as well as derivations of such words. Bulgarian adopted also a few words of Thracian and Bulgar origin. The languages which have contributed most to Bulgarian as a way of foreign vocabulary borrowings are:",
"title": "Vocabulary"
},
{
"paragraph_id": 106,
"text": "The classical languages Latin and Greek are the source of many words, used mostly in international terminology. Many Latin terms entered Bulgarian during the time when present-day Bulgaria was part of the Roman Empire and also in the later centuries through Romanian, Aromanian, and Megleno-Romanian during Bulgarian Empires. The loanwords of Greek origin in Bulgarian are a product of the influence of the liturgical language of the Orthodox Church. Many of the numerous loanwords from another Turkic language, Ottoman Turkish and, via Ottoman Turkish, from Arabic were adopted into Bulgarian during the long period of Ottoman rule, but have been replaced with native Bulgarian terms. Furthermore, after the independence of Bulgaria from the Ottoman Empire in 1878, Bulgarian intellectuals imported many French language vocabulary. In addition, both specialized (usually coming from the field of science) and commonplace English words (notably abstract, commodity/service-related or technical terms) have also penetrated Bulgarian since the second half of the 20th century, especially since 1989. A noteworthy portion of this English-derived terminology has attained some unique features in the process of its introduction to native speakers, and this has resulted in peculiar derivations that set the newly formed loanwords apart from the original words (mainly in pronunciation), although many loanwords are completely identical to the source words. A growing number of international neologisms are also being widely adopted, causing controversy between younger generations who, in general, are raised in the era of digital globalization, and the older, more conservative educated purists.",
"title": "Vocabulary"
},
{
"paragraph_id": 107,
"text": "Article 1 of the Universal Declaration of Human Rights in Bulgarian:",
"title": "Sample text"
},
{
"paragraph_id": 108,
"text": "The romanization of the text into Latin alphabet:",
"title": "Sample text"
},
{
"paragraph_id": 109,
"text": "Bulgarian pronunciation transliterated in broad IPA:",
"title": "Sample text"
},
{
"paragraph_id": 110,
"text": "Article 1 of the Universal Declaration of Human Rights in English:",
"title": "Sample text"
},
{
"paragraph_id": 111,
"text": "Linguistic reports",
"title": "External links"
},
{
"paragraph_id": 112,
"text": "Dictionaries",
"title": "External links"
},
{
"paragraph_id": 113,
"text": "Courses",
"title": "External links"
},
{
"paragraph_id": 114,
"text": "",
"title": "External links"
}
] | Bulgarian is an Eastern South Slavic language spoken in Southeast Europe, primarily in Bulgaria. It is the language of the Bulgarians. Along with the closely related Macedonian language, it is a member of the Balkan sprachbund and South Slavic dialect continuum of the Indo-European language family. The two languages have several characteristics that set them apart from all other Slavic languages, including the elimination of case declension, the development of a suffixed definite article, and the lack of a verb infinitive. They retain and have further developed the Proto-Slavic verb system. One such major development is the innovation of evidential verb forms to encode for the source of information: witnessed, inferred, or reported. It is the official language of Bulgaria, and since 2007 has been among the official languages of the European Union. It is also spoken by the Bulgarian historical communities in North Macedonia, Ukraine, Moldova, Serbia, Romania, Hungary, Albania and Greece. | 2001-09-14T20:01:51Z | 2023-12-29T18:39:47Z | [
"Template:Respell",
"Template:Sigfig",
"Template:See also",
"Template:Reflist",
"Template:Sister project links",
"Template:Languages of Bulgaria",
"Template:Use British English",
"Template:Infobox language",
"Template:Transl",
"Template:IPA",
"Template:IPAslink",
"Template:Original research",
"Template:Webarchive",
"Template:Authority control",
"Template:Circa",
"Template:Citation needed",
"Template:Further",
"Template:Lang",
"Template:Cite journal",
"Template:Citation",
"Template:Bulgarian dialects",
"Template:Use dmy dates",
"Template:IPA-bg",
"Template:Main",
"Template:Fix",
"Template:Typo help inline",
"Template:Cite book",
"Template:Cite web",
"Template:Distinguish",
"Template:Cite report",
"Template:Contains special characters",
"Template:IPAc-en",
"Template:Lang-bg",
"Template:Nbsp",
"Template:Clarify",
"Template:Cite EB1911",
"Template:Short description",
"Template:Bar box",
"Template:South Slavic languages sidebar",
"Template:Bulgarian language",
"Template:Slavic languages",
"Template:Bulgaria topics",
"Template:More citations needed"
] | https://en.wikipedia.org/wiki/Bulgarian_language |
4,153 | Bipyramid | A (symmetric) n-gonal bipyramid or dipyramid is a polyhedron formed by joining an n-gonal pyramid and its mirror image base-to-base. An n-gonal bipyramid has 2n triangle faces, 3n edges, and 2 + n vertices.
The "n-gonal" in the name of a bipyramid does not refer to a face but to the internal polygon base, lying in the mirror plane that connects the two pyramid halves. (If it were a face, then each of its edges would connect three faces instead of two.)
A "regular" bipyramid has a regular polygon base. It is usually implied to be also a right bipyramid.
A right bipyramid has its two apices right above and right below the center or the centroid of its polygon base.
A "regular" right (symmetric) n-gonal bipyramid has Schläfli symbol { } + {n}.
A right (symmetric) bipyramid has Schläfli symbol { } + P, for polygon base P.
The "regular" right (thus face-transitive) n-gonal bipyramid with regular vertices is the dual of the n-gonal uniform (thus right) prism, and has congruent isosceles triangle faces.
A "regular" right (symmetric) n-gonal bipyramid can be projected on a sphere or globe as a "regular" right (symmetric) n-gonal spherical bipyramid: n equally spaced lines of longitude going from pole to pole, and an equator line bisecting them.
Only three kinds of bipyramids can have all edges of the same length (which implies that all faces are equilateral triangles, and thus the bipyramid is a deltahedron): the "regular" right (symmetric) triangular, tetragonal, and pentagonal bipyramids. The tetragonal or square bipyramid with same length edges, or regular octahedron, counts among the Platonic solids; the triangular and pentagonal bipyramids with same length edges count among the Johnson solids (J12 and J13).
A "regular" right (symmetric) n-gonal bipyramid has dihedral symmetry group Dnh, of order 4n, except in the case of a regular octahedron, which has the larger octahedral symmetry group Oh, of order 48, which has three versions of D4h as subgroups. The rotation group is Dn, of order 2n, except in the case of a regular octahedron, which has the larger rotation group O, of order 24, which has three versions of D4 as subgroups.
Note: Every "regular" right (symmetric) n-gonal bipyramid has the same (dihedral) symmetry group as the dual-uniform n-gonal bipyramid, for n ≠ 4.
The 4n triangle faces of a "regular" right (symmetric) 2n-gonal bipyramid, projected as the 4n spherical triangle faces of a "regular" right (symmetric) 2n-gonal spherical bipyramid, represent the fundamental domains of dihedral symmetry in three dimensions: Dnh, [n,2], (*n22), of order 4n. These domains can be shown as alternately colored spherical triangles:
An n-gonal (symmetric) bipyramid can be seen as the Kleetope of the "corresponding" n-gonal dihedron.
Volume of a (symmetric) bipyramid: V = (2/3)Bh,
where B is the area of the base and h the height from the base plane to any apex.
This works for any shape of the base, and for any location of the apices, provided that h is measured as the perpendicular distance from the base plane to any apex. Hence:
Volume of a (symmetric) bipyramid whose base is a regular n-sided polygon with side length s and whose height is h: V = ns²h / (6 tan(π/n)).
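As a cross-check on the two formulas just given (a standard derivation, not taken from the source), the regular-polygon case follows from the general one by substituting the area of a regular n-gon:

```latex
V = 2\cdot\tfrac{1}{3}Bh = \tfrac{2}{3}Bh,
\qquad
B = \frac{n s^{2}}{4\tan(\pi/n)}
\quad\Longrightarrow\quad
V = \frac{n s^{2} h}{6\tan(\pi/n)}.
```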
Non-right bipyramids are called oblique bipyramids.
A concave bipyramid has a concave polygon base.
An asymmetric right bipyramid joins two right pyramids with congruent bases but unequal heights, base-to-base.
An inverted right bipyramid joins two right pyramids with congruent bases but unequal heights, base-to-base, but on the same side of their common base.
The dual of an asymmetric/inverted right n-gonal bipyramid is an n-gonal frustum.
A "regular" asymmetric/inverted right n-gonal bipyramid has symmetry group Cnv, of order 2n.
An "isotoxal" right (symmetric) di-n-gonal bipyramid is a right (symmetric) 2n-gonal bipyramid with an isotoxal flat polygon base: its 2n basal vertices are coplanar, but alternate in two radii.
All its faces are congruent scalene triangles, and it is isohedral. It can be seen as another type of a right "symmetric" di-n-gonal scalenohedron, with an isotoxal flat polygon base.
An "isotoxal" right (symmetric) di-n-gonal bipyramid has n two-fold rotation axes through opposite basal vertices, n reflection planes through opposite apical edges, an n-fold rotation axis through apices, a reflection plane through base, and an n-fold rotation-reflection axis through apices, representing symmetry group Dnh, [n,2], (*22n), of order 4n. (The reflection about the base plane corresponds to the 0° rotation-reflection. If n is even, then there is an inversion symmetry about the center, corresponding to the 180° rotation-reflection.)
Example with 2n = 2×3:
Example with 2n = 2×4:
Note: For at most two particular values of zA = |zA'|, the faces of such a scalene triangle bipyramid may be isosceles.
Double example:
In crystallography, "isotoxal" right (symmetric) "didigonal" (8-faced), ditrigonal (12-faced), ditetragonal (16-faced), and dihexagonal (24-faced) bipyramids exist.
A "regular" right "symmetric" di-n-gonal scalenohedron is defined by a regular zigzag skew 2n-gon base, two symmetric apices right above and right below the base center, and triangle faces connecting each basal edge to each apex.
It has two apices and 2n basal vertices, 4n faces, and 6n edges; it is topologically identical to a 2n-gonal bipyramid, but its 2n basal vertices alternate in two rings above and below the center.
All its faces are congruent scalene triangles, and it is isohedral. It can be seen as another type of a right "symmetric" di-n-gonal bipyramid, with a regular zigzag skew polygon base.
A "regular" right "symmetric" di-n-gonal scalenohedron has n two-fold rotation axes through opposite basal mid-edges, n reflection planes through opposite apical edges, an n-fold rotation axis through apices, and a 2n-fold rotation-reflection axis through apices (about which 1n rotations-reflections globally preserve the solid), representing symmetry group Dnv = Dnd, [2,2n], (2*n), of order 4n. (If n is odd, then there is an inversion symmetry about the center, corresponding to the 180° rotation-reflection.)
Example with 2n = 2×3:
Example with 2n = 2×2:
Note: For at most two particular values of zA = |zA'|, the faces of such a scalenohedron may be isosceles.
Double example:
In crystallography, "regular" right "symmetric" "didigonal" (8-faced) and ditrigonal (12-faced) scalenohedra exist.
The smallest geometric scalenohedra have eight faces, and are topologically identical to the regular octahedron. In this case (2n = 2×2), in crystallography, a "regular" right "symmetric" "didigonal" (8-faced) scalenohedron is called a tetragonal scalenohedron.
Let us temporarily focus on the "regular" right "symmetric" 8-faced scalenohedra with h = r, i.e.
Their two apices can be represented as A, A' and their four basal vertices as U, U', V, V':
where z is a parameter between 0 and 1. At z = 0, it is a regular octahedron; at z = 1, it has four pairs of coplanar faces, and merging these into four congruent isosceles triangles makes it a disphenoid; for z > 1, it is concave.
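A short computation makes the z-parametrisation concrete. The coordinates below are an assumption (the source elides them): apices on the vertical axis, one basal pair raised to height z and the other lowered to −z, which reproduces the behaviour described above at z = 0.

```python
# Sketch of the 8-faced (2n = 2x2) scalenohedron family.
from itertools import combinations
from math import dist

def vertices(z):
    A, A2 = (0, 0, 1), (0, 0, -1)    # the two apices
    U, U2 = (1, 0, z), (-1, 0, z)    # one basal pair, raised
    V, V2 = (0, 1, -z), (0, -1, -z)  # the other basal pair, lowered
    return A, A2, U, U2, V, V2

def face_edges(z):
    A, _, U, _, V, _ = vertices(z)
    # One representative face A-U-V; by symmetry all faces are congruent.
    return sorted(round(dist(p, q), 6) for p, q in combinations((A, U, V), 2))

print(face_edges(0.0))  # three equal edges: the regular octahedron
print(face_edges(0.5))  # three distinct edges: scalene faces
```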
Note: If the 2n-gon base is both isotoxal in-out and zigzag skew, then not all faces of the "isotoxal" right "symmetric" scalenohedron are congruent.
Example with five different edge lengths:
Note: For some particular values of zA = |zA'|, half the faces of such a scalenohedron may be isosceles or equilateral.
Example with three different edge lengths:
A self-intersecting or star bipyramid has a star polygon base.
A "regular" right symmetric star bipyramid is defined by a regular star polygon base, two symmetric apices right above and right below the base center, and thus one-to-one symmetric triangle faces connecting each basal edge to each apex.
A "regular" right symmetric star bipyramid has congruent isosceles triangle faces, and is isohedral.
Note: For at most one particular value of zA = |zA'|, the faces of such a "regular" star bipyramid may be equilateral.
A p/q-bipyramid has Coxeter diagram .
An "isotoxal" right symmetric 2p/q-gonal star bipyramid is defined by an isotoxal in-out star 2p/q-gon base, two symmetric apices right above and right below the base center, and thus one-to-one symmetric triangle faces connecting each basal edge to each apex.
An "isotoxal" right symmetric 2p/q-gonal star bipyramid has congruent scalene triangle faces, and is isohedral. It can be seen as another type of a 2p/q-gonal right "symmetric" star scalenohedron, with an isotoxal in-out star polygon base.
Note: For at most two particular values of zA = |zA'|, the faces of such a scalene triangle star bipyramid may be isosceles.
A "regular" right "symmetric" 2p/q-gonal star scalenohedron is defined by a regular zigzag skew star 2p/q-gon base, two symmetric apices right above and right below the base center, and triangle faces connecting each basal edge to each apex.
A "regular" right "symmetric" 2p/q-gonal star scalenohedron has congruent scalene triangle faces, and is isohedral. It can be seen as another type of a right "symmetric" 2p/q-gonal star bipyramid, with a regular zigzag skew star polygon base.
Note: For at most two particular values of zA = |zA'|, the faces of such a star scalenohedron may be isosceles.
Note: If the star 2p/q-gon base is both isotoxal in-out and zigzag skew, then not all faces of the "isotoxal" right "symmetric" star scalenohedron are congruent.
Note: For some particular values of zA = |zA'|, half the faces of such a star scalenohedron may be isosceles or equilateral.
Example with four different edge lengths:
Example with three different edge lengths:
The dual of the rectification of each convex regular 4-polytope is a cell-transitive 4-polytope with bipyramidal cells. In the following, the apex vertex of the bipyramid is A and an equator vertex is E. The distance between adjacent vertices on the equator is EE = 1, the apex-to-equator edge length is AE, and the distance between the apices is AA. The bipyramid 4-polytope will have VA vertices where the apices of NA bipyramids meet. It will have VE vertices where the type E vertices of NE bipyramids meet.
As cells must fit around an edge,
In general, a bipyramid can be seen as an n-polytope constructed with an (n − 1)-polytope in a hyperplane together with two points in opposite directions at equal perpendicular distances from the hyperplane. If the (n − 1)-polytope is a regular polytope, it will have identical pyramidal facets.
A 2-dimensional ("regular") right symmetric (digonal) bipyramid is formed by joining two congruent isosceles triangles base-to-base; its outline is a rhombus, { } + { }.
A polyhedral bipyramid is a 4-polytope with a polyhedron base, and an apex point.
An example is the 16-cell, which is an octahedral bipyramid, { } + {3,4}, and more generally an n-orthoplex is an (n − 1)-orthoplex bipyramid, { } + {3,4}.
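The orthoplex case is easy to verify numerically. The sketch below (illustrative code; the construction is just the ±eᵢ vertex description of the orthoplex) splits the 16-cell's vertices into an octahedral base and two apices and confirms that every apex-to-base edge matches the base edge length:

```python
# The n-orthoplex as a bipyramid over the (n-1)-orthoplex.
from math import dist

def orthoplex_vertices(n):
    verts = []
    for i in range(n):
        e = [0.0] * n
        e[i] = 1.0
        verts.append(tuple(e))
        verts.append(tuple(-x for x in e))
    return verts

cells16 = orthoplex_vertices(4)                  # 8 vertices of the 16-cell
apices = [v for v in cells16 if v[3] != 0.0]     # +-e4: the two apex points
base = [v for v in cells16 if v[3] == 0.0]       # octahedron in x4 = 0
edge_lengths = {round(dist(a, b), 6) for a in apices for b in base}
print(edge_lengths)                              # {1.414214}: all edges sqrt(2)
```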
Other bipyramids include the tetrahedral bipyramid, { } + {3,3}, the icosahedral bipyramid, { } + {3,5}, and the dodecahedral bipyramid, { } + {5,3}; the first two have all regular cells and are also Blind polytopes. | [
{
"paragraph_id": 0,
"text": "A (symmetric) n-gonal bipyramid or dipyramid is a polyhedron formed by joining an n-gonal pyramid and its mirror image base-to-base. An n-gonal bipyramid has 2n triangle faces, 3n edges, and 2 + n vertices.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The \"n-gonal\" in the name of a bipyramid does not refer to a face but to the internal polygon base, lying in the mirror plane that connects the two pyramid halves. (If it were a face, then each of its edges would connect three faces instead of two.)",
"title": ""
},
{
"paragraph_id": 2,
"text": "A \"regular\" bipyramid has a regular polygon base. It is usually implied to be also a right bipyramid.",
"title": "\"Regular\", right bipyramids"
},
{
"paragraph_id": 3,
"text": "A right bipyramid has its two apices right above and right below the center or the centroid of its polygon base.",
"title": "\"Regular\", right bipyramids"
},
{
"paragraph_id": 4,
"text": "A \"regular\" right (symmetric) n-gonal bipyramid has Schläfli symbol { } + {n}.",
"title": "\"Regular\", right bipyramids"
},
{
"paragraph_id": 5,
"text": "A right (symmetric) bipyramid has Schläfli symbol { } + P, for polygon base P.",
"title": "\"Regular\", right bipyramids"
},
{
"paragraph_id": 6,
"text": "The \"regular\" right (thus face-transitive) n-gonal bipyramid with regular vertices is the dual of the n-gonal uniform (thus right) prism, and has congruent isosceles triangle faces.",
"title": "\"Regular\", right bipyramids"
},
{
"paragraph_id": 7,
"text": "A \"regular\" right (symmetric) n-gonal bipyramid can be projected on a sphere or globe as a \"regular\" right (symmetric) n-gonal spherical bipyramid: n equally spaced lines of longitude going from pole to pole, and an equator line bisecting them.",
"title": "\"Regular\", right bipyramids"
},
{
"paragraph_id": 8,
"text": "Only three kinds of bipyramids can have all edges of the same length (which implies that all faces are equilateral triangles, and thus the bipyramid is a deltahedron): the \"regular\" right (symmetric) triangular, tetragonal, and pentagonal bipyramids. The tetragonal or square bipyramid with same length edges, or regular octahedron, counts among the Platonic solids; the triangular and pentagonal bipyramids with same length edges count among the Johnson solids (J12 and J13).",
"title": "Equilateral triangle bipyramids"
},
{
"paragraph_id": 9,
"text": "A \"regular\" right (symmetric) n-gonal bipyramid has dihedral symmetry group Dnh, of order 4n, except in the case of a regular octahedron, which has the larger octahedral symmetry group Oh, of order 48, which has three versions of D4h as subgroups. The rotation group is Dn, of order 2n, except in the case of a regular octahedron, which has the larger rotation group O, of order 24, which has three versions of D4 as subgroups.",
"title": "Kaleidoscopic symmetry"
},
{
"paragraph_id": 10,
"text": "Note: Every \"regular\" right (symmetric) n-gonal bipyramid has the same (dihedral) symmetry group as the dual-uniform n-gonal bipyramid, for n ≠ 4.",
"title": "Kaleidoscopic symmetry"
},
{
"paragraph_id": 11,
"text": "The 4n triangle faces of a \"regular\" right (symmetric) 2n-gonal bipyramid, projected as the 4n spherical triangle faces of a \"regular\" right (symmetric) 2n-gonal spherical bipyramid, represent the fundamental domains of dihedral symmetry in three dimensions: Dnh, [n,2], (*n22), of order 4n. These domains can be shown as alternately colored spherical triangles:",
"title": "Kaleidoscopic symmetry"
},
{
"paragraph_id": 12,
"text": "An n-gonal (symmetric) bipyramid can be seen as the Kleetope of the \"corresponding\" n-gonal dihedron.",
"title": "Kaleidoscopic symmetry"
},
{
"paragraph_id": 13,
"text": "Volume of a (symmetric) bipyramid:",
"title": "Volume"
},
{
"paragraph_id": 14,
"text": "where B is the area of the base and h the height from the base plane to any apex.",
"title": "Volume"
},
{
"paragraph_id": 15,
"text": "This works for any shape of the base, and for any location of the apices, provided that h is measured as the perpendicular distance from the base plane to any apex. Hence:",
"title": "Volume"
},
{
"paragraph_id": 16,
"text": "Volume of a (symmetric) bipyramid whose base is a regular n-sided polygon with side length s and whose height is h:",
"title": "Volume"
},
{
"paragraph_id": 17,
"text": "Non-right bipyramids are called oblique bipyramids.",
"title": "Oblique bipyramids"
},
{
"paragraph_id": 18,
"text": "A concave bipyramid has a concave polygon base.",
"title": "Concave bipyramids"
},
{
"paragraph_id": 19,
"text": "",
"title": "Concave bipyramids"
},
{
"paragraph_id": 20,
"text": "An asymmetric right bipyramid joins two right pyramids with congruent bases but unequal heights, base-to-base.",
"title": "Asymmetric/inverted right bipyramids"
},
{
"paragraph_id": 21,
"text": "An inverted right bipyramid joins two right pyramids with congruent bases but unequal heights, base-to-base, but on the same side of their common base.",
"title": "Asymmetric/inverted right bipyramids"
},
{
"paragraph_id": 22,
"text": "The dual of an asymmetric/inverted right n-gonal bipyramid is an n-gonal frustum.",
"title": "Asymmetric/inverted right bipyramids"
},
{
"paragraph_id": 23,
"text": "A \"regular\" asymmetric/inverted right n-gonal bipyramid has symmetry group Cnv, of order 2n.",
"title": "Asymmetric/inverted right bipyramids"
},
{
"paragraph_id": 24,
"text": "An \"isotoxal\" right (symmetric) di-n-gonal bipyramid is a right (symmetric) 2n-gonal bipyramid with an isotoxal flat polygon base: its 2n basal vertices are coplanar, but alternate in two radii.",
"title": "Scalene triangle bipyramids"
},
{
"paragraph_id": 25,
"text": "All its faces are congruent scalene triangles, and it is isohedral. It can be seen as another type of a right \"symmetric\" di-n-gonal scalenohedron, with an isotoxal flat polygon base.",
"title": "Scalene triangle bipyramids"
},
{
"paragraph_id": 26,
"text": "An \"isotoxal\" right (symmetric) di-n-gonal bipyramid has n two-fold rotation axes through opposite basal vertices, n reflection planes through opposite apical edges, an n-fold rotation axis through apices, a reflection plane through base, and an n-fold rotation-reflection axis through apices, representing symmetry group Dnh, [n,2], (*22n), of order 4n. (The reflection about the base plane corresponds to the 0° rotation-reflection. If n is even, then there is an inversion symmetry about the center, corresponding to the 180° rotation-reflection.)",
"title": "Scalene triangle bipyramids"
},
{
"paragraph_id": 27,
"text": "Example with 2n = 2×3:",
"title": "Scalene triangle bipyramids"
},
{
"paragraph_id": 28,
"text": "Example with 2n = 2×4:",
"title": "Scalene triangle bipyramids"
},
{
"paragraph_id": 29,
"text": "Note: For at most two particular values of z A = | z A ′ | , {\\displaystyle z_{A}=|z_{A'}|,} the faces of such a scalene triangle bipyramid may be isosceles.",
"title": "Scalene triangle bipyramids"
},
{
"paragraph_id": 30,
"text": "Double example:",
"title": "Scalene triangle bipyramids"
},
{
"paragraph_id": 31,
"text": "In crystallography, \"isotoxal\" right (symmetric) \"didigonal\" (8-faced), ditrigonal (12-faced), ditetragonal (16-faced), and dihexagonal (24-faced) bipyramids exist.",
"title": "Scalene triangle bipyramids"
},
{
"paragraph_id": 32,
"text": "A \"regular\" right \"symmetric\" di-n-gonal scalenohedron is defined by a regular zigzag skew 2n-gon base, two symmetric apices right above and right below the base center, and triangle faces connecting each basal edge to each apex.",
"title": "Scalenohedra"
},
{
"paragraph_id": 33,
"text": "It has two apices and 2n basal vertices, 4n faces, and 6n edges; it is topologically identical to a 2n-gonal bipyramid, but its 2n basal vertices alternate in two rings above and below the center.",
"title": "Scalenohedra"
},
{
"paragraph_id": 34,
"text": "All its faces are congruent scalene triangles, and it is isohedral. It can be seen as another type of a right \"symmetric\" di-n-gonal bipyramid, with a regular zigzag skew polygon base.",
"title": "Scalenohedra"
},
{
"paragraph_id": 35,
"text": "A \"regular\" right \"symmetric\" di-n-gonal scalenohedron has n two-fold rotation axes through opposite basal mid-edges, n reflection planes through opposite apical edges, an n-fold rotation axis through apices, and a 2n-fold rotation-reflection axis through apices (about which 1n rotations-reflections globally preserve the solid), representing symmetry group Dnv = Dnd, [2,2n], (2*n), of order 4n. (If n is odd, then there is an inversion symmetry about the center, corresponding to the 180° rotation-reflection.)",
"title": "Scalenohedra"
},
{
"paragraph_id": 36,
"text": "Example with 2n = 2×3:",
"title": "Scalenohedra"
},
{
"paragraph_id": 37,
"text": "Example with 2n = 2×2:",
"title": "Scalenohedra"
},
{
"paragraph_id": 38,
"text": "Note: For at most two particular values of z A = | z A ′ | , {\\displaystyle z_{A}=|z_{A'}|,} the faces of such a scalenohedron may be isosceles.",
"title": "Scalenohedra"
},
{
"paragraph_id": 39,
"text": "Double example:",
"title": "Scalenohedra"
},
{
"paragraph_id": 40,
"text": "In crystallography, \"regular\" right \"symmetric\" \"didigonal\" (8-faced) and ditrigonal (12-faced) scalenohedra exist.",
"title": "Scalenohedra"
},
{
"paragraph_id": 41,
"text": "The smallest geometric scalenohedra have eight faces, and are topologically identical to the regular octahedron. In this case (2n = 2×2), in crystallography, a \"regular\" right \"symmetric\" \"didigonal\" (8-faced) scalenohedron is called a tetragonal scalenohedron.",
"title": "Scalenohedra"
},
{
"paragraph_id": 42,
"text": "Let us temporarily focus on the \"regular\" right \"symmetric\" 8-faced scalenohedra with h = r, i.e.",
"title": "Scalenohedra"
},
{
"paragraph_id": 43,
"text": "Their two apices can be represented as A, A' and their four basal vertices as U, U', V, V':",
"title": "Scalenohedra"
},
{
"paragraph_id": 44,
"text": "where z is a parameter between 0 and 1. At z = 0, it is a regular octahedron; at z = 1, it has four pairs of coplanar faces, and merging these into four congruent isosceles triangles makes it a disphenoid; for z > 1, it is concave.",
"title": "Scalenohedra"
},
{
"paragraph_id": 45,
"text": "Note: If the 2n-gon base is both isotoxal in-out and zigzag skew, then not all faces of the \"isotoxal\" right \"symmetric\" scalenohedron are congruent.",
"title": "Scalenohedra"
},
{
"paragraph_id": 46,
"text": "Example with five different edge lengths:",
"title": "Scalenohedra"
},
{
"paragraph_id": 47,
"text": "Note: For some particular values of zA = |zA'|, half the faces of such a scalenohedron may be isosceles or equilateral.",
"title": "Scalenohedra"
},
{
"paragraph_id": 48,
"text": "Example with three different edge lengths:",
"title": "Scalenohedra"
},
{
"paragraph_id": 49,
"text": "A self-intersecting or star bipyramid has a star polygon base.",
"title": "\"Regular\" star bipyramids"
},
{
"paragraph_id": 50,
"text": "A \"regular\" right symmetric star bipyramid is defined by a regular star polygon base, two symmetric apices right above and right below the base center, and thus one-to-one symmetric triangle faces connecting each basal edge to each apex.",
"title": "\"Regular\" star bipyramids"
},
{
"paragraph_id": 51,
"text": "A \"regular\" right symmetric star bipyramid has congruent isosceles triangle faces, and is isohedral.",
"title": "\"Regular\" star bipyramids"
},
{
"paragraph_id": 52,
"text": "Note: For at most one particular value of z A = | z A ′ | , {\\displaystyle z_{A}=|z_{A'}|,} the faces of such a \"regular\" star bipyramid may be equilateral.",
"title": "\"Regular\" star bipyramids"
},
{
"paragraph_id": 53,
"text": "A p/q-bipyramid has Coxeter diagram .",
"title": "\"Regular\" star bipyramids"
},
{
"paragraph_id": 54,
"text": "An \"isotoxal\" right symmetric 2p/q-gonal star bipyramid is defined by an isotoxal in-out star 2p/q-gon base, two symmetric apices right above and right below the base center, and thus one-to-one symmetric triangle faces connecting each basal edge to each apex.",
"title": "Scalene triangle star bipyramids"
},
{
"paragraph_id": 55,
"text": "An \"isotoxal\" right symmetric 2p/q-gonal star bipyramid has congruent scalene triangle faces, and is isohedral. It can be seen as another type of a 2p/q-gonal right \"symmetric\" star scalenohedron, with an isotoxal in-out star polygon base.",
"title": "Scalene triangle star bipyramids"
},
{
"paragraph_id": 56,
"text": "Note: For at most two particular values of z A = | z A ′ | , {\\displaystyle z_{A}=|z_{A'}|,} the faces of such a scalene triangle star bipyramid may be isosceles.",
"title": "Scalene triangle star bipyramids"
},
{
"paragraph_id": 57,
"text": "A \"regular\" right \"symmetric\" 2p/q-gonal star scalenohedron is defined by a regular zigzag skew star 2p/q-gon base, two symmetric apices right above and right below the base center, and triangle faces connecting each basal edge to each apex.",
"title": "Star scalenohedra"
},
{
"paragraph_id": 58,
"text": "A \"regular\" right \"symmetric\" 2p/q-gonal star scalenohedron has congruent scalene triangle faces, and is isohedral. It can be seen as another type of a right \"symmetric\" 2p/q-gonal star bipyramid, with a regular zigzag skew star polygon base.",
"title": "Star scalenohedra"
},
{
"paragraph_id": 59,
"text": "Note: For at most two particular values of z A = | z A ′ | , {\\displaystyle z_{A}=|z_{A'}|,} the faces of such a star scalenohedron may be isosceles.",
"title": "Star scalenohedra"
},
{
"paragraph_id": 60,
"text": "Note: If the star 2p/q-gon base is both isotoxal in-out and zigzag skew, then not all faces of the \"isotoxal\" right \"symmetric\" star scalenohedron are congruent.",
"title": "Star scalenohedra"
},
{
"paragraph_id": 61,
"text": "Note: For some particular values of zA = |zA'|, half the faces of such a star scalenohedron may be isosceles or equilateral.",
"title": "Star scalenohedra"
},
{
"paragraph_id": 62,
"text": "Example with four different edge lengths:",
"title": "Star scalenohedra"
},
{
"paragraph_id": 63,
"text": "Example with three different edge lengths:",
"title": "Star scalenohedra"
},
{
"paragraph_id": 64,
"text": "The dual of the rectification of each convex regular 4-polytopes is a cell-transitive 4-polytope with bipyramidal cells. In the following, the apex vertex of the bipyramid is A and an equator vertex is E. The distance between adjacent vertices on the equator EE = 1, the apex to equator edge is AE and the distance between the apices is AA. The bipyramid 4-polytope will have VA vertices where the apices of NA bipyramids meet. It will have VE vertices where the type E vertices of NE bipyramids meet.",
"title": "4-polytopes with bipyramidal cells"
},
{
"paragraph_id": 65,
"text": "As cells must fit around an edge,",
"title": "4-polytopes with bipyramidal cells"
},
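The inequality that completed the preceding sentence is missing here. As a general convexity condition — a standard fact, not necessarily the source's exact formula — the dihedral angles of the bipyramidal cells meeting around an edge must sum to less than a full turn: with θAE and θEE denoting a cell's dihedral angles along an apex-to-equator edge and an equatorial edge, and NAE and NEE the numbers of cells meeting at such edges (notation assumed here, extending the NA/NE convention above), the constraints take the form NAE · θAE < 2π and NEE · θEE < 2π.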
{
"paragraph_id": 66,
"text": "In general, a bipyramid can be seen as an n-polytope constructed with a (n − 1)-polytope in a hyperplane with two points in opposite directions and equal perpendicular distances from the hyperplane. If the (n − 1)-polytope is a regular polytope, it will have identical pyramidal facets.",
"title": "Other dimensions"
},
{
"paragraph_id": 67,
"text": "A 2-dimensional (\"regular\") right symmetric (digonal) bipyramid is formed by joining two congruent isosceles triangles base-to-base; its outline is a rhombus, { } + { }.",
"title": "Other dimensions"
},
{
"paragraph_id": 68,
"text": "A polyhedral bipyramid is a 4-polytope with a polyhedron base, and an apex point.",
"title": "Other dimensions"
},
{
"paragraph_id": 69,
"text": "An example is the 16-cell, which is an octahedral bipyramid, { } + {3,4}, and more generally an n-orthoplex is an (n − 1)-orthoplex bipyramid, { } + {3,4}.",
"title": "Other dimensions"
},
{
"paragraph_id": 70,
"text": "Other bipyramids include the tetrahedral bipyramid, { } + {3,3}, icosahedral bipyramid, { } + {3,5}, and dodecahedral bipyramid, { }+{5,3}, the first two having all regular cells, they are also Blind polytopes.",
"title": "Other dimensions"
}
] | A (symmetric) n-gonal bipyramid or dipyramid is a polyhedron formed by joining an n-gonal pyramid and its mirror image base-to-base. An n-gonal bipyramid has 2n triangle faces, 3n edges, and 2 + n vertices. The "n-gonal" in the name of a bipyramid does not refer to a face but to the internal polygon base, lying in the mirror plane that connects the two pyramid halves. | 2002-02-25T15:43:11Z | 2023-11-21T16:27:22Z | [
"Template:Mvar",
"Template:Efn",
"Template:Cite web",
"Template:Cite book",
"Template:Short description",
"Template:Math",
"Template:Sfn",
"Template:Citation needed",
"Template:Overline",
"Template:Cite EB1911",
"Template:Redirect",
"Template:Infobox polyhedron",
"Template:Bipyramids",
"Template:Tmath",
"Template:Mathworld",
"Template:Polyhedron navigator",
"Template:Use dmy dates",
"Template:Nowrap",
"Template:CDD",
"Template:Reflist",
"Template:Notelist",
"Template:Commons category"
] | https://en.wikipedia.org/wiki/Bipyramid |
4,157 | Brown University | Brown University is a private Ivy League research university in Providence, Rhode Island. It is the seventh-oldest institution of higher education in the United States, founded in 1764 as the College in the English Colony of Rhode Island and Providence Plantations. One of nine colonial colleges chartered before the American Revolution, it was the first college in the United States to codify in its charter that admission and instruction of students was to be equal regardless of their religious affiliation.
The university is home to the oldest applied mathematics program in the United States, the oldest engineering program in the Ivy League, and the third-oldest medical program in New England. It was one of the early doctoral-granting U.S. institutions in the late 19th century, adding master's and doctoral studies in 1887. In 1969, it adopted its Open Curriculum after a period of student lobbying, which eliminated mandatory "general education" distribution requirements, made students "the architects of their own syllabus", and allowed them to take any course for a grade of satisfactory (Pass) or no-credit (Fail), which is unrecorded on external transcripts. In 1971, Brown's coordinate women's institution, Pembroke College, was fully merged into the university.
The university comprises the College, the Graduate School, Alpert Medical School, the School of Engineering, the School of Public Health and the School of Professional Studies. Its international programs are organized through the Watson Institute for International and Public Affairs, and it is academically affiliated with the Marine Biological Laboratory and the Rhode Island School of Design; with the latter, it offers undergraduate and graduate dual degree programs.
Brown's main campus is in the College Hill neighborhood of Providence, Rhode Island. The university is surrounded by a federally listed architectural district with a dense concentration of Colonial-era buildings. Benefit Street, which runs along the campus's western edge, has one of America's richest concentrations of 17th- and 18th-century architecture. Brown's undergraduate admissions are among the most selective in the country, with an overall acceptance rate of 5% for the class of 2026.
As of March 2022, 11 Nobel Prize winners have been affiliated with Brown as alumni, faculty, or researchers, as well as 1 Fields Medalist, 7 National Humanities Medalists and 11 National Medal of Science laureates. Other notable alumni include 27 Pulitzer Prize winners, 21 billionaires, 1 U.S. Supreme Court Chief Justice, 4 U.S. Secretaries of State, over 100 members of the United States Congress, 58 Rhodes Scholars, 22 MacArthur Genius Fellows, and 38 Olympic medalists.
In 1761, three residents of Newport, Rhode Island, drafted a petition to the colony's General Assembly:
That your Petitioners propose to open a literary institution or School for instructing young Gentlemen in the Languages, Mathematics, Geography & History, & such other branches of Knowledge as shall be desired. That for this End... it will be necessary... to erect a public Building or Buildings for the boarding of the youth & the Residence of the Professors.
The three petitioners were Ezra Stiles, pastor of Newport's Second Congregational Church and future president of Yale University; William Ellery Jr., future signer of the United States Declaration of Independence; and Josias Lyndon, future governor of the colony. Stiles and Ellery went on to co-author the college's charter two years later. The editor of Stiles's papers observes, "This draft of a petition connects itself with other evidence of Dr. Stiles's project for a Collegiate Institution in Rhode Island, before the charter of what became Brown University."
The Philadelphia Association of Baptist Churches was also interested in establishing a college in Rhode Island—home of the mother church of their denomination. At the time, the Baptists were unrepresented among the colonial colleges; the Congregationalists had Harvard and Yale, the Presbyterians had the College of New Jersey (later Princeton), and the Episcopalians had the College of William and Mary and King's College (later Columbia), while Philadelphia's University of Pennsylvania had been founded without direct association with any particular denomination. Isaac Backus, a historian of the New England Baptists and an inaugural trustee of Brown, wrote of the October 1762 resolution taken at Philadelphia:
The Philadelphia Association obtained such an acquaintance with our affairs, as to bring them to an apprehension that it was practicable and expedient to erect a college in the Colony of Rhode-Island, under the chief direction of the Baptists; ... Mr. James Manning, who took his first degree in New-Jersey college in September, 1762, was esteemed a suitable leader in this important work.
James Manning arrived at Newport in July 1763 and was introduced to Stiles, who agreed to write the charter for the college. Stiles' first draft was read to the General Assembly in August 1763, and rejected by Baptist members who worried that their denomination would be underrepresented in the College Board of Fellows. A revised charter written by Stiles and Ellery was adopted by the Rhode Island General Assembly on March 3, 1764, in East Greenwich.
In September 1764, the inaugural meeting of the corporation—the college's governing body—was held in Newport's Old Colony House. Governor Stephen Hopkins was chosen chancellor, former and future governor Samuel Ward vice chancellor, John Tillinghast treasurer, and Thomas Eyres secretary. The charter stipulated that the board of trustees should be composed of 22 Baptists, five Quakers, five Episcopalians, and four Congregationalists. Of the 12 Fellows, eight should be Baptists—including the college president—"and the rest indifferently of any or all Denominations."
At the time of its creation, Brown's charter was a uniquely progressive document. Other colleges had curricular strictures against opposing doctrines, while Brown's charter asserted, "Sectarian differences of opinions, shall not make any Part of the Public and Classical Instruction." The document additionally "recognized more broadly and fundamentally than any other [university charter] the principle of denominational cooperation." The oft-repeated statement that Brown's charter alone prohibited a religious test for College membership is inaccurate; other college charters were similarly liberal in that particular.
The college was founded as Rhode Island College, at the site of the First Baptist Church in Warren, Rhode Island. Manning was sworn in as the college's first president in 1765 and remained in the role until 1791. In 1766, the college authorized the Reverend Morgan Edwards to travel to Europe to "solicit Benefactions for this Institution". During his year-and-a-half stay in the British Isles, Edwards secured funding from benefactors including Thomas Penn and Benjamin Franklin.
In 1770, the college moved from Warren to Providence. To establish a campus, John and Moses Brown purchased a four-acre lot on the crest of College Hill on behalf of the school. The majority of the property fell within the bounds of the original home lot of Chad Brown, an ancestor of the Browns and one of the original proprietors of Providence Plantations. After the college was relocated to the city, work began on constructing its first building.
A building committee, organized by the corporation, developed plans for the college's first purpose-built edifice, finalizing a design on February 9, 1770. The subsequent structure, referred to as "The College Edifice" and later as University Hall, may have been modeled on Nassau Hall, built 14 years prior at the College of New Jersey. President Manning, an active member of the building process, was educated at Princeton and might have suggested that Brown's first building resemble that of his alma mater.
Nicholas Brown, John Brown, Joseph Brown, and Moses Brown were instrumental in moving the college to Providence, constructing its first building, and securing its endowment. Joseph became a professor of natural philosophy at the college; John served as its treasurer from 1775 to 1796; and Nicholas Sr.'s son, Nicholas Brown Jr., succeeded his uncle as treasurer, serving from 1796 to 1825.
On September 8, 1803, the corporation voted, "That the donation of $5,000, if made to this College within one Year from the late Commencement, shall entitle the donor to name the College." The following year, the appeal was answered by College Treasurer Nicholas Brown Jr. In a letter dated September 6, 1804, Brown committed "a donation of Five Thousand Dollars to Rhode Island College, to remain in perpetuity as a fund for the establishment of a Professorship of Oratory and Belles Letters." In recognition of the gift, the corporation on the same day voted, "That this College be called and known in all future time by the Name of Brown University." Over the years, the benefactions of Nicholas Brown Jr., totaled nearly $160,000 and included funds for building Hope College (1821–22) and Manning Hall (1834–35).
In 1904, the John Carter Brown Library was established as an independently funded research library on Brown's campus; the library's collection was founded on that of John Carter Brown, son of Nicholas Brown Jr.
The Brown family was involved in various business ventures in Rhode Island, and accrued wealth both directly and indirectly from the transatlantic slave trade. The family was divided on the issue of slavery. John Brown had defended slavery, while Moses and Nicholas Brown Jr. were fervent abolitionists.
In 2003, under the tenure of President Ruth Simmons, the university established a steering committee to investigate these ties of the university to slavery and recommend a strategy to address them.
With British vessels patrolling Narragansett Bay in the fall of 1776, the college library was moved out of Providence for safekeeping. During the subsequent American Revolutionary War, Brown's University Hall was used to house French and other revolutionary troops led by General George Washington and the Comte de Rochambeau as they waited to commence the march of 1781 that led to the Siege of Yorktown and the Battle of the Chesapeake, engagements celebrated as marking the defeat of the British and the end of the war. The building functioned as barracks and hospital from December 10, 1776, to April 20, 1780, and as a hospital for French troops from June 26, 1780, to May 27, 1782.
A number of Brown's founders and alumni played roles in the American Revolution and subsequent founding of the United States. Brown's first chancellor, Stephen Hopkins, served as a delegate to the Colonial Congress in Albany in 1754, and to the Continental Congress from 1774 to 1776. James Manning represented Rhode Island at the Congress of the Confederation, while concurrently serving as Brown's first president. Two of Brown's founders, William Ellery and Stephen Hopkins, signed the Declaration of Independence.
James Mitchell Varnum, who graduated from Brown with honors in 1769, served as one of General George Washington's Continental Army brigadier generals and later as major general in command of the entire Rhode Island militia. Varnum is noted as the founder and commander of the 1st Rhode Island Regiment, widely regarded as the first Black battalion in U.S. military history. David Howell, who graduated with an A.M. in 1769, served as a delegate to the Continental Congress from 1782 to 1785.
Nineteen individuals have served as presidents of the university since its founding in 1764. Since 2012, Christina Hull Paxson has served as president. Paxson had previously served as dean of Princeton University's School of Public and International Affairs and chair of Princeton's economics department. Paxson's immediate predecessor, Ruth Simmons, is noted as the first African American president of an Ivy League institution. Other presidents of note include the academic Vartan Gregorian and the philosopher and economist Francis Wayland.
In 1966, the first Group Independent Study Project (GISP) at Brown was formed, involving 80 students and 15 professors. The GISP was inspired by student-initiated experimental schools, especially San Francisco State College, and sought ways to "put students at the center of their education" and "teach students how to think rather than just teaching facts".
Members of the GISP, Ira Magaziner and Elliot Maxwell, published a paper of their findings titled "Draft of a Working Paper for Education at Brown University." The paper made proposals for a new curriculum, including interdisciplinary freshman-year courses that would introduce "modes of thought," with instruction from faculty from different disciplines, as well as an end to letter grades. The following year Magaziner began organizing the student body to press for the reforms, organizing discussions and protests.
In 1968, university president Ray Heffner established a Special Committee on Curricular Philosophy. Composed of administrators, the committee was tasked with developing specific reforms and producing recommendations. A report, produced by the committee, was presented to the faculty, which voted the New Curriculum into existence on May 7, 1969. Its key features included:
The Modes of Thought course was discontinued early on, but the other elements remain in place. In 2006, the reintroduction of plus/minus grading was proposed in response to concerns regarding grade inflation. The idea was rejected by the College Curriculum Council after canvassing alumni, faculty, and students, including the original authors of the Magaziner-Maxwell Report.
In 2003, then-university president Ruth Simmons launched a steering committee to research Brown's eighteenth-century ties to slavery. In October 2006, the committee released a report documenting its findings.
Titled "Slavery and Justice", the document detailed the ways in which the university benefited both directly and indirectly from the transatlantic slave trade and the labor of enslaved people. The report also included seven recommendations for how the university should address this legacy. Brown has since completed a number of these recommendations including the establishment of its Center for the Study of Slavery and Justice, the construction of its Slavery Memorial, and the funding of a $10 million permanent endowment for Providence Public Schools.
The Slavery and Justice report marked the first major effort by an American university to address its ties to slavery and prompted other institutions to undertake similar processes.
Brown's coat of arms was created in 1834. The prior year, president Francis Wayland had commissioned a committee to update the school's original seal to match the name the university had adopted in 1804. Central in the coat of arms is a white escutcheon divided into four sectors by a red cross. Within each sector of the coat of arms lies an open book. Above the shield is a crest consisting of the upper half of a sun in splendor among the clouds atop a red and white torse.
Brown is the largest institutional landowner in Providence, with properties on College Hill and in the Jewelry District. The university was built contemporaneously with the eighteenth- and nineteenth-century precincts surrounding it, making Brown's campus tightly integrated into Providence's urban fabric. Among the noted architects who have shaped Brown's campus are McKim, Mead & White, Philip Johnson, Rafael Viñoly, Diller Scofidio + Renfro, and Robert A. M. Stern.
Brown's main campus comprises 235 buildings and 143 acres (0.58 km²) in the East Side neighborhood of College Hill. The university's central campus sits on a 15-acre (6.1-hectare) block bounded by Waterman, Prospect, George, and Thayer Streets; newer buildings extend northward, eastward, and southward. Brown's core historic campus, constructed primarily between 1770 and 1926, is defined by three greens: the Front or Quiet Green, the Middle or College Green, and the Ruth J. Simmons Quadrangle (historically known as Lincoln Field). A brick and wrought-iron fence punctuated by decorative gates and arches traces the block's perimeter. This section of campus is primarily Georgian and Richardsonian Romanesque in its architectural character.
To the south of the central campus are academic buildings and residential quadrangles, including Wriston, Keeney, and Gregorian quadrangles. Immediately to the east of the campus core sit Sciences Park and Brown's School of Engineering. North of the central campus are performing and visual arts facilities, life sciences labs, and the Pembroke Campus, which houses both dormitories and academic buildings. Facing the western edge of the central campus sit two of Brown's seven libraries, the John Hay Library and the John D. Rockefeller Jr. Library.
The university's campus is contiguous with that of the Rhode Island School of Design, which is located immediately to Brown's west, along the slope of College Hill.
Built in 1901, the Van Wickle Gates are a set of wrought iron gates that stand at the western edge of Brown's campus. The larger main gate is flanked by two smaller side gates. At Convocation the central gate opens inward to admit the procession of new students; at Commencement, the gate opens outward for the procession of graduates. A Brown superstition holds that students who walk through the central gate a second time prematurely will not graduate, although walking backward is said to cancel the hex.
The John Hay Library is the second oldest library on campus. Opened in 1910, the library is named for John Hay (class of 1858), private secretary to Abraham Lincoln and Secretary of State under William McKinley and Theodore Roosevelt. The construction of the building was funded in large part by Hay's friend, Andrew Carnegie, who contributed half of the $300,000 cost of construction.
The John Hay Library serves as the repository of the university's archives, rare books and manuscripts, and special collections. Noteworthy among the latter are the Anne S. K. Brown Military Collection (described as "the foremost American collection of material devoted to the history and iconography of soldiers and soldiering"), the Harris Collection of American Poetry and Plays (described as "the largest and most comprehensive collection of its kind in any research library"), the Lownes Collection of the History of Science (described as "one of the three most important private collections of books of science in America"), and the papers of H. P. Lovecraft. The Hay Library is home to one of the broadest collections of incunabula in the Americas, one of Brown's two Shakespeare First Folios, the manuscript of George Orwell's Nineteen Eighty-Four, and three books bound in human skin.
Founded in 1846, the John Carter Brown Library is generally regarded as the world's leading collection of primary historical sources relating to the exploration and colonization of the Americas. While administered and funded separately from the university, the library has been owned by Brown and located on its campus since 1904.
The library contains the best preserved of the eleven surviving copies of the Bay Psalm Book—the earliest extant book printed in British North America and the most expensive printed book in the world. Other holdings include a Shakespeare First Folio and the world's largest collection of 16th century Mexican texts.
The exhibition galleries of the Haffenreffer Museum of Anthropology, Brown's teaching museum, are located in Manning Hall on the campus's main green. Its one million artifacts, available for research and educational purposes, are located at its Collections Research Center in Bristol, Rhode Island. The museum's goal is to inspire creative and critical thinking about culture by fostering an interdisciplinary understanding of the material world. It provides opportunities for faculty and students to work with collections and the public, teaching through objects and programs in classrooms and exhibitions. The museum sponsors lectures and events in all areas of anthropology and also runs an extensive program of outreach to local schools.
The Annmary Brown Memorial was constructed from 1903 to 1907 by the politician, Civil War veteran, and book collector General Rush Hawkins, as a mausoleum for his wife, Annmary Brown, a member of the Brown family. In addition to its crypt—the final repository for Brown and Hawkins—the Memorial includes works of art from Hawkins's private collection, including paintings by Angelica Kauffman, Peter Paul Rubens, Gilbert Stuart, Giovanni Battista Tiepolo, Benjamin West, and Eastman Johnson, among others. His collection of over 450 incunabula was relocated to the John Hay Library in 1990. Today the Memorial is home to Brown's Medieval Studies and Renaissance Studies programs.
The Walk, a landscaped pedestrian corridor, connects the Pembroke Campus to the main campus. It runs parallel to Thayer Street and serves as a primary axis of campus, extending from Ruth Simmons Quadrangle at its southern terminus to the Meeting Street entrance to the Pembroke Campus at its northern end. The Walk is bordered by departmental buildings as well as the Lindemann Performing Arts Center and the Granoff Center for the Creative Arts.
The corridor is home to public art including sculptures by Maya Lin and Tom Friedman.
The Women's College in Brown University, known as Pembroke College, was founded in October 1891. Upon its 1971 merger with the College of Brown University, Pembroke's campus was absorbed into the larger Brown campus. The Pembroke campus is bordered by Meeting, Brown, Bowen, and Thayer Streets and sits three blocks north of Brown's central campus. The campus is dominated by brick architecture, largely of the Georgian and Victorian styles. The west side of the quadrangle comprises Pembroke Hall (1897), Smith-Buonanno Hall (1907), and Metcalf Hall (1919), while the east side comprises Alumnae Hall (1927) and Miller Hall (1910). The quadrangle culminates on the north with Andrews Hall (1947).
East Campus, centered on Hope and Charlesfield streets, originally served as the campus of Bryant University. In 1969, as Bryant was preparing to relocate to Smithfield, Rhode Island, Brown purchased their Providence campus for $5 million. The transaction expanded the Brown campus by 10 acres (40,000 m²) and 26 buildings. In 1971, Brown renamed the area East Campus. Today, the area is largely used for dormitories.
Thayer Street runs through Brown's main campus. As a commercial corridor frequented by students, Thayer is comparable to Harvard Square or Berkeley's Telegraph Avenue. Wickenden Street, in the adjacent Fox Point neighborhood, is another commercial street similarly popular among students.
Built in 1925, Brown Stadium—the home of the school's football team—is located approximately a mile and a half northeast of the university's central campus. Marston Boathouse, the home of Brown's crew teams, lies on the Seekonk River, to the southeast of campus. Brown's sailing teams are based out of the Ted Turner Sailing Pavilion at the Edgewood Yacht Club in adjacent Cranston.
Since 2011, Brown's Warren Alpert Medical School has been located in Providence's historic Jewelry District, near the medical campus of Brown's teaching hospitals, Rhode Island Hospital and the Women and Infants Hospital of Rhode Island. Other university facilities, including molecular medicine labs and administrative offices, are likewise located in the area.
Brown's School of Public Health occupies a landmark modernist building along the Providence River. Other Brown properties include the 376-acre (1.52 km²) Mount Hope Grant in Bristol, Rhode Island, an important Native American site noted as a location of King Philip's War. Brown's Haffenreffer Museum of Anthropology Collection Research Center, particularly strong in Native American items, is located in the Mount Hope Grant.
Brown has committed to "minimize its energy use, reduce negative environmental impacts, and promote environmental stewardship." Since 2010, the university has required all new buildings to meet LEED silver standards. Between 2007 and 2018, Brown reduced its greenhouse gas emissions by 27 percent; the majority of this reduction is attributable to the university's Thermal Efficiency Project, which converted its central heating plant from a steam-powered system to a hot-water-powered system.
In 2020, Brown announced it had sold 90 percent of its fossil fuel investments as part of a broader divestment from direct investments and managed funds that focus on fossil fuels. In 2021, the university adopted the goal of reducing quantifiable campus emissions by 75 percent by 2025 and achieving carbon neutrality by 2040. Brown is a member of the Ivy Plus Sustainability Consortium, through which it has committed to best-practice sharing and the ongoing exchange of campus sustainability solutions along with other member institutions.
According to the A. W. Kuchler U.S. potential natural vegetation types, Brown would have a dominant vegetation type of Appalachian Oak (104) with a dominant vegetation form of Eastern Hardwood Forest (25).
Founded in 1764, the College is Brown's oldest school. About 7,200 undergraduate students are enrolled in the College, and 81 concentrations are offered. For the graduating class of 2020, the most popular concentrations were Computer Science, Economics, Biology, History, Applied Mathematics, International Relations, and Political Science. A quarter of Brown undergraduates complete more than one concentration before graduating. If the existing programs do not align with their intended curricular interests, undergraduates may design and pursue independent concentrations.
Around 35 percent of undergraduates pursue graduate or professional study immediately, 60 percent within 5 years, and 80 percent within 10 years. For the Class of 2009, 56 percent of all undergraduate alumni have since earned graduate degrees. Among undergraduate alumni who go on to receive graduate degrees, the most common degrees earned are J.D. (16%), M.D. (14%), M.A. (14%), M.Sc. (14%), and Ph.D. (11%). The most common institutions from which undergraduate alumni earn graduate degrees are Brown University, Columbia University, and Harvard University.
The highest fields of employment for undergraduate alumni ten years after graduation are education and higher education (15%), medicine (9%), business and finance (9%), law (8%), and computing and technology (7%).
Since its 1893 relocation to College Hill, the Rhode Island School of Design (RISD) has bordered Brown to its west. Since 1900, Brown and RISD students have been able to cross-register at the two institutions, with Brown students permitted to count as many as four RISD courses toward their Brown degree. The two institutions partner to provide various student-life services, and the two student bodies form a significant shared presence in the College Hill cultural scene.
After several years of discussion between the two institutions and several students pursuing dual degrees unofficially, Brown and RISD formally established a five-year dual degree program in 2007, with the first class matriculating in the fall of 2008. The Brown|RISD Dual Degree Program, among the most selective in the country, offered admission to 20 of the 725 applicants for the class entering in autumn 2020, for an acceptance rate of 2.7%. The program combines the complementary strengths of the two institutions, integrating studio art and design at RISD with Brown's academic offerings. Students are admitted to the Dual Degree Program for a course lasting five years and culminating in both the Bachelor of Arts (A.B.) or Bachelor of Science (Sc.B.) degree from Brown and the Bachelor of Fine Arts (B.F.A.) degree from RISD. Prospective students must apply to the two schools separately and be accepted by separate admissions committees. Their application must then be approved by a third Brown|RISD joint committee.
Admitted students spend the first year in residence at RISD completing its first-year Experimental and Foundation Studies curriculum while taking up to three Brown classes. Students spend their second year in residence at Brown, during which students take mainly Brown courses while starting on their RISD major requirements. In the third, fourth, and fifth years, students can elect to live at either school or off-campus, and course distribution is determined by the requirements of each student's unique combination of Brown concentration and RISD major. Program participants are noted for their creative and original approach to cross-disciplinary opportunities, combining, for example, industrial design with engineering, or anatomical illustration with human biology, or philosophy with sculpture, or architecture with urban studies. An annual "BRDD Exhibition" is a well-publicized and heavily attended event, drawing interest and attendees from the broader world of industry, design, the media, and the fine arts.
In 2020, the two schools announced the establishment of a new joint Master of Arts in Design Engineering program. Abbreviated as MADE, the program intends to combine RISD's programs in industrial design with Brown's programs in engineering. The program is administered through Brown's School of Engineering and RISD's Architecture and Design Division.
Brown's theatre and playwriting programs are among the best-regarded in the country. Six Brown graduates have received the Pulitzer Prize for Drama: Alfred Uhry '58, Lynn Nottage '86, Ayad Akhtar '93, Nilo Cruz '94, Quiara Alegría Hudes '04, and Jackie Sibblies Drury MFA '04. In American Theater magazine's 2009 ranking of the most-produced American plays, Brown graduates occupied four of the top five places—Peter Nachtrieb '97, Rachel Sheinkin '89, Sarah Ruhl '97, and Stephen Karam '02.
The undergraduate concentration encompasses programs in theatre history, performance theory, playwriting, dramaturgy, acting, directing, dance, speech, and technical production. Applications for doctoral and master's degree programs are made through the University Graduate School. Master's degrees in acting and directing are pursued in conjunction with the Brown/Trinity Rep MFA program, which partners with the Trinity Repertory Company, a local regional theatre.
Writing at Brown—fiction, non-fiction, poetry, playwriting, screenwriting, electronic writing, mixed media, and the undergraduate writing proficiency requirement—is catered for by various centers and degree programs, and a faculty that has long included nationally and internationally known authors. The undergraduate concentration in literary arts offers courses in fiction, poetry, screenwriting, literary hypermedia, and translation. Graduate programs include the fiction and poetry MFA writing programs in the literary arts department and the MFA playwriting program in the theatre arts and performance studies department. The non-fiction writing program is offered in the English department. Screenwriting and cinema narrativity courses are offered in the departments of literary arts and modern culture and media. The undergraduate writing proficiency requirement is supported by the Writing Center.
Alumni authors take their degrees across the spectrum of degree concentrations, but a gauge of the strength of writing at Brown is the number of major national writing prizes won. To note only a selection of recent winners: Pulitzer Prize for Fiction-winners Jeffrey Eugenides '82 (2003), Marilynne Robinson '66 (2005), and Andrew Sean Greer '92 (2018); British Orange Prize-winners Marilynne Robinson '66 (2009) and Madeline Miller '00 (2012); Pulitzer Prize for Drama-winners Nilo Cruz '94 (2003), Lynn Nottage '86 (twice, 2009, 2017), Quiara Alegría Hudes '04 (2012), Ayad Akhtar '93 (2013), and Jackie Sibblies Drury MFA '04 (2019); Pulitzer Prize for Biography-winners David Kertzer '69 (2015) and Benjamin Moser '98 (2020); Pulitzer Prize for Journalism-winners James Risen '77 (2006), Gareth Cook '91 (2005), Tony Horwitz '80 (1995), Usha Lee McFarling '89 (2007), David Rohde '90 (1996), Kathryn Schulz '96 (2016), and Alissa J. Rubin '80 (2016); Pulitzer Prize for General Nonfiction-winner James Forman Jr. '88 (2018); Pulitzer Prize for History-winner Marcia Chatelain PhD '08 (2021); Pulitzer Prize for Criticism-winner Salamishah Tillet MAT '97 (2022); and Pulitzer Prize for Poetry-winner Peter Balakian PhD '80 (2016).
Brown began offering computer science courses through the departments of Economics and Applied Mathematics in 1956 when it acquired an IBM machine. Brown added an IBM 650 in January 1958, the only one of its type between Hartford and Boston. In 1960, Brown opened its first dedicated computer building. The facility, designed by Philip Johnson, received an IBM 7070 computer the following year. Brown granted computer science full departmental status in 1979. In 2009, IBM and Brown announced the installation of a supercomputer that was, by teraflops standards, the most powerful in the southeastern New England region.
In the 1960s, Andries van Dam, along with Ted Nelson and Bob Wallace, invented the hypertext editing systems HES and FRESS while at Brown. Nelson coined the word hypertext, while van Dam's students helped originate XML, XSLT, and related Web standards. Among the school's computer science alumni are the principal architect of the Classic Mac OS, Andy Hertzfeld; the principal architect of the Intel 80386 and Intel 80486 microprocessors, John Crawford; former CEO of Apple, John Sculley; and digital effects programmer Masi Oka. Other alumni include former CS department head at MIT John Guttag, Workday founder Aneel Bhusri, MongoDB founder Eliot Horowitz, Figma founders Dylan Field and Evan Wallace, and OpenSea founder Devin Finzer.
The character "Andy" in the animated film Toy Story is purportedly an homage to professor Van Dam from his students employed at Pixar.
Between 2012 and 2018, the number of concentrators in CS tripled. In 2017, computer science overtook economics as the school's most popular undergraduate concentration.
Brown's program in applied mathematics was established in 1941, making it the oldest such program in the United States. The division is highly ranked and regarded nationally and internationally. Among the 67 recipients of the Timoshenko Medal, 22 have been affiliated with Brown's applied mathematics division as faculty, researchers, or students.
Established in 2004, the Joukowsky Institute for Archaeology and the Ancient World is Brown's interdisciplinary research center for archeology and ancient studies. The institute pursues fieldwork, excavations, regional surveys, and academic study of the archaeology and art of the ancient Mediterranean, Egypt, and Western Asia from the Levant to the Caucasus. The institute has a very active fieldwork profile, with faculty-led excavations and regional surveys presently in Petra (Jordan), Abydos (Egypt), Turkey, Sudan, Italy, Mexico, Guatemala, Montserrat, and Providence.
The Joukowsky Institute's faculty includes cross-appointments from the departments of Egyptology, Assyriology, Classics, Anthropology, and History of Art and Architecture. Faculty research and publication areas include Greek and Roman art and architecture, landscape archaeology, urban and religious architecture of the Levant, Roman provincial studies, the Aegean Bronze Age, and the archaeology of the Caucasus. The institute offers visiting teaching appointments and postdoctoral fellowships which have, in recent years, included Near Eastern Archaeology and Art, Classical Archaeology and Art, Islamic Archaeology and Art, and Archaeology and Media Studies.
Egyptology and Assyriology
Facing the Joukowsky Institute, across the Front Green, is the Department of Egyptology and Assyriology, formed in 2006 by the merger of Brown's departments of Egyptology and History of Mathematics. It is one of only a handful of such departments in the United States. The curricular focus is on three principal areas: Egyptology, Assyriology, and the history of the ancient exact sciences (astronomy, astrology, and mathematics). Many courses in the department are open to all Brown undergraduates without prerequisites and include archaeology, languages, history, and Egyptian and Mesopotamian religions, literature, and science. Students concentrating in the department choose a track of either Egyptology or Assyriology. Graduate-level study comprises three tracks to the doctoral degree: Egyptology, Assyriology, or the History of the Exact Sciences in Antiquity.
The Watson Institute for International and Public Affairs, Brown's center for the study of global issues and public affairs, is one of the leading institutes of its type in the country. The institute occupies facilities designed by Uruguayan architect Rafael Viñoly and Japanese architect Toshiko Mori. The institute was initially endowed by Thomas Watson Jr. (Class of 1937), former Ambassador to the Soviet Union and longtime president of IBM.
Institute faculty and faculty emeritus include Italian prime minister and European Commission president Romano Prodi, Brazilian president Fernando Henrique Cardoso, Chilean president Ricardo Lagos Escobar, Mexican novelist and statesman Carlos Fuentes, Brazilian statesman and United Nations commission head Paulo Sérgio Pinheiro, Indian foreign minister and ambassador to the United States Nirupama Rao, American diplomat and Dayton Peace Accords author Richard Holbrooke (Class of 1962), and Sergei Khrushchev, editor of the papers of his father Nikita Khrushchev, leader of the Soviet Union.
The institute's curricular interest is organized into the principal themes of development, security, and governance—with further focuses on globalization, economic uncertainty, security threats, environmental degradation, and poverty. Seven Brown undergraduate concentrations are hosted by the Watson Institute: Development Studies, International and Public Affairs, International Relations, Latin American and Caribbean Studies, Middle East Studies, Public Policy, and South Asian Studies. Graduate programs offered at the Watson Institute include the Graduate Program in Development (Ph.D.) and the Master of Public Affairs (M.P.A.) Program. The institute also offers postdoctoral, professional development, and global outreach programming. In support of these programs, the institute houses various centers, including the Brazil Initiative, Brown-India Initiative, China Initiative, Middle East Studies Center, the Center for Latin American and Caribbean Studies (CLACS), and the Taubman Center for Public Policy. In recent years, the most internationally cited product of the Watson Institute has been its Costs of War Project, first released in 2011 and continuously updated since. The project comprises a team of economists, anthropologists, political scientists, legal experts, and physicians, and seeks to calculate the economic costs, human casualties, and impact on civil liberties of the wars in Iraq, Afghanistan, and Pakistan since 2001.
Established in 1847, Brown's engineering program is the oldest in the Ivy League and the third oldest civilian engineering program in the country. In 1916, Brown's departments of electrical, mechanical, and civil engineering were merged into a single Division of Engineering. In 2010 the division was elevated to a School of Engineering.
Engineering at Brown is especially interdisciplinary. The school is organized without the traditional departments or boundaries found at most schools and follows a model of connectivity between disciplines—including biology, medicine, physics, chemistry, computer science, the humanities, and the social sciences. The school practices an innovative clustering of faculties in which engineers team with non-engineers to bring a convergence of ideas.
Student teams have launched two CubeSats with the support of the School of Engineering. Brown Space Engineering developed EQUiSat, a 1U satellite, and another interdisciplinary team developed SBUDNIC, a 3U satellite.
Since 2009, Brown has developed an Executive MBA program in conjunction with one of the leading business schools in Europe, IE Business School in Madrid. This relationship has since strengthened, resulting in both institutions offering a dual degree program. In this partnership, Brown provides its traditional coursework while IE provides most of the business-related subjects, making the program a differentiated alternative to other Ivy League EMBAs. The cohort typically consists of 25–30 EMBA candidates from some 20 countries. Classes are held in Providence, Madrid, Cape Town, and online.
The Pembroke Center for Teaching and Research on Women was established at Brown in 1981 by Joan Wallach Scott as an interdisciplinary research center on gender. The center is named for Pembroke College, Brown's former women's college, and is affiliated with Brown's Sarah Doyle Women's Center. The Pembroke Center supports Brown's undergraduate concentration in Gender and Sexuality Studies, post-doctoral research fellowships, the annual Pembroke Seminar, and other academic programs. It also manages various collections, archives, and resources, including the Elizabeth Weed Feminist Theory Papers and the Christine Dunlap Farnham Archive.
Brown introduced graduate courses in the 1870s and granted its first advanced degrees in 1888. The university established a Graduate Department in 1903 and a full Graduate School in 1927.
With an enrollment of approximately 2,600 students, the school currently offers 33 master's and 51 doctoral programs. The school additionally offers a number of fifth-year master's programs. Overall, admission to the Graduate School is highly competitive, with an acceptance rate averaging approximately 9 percent in recent years.
The Robert J. & Nancy D. Carney Institute for Brain Science is Brown's cross-departmental neuroscience research institute. The institute's core focus areas include brain-computer interfaces and computational neuroscience; additional areas of focus include research into mechanisms of cell death with the interest of developing therapies for neurodegenerative diseases.
The Carney Institute was founded by John Donoghue in 2009 as the Brown Institute for Brain Science and renamed in 2018 in recognition of a $100 million gift. The donation, one of the largest in the university's history, established the institute as one of the best-endowed university neuroscience programs in the country.
Established in 1811, Brown's Alpert Medical School is the fourth oldest medical school in the Ivy League.
In 1827, medical instruction was suspended by President Francis Wayland after the program's faculty declined to follow a new policy requiring students to live on campus. The program was reorganized in 1972; the first M.D. degrees from the new Program in Medicine were awarded to a graduating class of 58 students in 1975. In 1991, the school was officially renamed the Brown University School of Medicine, then renamed once more to Brown Medical School in October 2000. In January 2007, entrepreneur and philanthropist Warren Alpert donated $100 million to the school. In recognition of the gift, the school's name was changed to the Warren Alpert Medical School of Brown University.
In 2020, U.S. News & World Report ranked Brown's medical school the 9th most selective in the country, with an acceptance rate of 2.8%. U.S. News ranks the school 38th for research and 35th for primary care.
Brown's medical school is known especially for its Program in Liberal Medical Education (PLME), an eight-year combined baccalaureate-M.D. program. Inaugurated in 1984, the program is one of the most selective and renowned programs of its type in the country, offering admission to only 2% of applicants in 2021.
Since 1976, the Early Identification Program (EIP) has encouraged Rhode Island residents to pursue careers in medicine by recruiting sophomores from Providence College, Rhode Island College, the University of Rhode Island, and Tougaloo College. In 2004, the school once again began to accept applications from premedical students at other colleges and universities via AMCAS like most other medical schools. The medical school also offers M.D./PhD, M.D./M.P.H. and M.D./M.P.P. dual degree programs.
Brown's School of Public Health grew out of the Alpert Medical School's Department of Community Health and was officially founded in 2013 as an independent school. The school awards undergraduate (A.B., Sc.B.), graduate (M.P.H., Sc.M., A.M.), doctoral (Ph.D.), and dual degrees (M.P.H./M.P.A., M.D./M.P.H.).
The Brown University School of Professional Studies currently offers blended-learning executive master's degrees in Healthcare Leadership, Cyber Security, and Science and Technology Leadership. The master's degrees are designed to help students who have jobs and lives outside of academia progress in their respective fields. The students meet in Providence every 6–7 weeks for a week-long seminar each trimester.
The university has also invested in MOOC development, starting in 2013 with two courses, Archeology's Dirty Little Secrets and The Fiction of Relationship, both of which drew thousands of students. However, after a year of courses, the university broke its contract with Coursera and revamped its online presence and MOOC development department. By 2017, the university had released new courses on edX, two of which were The Ethics of Memory and Artful Medicine: Art's Power to Enrich Patient Care. In January 2018, Brown published its first "game-ified" course, Fantastic Places, Unhuman Humans: Exploring Humanity Through Literature, which featured out-of-platform games to help learners understand materials, as well as a story-line that immerses users in a fictional world to help characters along their journey.
Undergraduate admission to Brown University is considered "most selective" by U.S. News & World Report. For the undergraduate class of 2026, Brown received 50,649 applications—the largest applicant pool in the university's history and a 9% increase from the prior year. Of these applicants, 2,560 were admitted for an acceptance rate of 5.0%, the lowest in the university's history.
In 2021, the university reported a yield rate of 69%. For the academic year 2019–20, the university received 2,030 transfer applications, of which 5.8% were accepted.
Brown's admissions policy is need-blind for all domestic first-year applicants. In 2017, Brown announced that loans would be eliminated from all undergraduate financial aid awards starting in 2018–2019, as part of a new $30 million campaign called the Brown Promise. In 2016–17, the university awarded need-based scholarships worth $120.5 million. The average need-based award for the class of 2020 was $47,940.
In 2017, the Graduate School accepted 11% of 9,215 applicants. In 2021, Brown received a record 948 applications for roughly 90 spots in its Master of Public Health Degree.
In 2020, U.S. News ranked Brown's Warren Alpert Medical School the 9th most selective in the country, with an acceptance rate of 2.8 percent.
Brown University is accredited by the New England Commission of Higher Education. The Wall Street Journal/Times Higher Education ranked Brown 5th in its "Best Colleges 2021" edition.
The Forbes magazine annual ranking of "America's Top Colleges 2022"—which ranked 600 research universities, liberal arts colleges and service academies—ranked Brown 19th overall and 18th among universities.
U.S. News & World Report ranked Brown 9th among national universities in its 2023 edition. The 2022 edition also ranked Brown 2nd for undergraduate teaching, 25th in Most Innovative Schools, and 14th in Best Value Schools.
Washington Monthly ranked Brown 40th in 2022 among 442 national universities in the U.S. based on its contribution to the public good, as measured by social mobility, research, and promoting public service.
In 2022, U.S. News & World Report ranks Brown 129th globally.
In 2014, Forbes magazine ranked Brown 7th on its list of "America's Most Entrepreneurial Universities." The Forbes analysis looked at the ratio of "alumni and students who have identified themselves as founders and business owners on LinkedIn" and the total number of alumni and students. LinkedIn particularized the Forbes rankings, placing Brown third (between MIT and Princeton) among "Best Undergraduate Universities for Software Developers at Startups." LinkedIn's methodology involved a career-path examination of "millions of alumni profiles" in its membership database.
In 2016, 2017, 2018, and 2021 the university produced the most Fulbright recipients of any university in the nation. Brown has also produced the 7th most Rhodes Scholars of all colleges and universities in the United States.
Brown has been a member of the Association of American Universities since 1933 and is classified among "R1: Doctoral Universities – Very High Research Activity". In FY 2017, Brown spent $212.3 million on research and was ranked 103rd in the United States by total R&D expenditure by the National Science Foundation. In 2021, Brown's School of Public Health received the 4th-most funding in NIH awards among schools of public health in the U.S.
In 2014, Brown tied with the University of Connecticut for the highest number of reported rapes in the nation, with its "total of reports of rape" on its main campus standing at 43. However, such rankings have been criticized for failing to account for how different campus environments can encourage or discourage individuals from reporting sexual assault cases, thereby affecting the number of reported rapes.
Established in 1950, Spring Weekend is an annual spring music festival for students. Historical performers at the festival have included Ella Fitzgerald, Dizzy Gillespie, Ray Charles, Bob Dylan, Janis Joplin, Bruce Springsteen, and U2. More recent headliners include Kendrick Lamar, Young Thug, Daniel Caesar, Anderson .Paak, Mitski, Aminé, and Mac DeMarco. Since 1960, Spring Weekend has been organized by the student-run Brown Concert Agency.
Approximately 12 percent of Brown students participate in Greek life. The university recognizes thirteen active Greek organizations: six fraternities (Beta Omega Chi, Beta Rho Pi, Delta Tau, Delta Phi, Kappa Alpha Psi, and Theta Alpha), five sororities (Alpha Chi Omega, Delta Sigma Theta, Delta Gamma, Kappa Delta, and Kappa Alpha Theta), one co-ed house (Zeta Delta Xi), and one co-ed literary society (Alpha Delta Phi). Other Greek-lettered organizations that have been historically active at Brown University include Alpha Kappa Alpha, Alpha Phi Alpha, and Lambda Upsilon Lambda.
Since the early 1950s, all Greek organizations on campus have been located in Wriston Quadrangle. The organizations are overseen by the Greek Council.
An alternative to Greek-letter organizations are Brown's program houses, which are organized by themes. As with Greek houses, the residents of program houses select their new members, usually at the start of the spring semester. Examples of program houses are St. Anthony Hall (located in King House), Buxton International House, the Machado French/Hispanic/Latinx House, Technology House, Harambee (African culture) House, Social Action House and Interfaith House.
All students not in program housing enter a lottery for general housing. Students form groups and are assigned time slots during which they can pick among the remaining housing options.
The earliest societies at Brown were devoted to oration and debate. The Pronouncing Society is mentioned in the diary of Solomon Drowne, class of 1773, who was voted its president in 1771. The organization seems to have disappeared during the American Revolutionary War. Subsequent societies include the Misokosmian Society (est. 1798 and renamed the Philermenian Society), the Philandrian Society (est. 1799), the United Brothers (1806), the Philophysian Society (1818), and the Franklin Society (1824). Societies served social as well as academic purposes, with many supporting literary debate and amassing large libraries. Older societies generally aligned with Federalists while younger societies generally leaned Republican.
Societies remained popular into the 1860s, after which they were largely replaced by fraternities.
The Cammarian Club was at first a semi-secret society that "tapped" 15 seniors each year. In 1915, self-perpetuating membership gave way to popular election by the student body, and thenceforward the club served as the de facto undergraduate student government. The organization was dissolved in 1971 and ultimately succeeded by a formal student government.
Societas Domi Pacificae, known colloquially as "Pacifica House", is a present-day, self-described secret society. It purports to trace a continuous line of descent from the Franklin Society of 1824, citing a supposed intermediary "Franklin Society" traceable in the nineteenth century.
There are over 300 registered student organizations on campus with diverse interests. The Student Activities Fair, held during the orientation program, gives first-year students the opportunity to become acquainted with a wide range of organizations.
In 2023, 38% of Brown's students identified as LGBTQ+ in a poll by The Brown Daily Herald, an increase from 14% LGBT identification in 2010. "Bisexual" was the most common answer among LGBTQ+ respondents to the poll.
Brown has several resource centers on campus. The centers often act as sources of support as well as safe spaces for students to explore certain aspects of their identity. Additionally, the centers often provide physical spaces for students to study and have meetings. Although most centers are identity-focused, some provide academic support as well.
The Brown Center for Students of Color (BCSC) is a space that provides support for students of color. Established in 1972 in response to student protests, the BCSC encourages students to engage in critical dialogue, develop leadership skills, and promote social justice. The center houses various programs for students to share their knowledge and engage in discussion. Programs include the Third World Transition Program, the Minority Peer Counselor Program, the Heritage Series, and other student-led initiatives. Additionally, the BCSC hopes to foster community among the students it serves by providing spaces for students to meet and study.
The Sarah Doyle Women's Center aims to provide a space for members of the Brown community to examine and explore issues surrounding gender. The center was named after one of the first women to attend Brown, Sarah Doyle. The center emphasizes intersectionality in its conversations on gender, encouraging people to see gender as present and relevant in various aspects of life. The center hosts programs and workshops in order to facilitate dialogue and provide resources for students, faculty, and staff.
Other centers include the LGBTQ Center, the Undocumented, First-Generation College and Low-Income Student (U-FLi) Center, and the Curricular Resource Center.
On December 5, 1968, several Black women from Pembroke College initiated a walkout after concluding that the colleges were unresponsive to their concerns, protesting an atmosphere Black students described as a "stifling, frustrating, [and] degrading place for Black students". In total, 65 Black students participated in the walkout. Their principal demand was to increase Black student enrollment to 11% of the student body, matching the proportion of Black Americans in the U.S. population. The walkout ultimately resulted in a 300% increase in Black enrollment the following year, though some demands have yet to be met.
In the mid-1980s, under student pressure, the university divested from certain companies involved in South Africa. Some students, still unsatisfied with partial divestment, began a fast in Manning Chapel, and the university disenrolled them. In April 1987, "dozens" of students interrupted a university corporation meeting, leading to 20 being put on probation.
In early December 2023, forty-one students held a sit-in, resulting in their arrests. The students were protesting the Israel-Hamas war and calling for a ceasefire, as well as for the university to divest from companies that allegedly facilitate the Israeli military occupation in Gaza.
Brown is a member of the Ivy League athletic conference, which is categorized as a Division I (top-level) conference of the National Collegiate Athletic Association (NCAA).
The Brown Bears field one of the largest university sports programs in the United States, sponsoring 32 varsity intercollegiate teams. Brown's athletic program is one of the U.S. News & World Report top 20—the "College Sports Honor Roll"—based on breadth of the program and athletes' graduation rates.
Brown's newest varsity team is women's rugby, promoted from club-sport status in 2014. Brown women's rowing has won 7 national titles between 1999 and 2011. Brown men's rowing perennially finishes in the top 5 in the nation, most recently winning silver, bronze, and silver in the national championship races of 2012, 2013, and 2014. The men's and women's crews have also won championship trophies at the Henley Royal Regatta and the Henley Women's Regatta. Brown's men's soccer is consistently ranked in the top 20 and has won 18 Ivy League titles overall; recent soccer graduates play professionally in Major League Soccer and overseas.
Brown football, under its most successful coach historically, Phil Estes, won Ivy League championships in 1999, 2005, and 2008. High-profile alumni of the football program include former Houston Texans head coach Bill O'Brien, former Penn State football coach Joe Paterno, Heisman Trophy namesake John W. Heisman, and Pollard Award namesake Fritz Pollard.
Brown women's gymnastics won the Ivy League tournament in 2013 and 2014. The Brown women's sailing team has won 5 national championships, most recently in 2019, while the coed sailing team won 2 national championships, in 1942 and 1948. Both teams are consistently ranked in the top 10 in the nation.
The first intercollegiate ice hockey game in America was played between Brown and Harvard on January 19, 1898. The first university rowing regatta larger than a dual-meet was held between Brown, Harvard, and Yale at Lake Quinsigamond in Massachusetts on July 26, 1859.
Brown also supports competitive intercollegiate club sports, including ultimate frisbee. The men's ultimate team, Brownian Motion, has won three national championships, in 2000, 2005, and 2019.
Alumni in politics and government include U.S. Secretary of State John Hay (1852), U.S. Secretary of State and U.S. Attorney General Richard Olney (1856), Chief Justice of the United States and U.S. Secretary of State Charles Evans Hughes (1881), Governor of Wyoming Territory and Nebraska Governor John Milton Thayer (1841), Rhode Island Governor Augustus Bourn (1855), Louisiana Governor Bobby Jindal '92, U.S. Senator Maggie Hassan '80 of New Hampshire, Delaware Governor Jack Markell '82, Rhode Island Representative David Cicilline '83, Minnesota Representative Dean Phillips '91, 2020 Presidential candidate and entrepreneur Andrew Yang '96, DNC Chair Tom Perez '83, diplomat Richard Holbrooke '62, and diplomat W. Stuart Symington '74.
Prominent alumni in business and finance include philanthropist John D. Rockefeller Jr. (1897), managing director of McKinsey & Company and "father of modern management consulting" Marvin Bower '25, former Chair of the Federal Reserve and current U.S. Secretary of the Treasury Janet Yellen '67, World Bank President Jim Yong Kim '82, Bank of America CEO Brian Moynihan '81, CNN founder Ted Turner '60, IBM chairman and CEO Thomas Watson Jr. '37, co-founder of Starwood Capital Group Barry Sternlicht '82, Apple Inc. CEO John Sculley '61, Blackberry Ltd. CEO John S. Chen '78, Facebook CFO David Ebersman '91, and Uber CEO Dara Khosrowshahi '91. Companies founded by Brown alumni include CNN, The Wall Street Journal, Searchlight Pictures, Netgear, W Hotels, Workday, Warby Parker, Casper, Figma, ZipRecruiter, and Cards Against Humanity.
Alumni in the arts and media include actors Emma Watson '14, John Krasinski '01, Daveed Diggs '04, Julie Bowen '91, Tracee Ellis Ross '94, and Jessica Capshaw '98; NPR program host Ira Glass '82; singer-composer Mary Chapin Carpenter '81; humorist and Marx Brothers screenwriter S.J. Perelman '25; novelists Nathanael West '24, Jeffrey Eugenides '83, Edwidge Danticat (MFA '93), and Marilynne Robinson '66; composer and synthesizer pioneer Wendy Carlos '62; journalist James Risen '77; political pundit Mara Liasson; MSNBC hosts Alex Wagner '99 and Chris Hayes '01; New York Times publisher A. G. Sulzberger '03; and magazine editor John F. Kennedy Jr. '83.
Important figures in the history of education include the father of American public school education Horace Mann (1819), civil libertarian and Amherst College president Alexander Meiklejohn, first president of the University of South Carolina Jonathan Maxcy (1787), Bates College founder Oren B. Cheney (1836), University of Michigan president (1871–1909) James Burrill Angell (1849), University of California president (1899–1919) Benjamin Ide Wheeler (1875), and Morehouse College's first African-American president John Hope (1894).
Alumni in the computer sciences and industry include architect of Intel 386, 486, and Pentium microprocessors John H. Crawford '75, inventor of the first silicon transistor Gordon Kidd Teal '31, MongoDB founder Eliot Horowitz '03, Figma founder Dylan Field, and Macintosh developer Andy Hertzfeld '75.
Other notable alumni include "Lafayette of the Greek Revolution" and its historian Samuel Gridley Howe (1821), NASA head during the first seven Apollo missions Thomas O. Paine '42, sportscaster Chris Berman '77, Houston Texans head coach Bill O'Brien '92, 2018 Miss America Cara Mund '16, Penn State football coach Joe Paterno '50, Heisman Trophy namesake John W. Heisman '91, distinguished professor of law Cortney Lollar '97, Olympic and world champion triathlete Joanna Zeiger, and royals and nobles such as Prince Rahim Aga Khan, Prince Faisal bin Al Hussein of the Hashemite Kingdom of Jordan, Princess Leila Pahlavi of Iran '92, Prince Nikolaos of Greece and Denmark, Prince Nikita Romanov, Princess Theodora of Greece and Denmark, Prince Jaime of Bourbon-Parma, Duke of San Jaime and Count of Bardi, Prince Ra'ad bin Zeid, Lady Gabriella Windsor, Prince Alexander von Fürstenberg, Countess Cosima von Bülow Pavoncelli, and her half-brother Prince Alexander-Georg von Auersperg.
Nobel Laureate alumni include humanitarian Jerry White '87 (Peace, 1997), biologist Craig Mello '82 (Physiology or Medicine, 2006), economist Guido Imbens (AM '89, PhD '91; Economic Sciences, 2021), and economist Douglas Diamond '75 (Economic Sciences, 2022).
Among Brown's past and present faculty are seven Nobel Laureates: Lars Onsager (Chemistry, 1968), Leon Cooper (Physics, 1972), George Snell (Physiology or Medicine, 1980), George Stigler (Economic Sciences, 1982), Henry David Abraham (Peace, 1985), Vernon L. Smith (Economic Sciences, 2002), and J. Michael Kosterlitz (Physics, 2016).
Notable past and present faculty include biologists Anne Fausto-Sterling (Ph.D. 1970) and Kenneth R. Miller (Sc.B. 1970); computer scientists Robert Sedgewick and Andries van Dam; economists Hyman Minsky, Glenn Loury, George Stigler, Mark Blyth, and Emily Oster; historians Gordon S. Wood and Joan Wallach Scott; mathematicians David Gale, David Mumford, Mary Cartwright, and Solomon Lefschetz; and physicists Sylvester James Gates and Gerald Guralnik. Faculty in literature include Chinua Achebe, Ama Ata Aidoo, and Carlos Fuentes. Among Brown's faculty and fellows in political science and public affairs are the former prime minister of Italy and former EU chief, Romano Prodi; former president of Brazil, Fernando Cardoso; former president of Chile, Ricardo Lagos; and son of Soviet Premier Nikita Khrushchev, Sergei Khrushchev. Other faculty include philosopher Martha Nussbaum, author Ibram X. Kendi, and public health doctor Ashish Jha.
Brown's reputation as an institution with a free-spirited, iconoclastic student body is portrayed in fiction and popular culture. Family Guy character Brian Griffin is a Brown alumnus. The O.C.'s main character Seth Cohen is denied acceptance to Brown while his girlfriend Summer Roberts is accepted. In The West Wing, Amy Gardner is a Brown alumna.
Media related to Brown University at Wikimedia Commons | [
{
"paragraph_id": 0,
"text": "Brown University is a private Ivy League research university in Providence, Rhode Island. It is the seventh-oldest institution of higher education in the United States, founded in 1764 as the College in the English Colony of Rhode Island and Providence Plantations. One of nine colonial colleges chartered before the American Revolution, it was the first college in the United States to codify in its charter that admission and instruction of students was to be equal regardless of their religious affiliation.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The university is home to the oldest applied mathematics program in the United States, the oldest engineering program in the Ivy League, and the third-oldest medical program in New England. It was one of the early doctoral-granting U.S. institutions in the late 19th century, adding masters and doctoral studies in 1887. In 1969, it adopted its Open Curriculum after a period of student lobbying, which eliminated mandatory \"general education\" distribution requirements, made students \"the architects of their own syllabus\", and allowed them to take any course for a grade of satisfactory (Pass) or no-credit (Fail) which is unrecorded on external transcripts. In 1971, Brown's coordinate women's institution, Pembroke College, was fully merged into the university.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The university comprises the College, the Graduate School, Alpert Medical School, the School of Engineering, the School of Public Health and the School of Professional Studies. Its international programs are organized through the Watson Institute for International and Public Affairs, and it is academically affiliated with the Marine Biological Laboratory and the Rhode Island School of Design; with the latter, it offers undergraduate and graduate dual degree programs.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Brown's main campus is in the College Hill neighborhood of Providence, Rhode Island. The university is surrounded by a federally listed architectural district with a dense concentration of Colonial-era buildings. Benefit Street, which runs along the campus's western edge, has one of America's richest concentrations of 17th- and 18th-century architecture. Brown's undergraduate admissions are among the most selective in the country, with an overall acceptance rate of 5% for the class of 2026.",
"title": ""
},
{
"paragraph_id": 4,
"text": "As of March 2022, 11 Nobel Prize winners have been affiliated with Brown as alumni, faculty, or researchers, as well as 1 Fields Medalist, 7 National Humanities Medalists and 11 National Medal of Science laureates. Other notable alumni include 27 Pulitzer Prize winners, 21 billionaires, 1 U.S. Supreme Court Chief Justice, 4 U.S. Secretaries of State, over 100 members of the United States Congress, 58 Rhodes Scholars, 22 MacArthur Genius Fellows, and 38 Olympic medalists.",
"title": ""
},
{
"paragraph_id": 5,
"text": "In 1761, three residents of Newport, Rhode Island, drafted a petition to the colony's General Assembly:",
"title": "History"
},
{
"paragraph_id": 6,
"text": "That your Petitioners propose to open a literary institution or School for instructing young Gentlemen in the Languages, Mathematics, Geography & History, & such other branches of Knowledge as shall be desired. That for this End... it will be necessary... to erect a public Building or Buildings for the boarding of the youth & the Residence of the Professors.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The three petitioners were Ezra Stiles, pastor of Newport's Second Congregational Church and future president of Yale University; William Ellery Jr., future signer of the United States Declaration of Independence; and Josias Lyndon, future governor of the colony. Stiles and Ellery later served as co-authors of the college's charter two years later. The editor of Stiles's papers observes, \"This draft of a petition connects itself with other evidence of Dr. Stiles's project for a Collegiate Institution in Rhode Island, before the charter of what became Brown University.\"",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The Philadelphia Association of Baptist Churches was also interested in establishing a college in Rhode Island—home of the mother church of their denomination. At the time, the Baptists were unrepresented among the colonial colleges; the Congregationalists had Harvard and Yale, the Presbyterians had the College of New Jersey (later Princeton), and the Episcopalians had the College of William and Mary and King's College (later Columbia) while their local University of Pennsylvania was specifically founded without direct association with any particular denomination. Isaac Backus, a historian of the New England Baptists and an inaugural trustee of Brown, wrote of the October 1762 resolution taken at Philadelphia:",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The Philadelphia Association obtained such an acquaintance with our affairs, as to bring them to an apprehension that it was practicable and expedient to erect a college in the Colony of Rhode-Island, under the chief direction of the Baptists; ... Mr. James Manning, who took his first degree in New-Jersey college in September, 1762, was esteemed a suitable leader in this important work.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "James Manning arrived at Newport in July 1763 and was introduced to Stiles, who agreed to write the charter for the college. Stiles' first draft was read to the General Assembly in August 1763, and rejected by Baptist members who worried that their denomination would be underrepresented in the College Board of Fellows. A revised charter written by Stiles and Ellery was adopted by the Rhode Island General Assembly on March 3, 1764, in East Greenwich.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In September 1764, the inaugural meeting of the corporation—the college's governing body—was held in Newport's Old Colony House. Governor Stephen Hopkins was chosen chancellor, former and future governor Samuel Ward vice chancellor, John Tillinghast treasurer, and Thomas Eyres secretary. The charter stipulated that the board of trustees should be composed of 22 Baptists, five Quakers, five Episcopalians, and four Congregationalists. Of the 12 Fellows, eight should be Baptists—including the college president—\"and the rest indifferently of any or all Denominations.\"",
"title": "History"
},
{
"paragraph_id": 12,
"text": "At the time of its creation, Brown's charter was a uniquely progressive document. Other colleges had curricular strictures against opposing doctrines, while Brown's charter asserted, \"Sectarian differences of opinions, shall not make any Part of the Public and Classical Instruction.\" The document additionally \"recognized more broadly and fundamentally than any other [university charter] the principle of denominational cooperation.\" The oft-repeated statement that Brown's charter alone prohibited a religious test for College membership is inaccurate; other college charters were similarly liberal in that particular.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The college was founded as Rhode Island College, at the site of the First Baptist Church in Warren, Rhode Island. Manning was sworn in as the college's first president in 1765 and remained in the role until 1791. In 1766, the college authorized the Reverend Morgan Edwards to travel to Europe to \"solicit Benefactions for this Institution\". During his year-and-a-half stay in the British Isles, Edwards secured funding from benefactors including Thomas Penn and Benjamin Franklin.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In 1770, the college moved from Warren to Providence. To establish a campus, John and Moses Brown purchased a four-acre lot on the crest of College Hill on behalf of the school. The majority of the property fell within the bounds of the original home lot of Chad Brown, an ancestor of the Browns and one of the original proprietors of Providence Plantations. After the college was relocated to the city, work began on constructing its first building.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "A building committee, organized by the corporation, developed plans for the college's first purpose-built edifice, finalizing a design on February 9, 1770. The subsequent structure, referred to as \"The College Edifice\" and later as University Hall, may have been modeled on Nassau Hall, built 14 years prior at the College of New Jersey. President Manning, an active member of the building process, was educated at Princeton and might have suggested that Brown's first building resemble that of his alma mater.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Nicholas Brown, John Brown, Joseph Brown, and Moses Brown were instrumental in moving the college to Providence, constructing its first building, and securing its endowment. Joseph became a professor of natural philosophy at the college; John served as its treasurer from 1775 to 1796; and Nicholas Sr's son Nicholas Brown Jr. succeeded his uncle as treasurer from 1796 to 1825.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "On September 8, 1803, the corporation voted, \"That the donation of $5,000, if made to this College within one Year from the late Commencement, shall entitle the donor to name the College.\" The following year, the appeal was answered by College Treasurer Nicholas Brown Jr. In a letter dated September 6, 1804, Brown committed \"a donation of Five Thousand Dollars to Rhode Island College, to remain in perpetuity as a fund for the establishment of a Professorship of Oratory and Belles Letters.\" In recognition of the gift, the corporation on the same day voted, \"That this College be called and known in all future time by the Name of Brown University.\" Over the years, the benefactions of Nicholas Brown Jr., totaled nearly $160,000 and included funds for building Hope College (1821–22) and Manning Hall (1834–35).",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 1904, the John Carter Brown Library was established as an independently funded research library on Brown's campus; the library's collection was founded on that of John Carter Brown, son of Nicholas Brown Jr.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The Brown family was involved in various business ventures in Rhode Island, and accrued wealth both directly and indirectly from the transatlantic slave trade. The family was divided on the issue of slavery. John Brown had defended slavery, while Moses and Nicholas Brown Jr. were fervent abolitionists.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In 2003, under the tenure of President Ruth Simmons, the university established a steering committee to investigate these ties of the university to slavery and recommend a strategy to address them.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "With British vessels patrolling Narragansett Bay in the fall of 1776, the college library was moved out of Providence for safekeeping. During the subsequent American Revolutionary War, Brown's University Hall was used to house French and other revolutionary troops led by General George Washington and the Comte de Rochambeau as they waited to commence the march of 1781 that led to the Siege of Yorktown and the Battle of the Chesapeake. This has been celebrated as marking the defeat of the British and the end of the war. The building functioned as barracks and hospital from December 10, 1776, to April 20, 1780, and as a hospital for French troops from June 26, 1780, to May 27, 1782.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "A number of Brown's founders and alumni played roles in the American Revolution and subsequent founding of the United States. Brown's first chancellor, Stephen Hopkins, served as a delegate to the Colonial Congress in Albany in 1754, and to the Continental Congress from 1774 to 1776. James Manning represented Rhode Island at the Congress of the Confederation, while concurrently serving as Brown's first president. Two of Brown's founders, William Ellery and Stephen Hopkins signed the Declaration of Independence.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "James Mitchell Varnum, who graduated from Brown with honors in 1769, served as one of General George Washington's Continental Army brigadier generals and later as major general in command of the entire Rhode Island militia. Varnum is noted as the founder and commander of the 1st Rhode Island Regiment, widely regarded as the first Black battalion in U.S. military history. David Howell, who graduated with an A.M. in 1769, served as a delegate to the Continental Congress from 1782 to 1785.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Nineteen individuals have served as presidents of the university since its founding in 1764. Since 2012, Christina Hull Paxson has served as president. Paxson had previously served as dean of Princeton University's School of Public and International Affairs and chair of Princeton's economics department. Paxson's immediate predecessor, Ruth Simmons, is noted as the first African American president of an Ivy League institution. Other presidents of note include academic, Vartan Gregorian; and philosopher and economist, Francis Wayland.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "In 1966, the first Group Independent Study Project (GISP) at Brown was formed, involving 80 students and 15 professors. The GISP was inspired by student-initiated experimental schools, especially San Francisco State College, and sought ways to \"put students at the center of their education\" and \"teach students how to think rather than just teaching facts\".",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Members of the GISP, Ira Magaziner and Elliot Maxwell published a paper of their findings titled, \"Draft of a Working Paper for Education at Brown University.\" The paper made proposals for a new curriculum, including interdisciplinary freshman-year courses that would introduce \"modes of thought,\" with instruction from faculty from different disciplines as well as for an end to letter grades. The following year Magaziner began organizing the student body to press for the reforms, organizing discussions and protests.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "In 1968, university president Ray Heffner established a Special Committee on Curricular Philosophy. Composed of administrators, the committee was tasked with developing specific reforms and producing recommendations. A report, produced by the committee, was presented to the faculty, which voted the New Curriculum into existence on May 7, 1969. Its key features included:",
"title": "History"
},
{
"paragraph_id": 28,
"text": "The Modes of Thought course was discontinued early on, but the other elements remain in place. In 2006, the reintroduction of plus/minus grading was proposed in response to concerns regarding grade inflation. The idea was rejected by the College Curriculum Council after canvassing alumni, faculty, and students, including the original authors of the Magaziner-Maxwell Report.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "In 2003, then-university president Ruth Simmons launched a steering committee to research Brown's eighteenth-century ties to slavery. In October 2006, the committee released a report documenting its findings.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Titled \"Slavery and Justice\", the document detailed the ways in which the university benefited both directly and indirectly from the transatlantic slave trade and the labor of enslaved people. The report also included seven recommendations for how the university should address this legacy. Brown has since completed a number of these recommendations including the establishment of its Center for the Study of Slavery and Justice, the construction of its Slavery Memorial, and the funding of a $10 million permanent endowment for Providence Public Schools.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "The Slavery and Justice report marked the first major effort by an American university to address its ties to slavery and prompted other institutions to undertake similar processes.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "Brown's coat of arms was created in 1834. The prior year, president Francis Wayland had commissioned a committee to update the school's original seal to match the name the university had adopted in 1804. Central in the coat of arms is a white escutcheon divided into four sectors by a red cross. Within each sector of the coat of arms lies an open book. Above the shield is a crest consisting of the upper half of a sun in splendor among the clouds atop a red and white torse.",
"title": "Coat of arms"
},
{
"paragraph_id": 33,
"text": "Brown is the largest institutional landowner in Providence, with properties on College Hill and in the Jewelry District. The university was built contemporaneously with the eighteenth and nineteenth-century precincts surrounding it, making Brown's campus tightly integrated into Providence's urban fabric. Among the noted architects who have shaped Brown's campus are McKim, Mead & White, Philip Johnson, Rafael Viñoly, Diller Scofidio + Renfro, and Robert A. M. Stern.",
"title": "Campus"
},
{
"paragraph_id": 34,
"text": "Brown's main campus, comprises 235 buildings and 143 acres (0.58 km) in the East Side neighborhood of College Hill. The university's central campus sits on a 15-acre (6.1-hectare) block bounded by Waterman, Prospect, George, and Thayer Streets; newer buildings extend northward, eastward, and southward. Brown's core, historic campus, constructed primary between 1770 and 1926, is defined by three greens: the Front or Quiet Green, the Middle or College Green, and the Ruth J. Simmons Quadrangle (historically known as Lincoln Field). A brick and wrought-iron fence punctuated by decorative gates and arches traces the block's perimeter. This section of campus is primarily Georgian and Richardsonian Romanesque in its architectural character.",
"title": "Campus"
},
{
"paragraph_id": 35,
"text": "To the south of the central campus are academic buildings and residential quadrangles, including Wriston, Keeney, and Gregorian quadrangles. Immediately to the east of the campus core sit Sciences Park and Brown's School of Engineering. North of the central campus are performing and visual arts facilities, life sciences labs, and the Pembroke Campus, which houses both dormitories and academic buildings. Facing the western edge of the central campus sit two of the Brown's seven libraries, the John Hay Library and the John D. Rockefeller Jr. Library.",
"title": "Campus"
},
{
"paragraph_id": 36,
"text": "The university's campus is contiguous with that of the Rhode Island School of Design, which is located immediately to Brown's west, along the slope of College Hill.",
"title": "Campus"
},
{
"paragraph_id": 37,
"text": "Built in 1901, the Van Wickle Gates are a set of wrought iron gates that stand at the western edge of Brown's campus. The larger main gate is flanked by two smaller side gates. At Convocation the central gate opens inward to admit the procession of new students; at Commencement, the gate opens outward for the procession of graduates. A Brown superstition holds that students who walk through the central gate a second time prematurely will not graduate, although walking backward is said to cancel the hex.",
"title": "Campus"
},
{
"paragraph_id": 38,
"text": "The John Hay Library is the second oldest library on campus. Opened in 1910, the library is named for John Hay (class of 1858), private secretary to Abraham Lincoln and Secretary of State under William McKinley and Theodore Roosevelt. The construction of the building was funded in large part by Hay's friend, Andrew Carnegie, who contributed half of the $300,000 cost of construction.",
"title": "Campus"
},
{
"paragraph_id": 39,
"text": "The John Hay Library serves as the repository of the university's archives, rare books and manuscripts, and special collections. Noteworthy among the latter are the Anne S. K. Brown Military Collection (described as \"the foremost American collection of material devoted to the history and iconography of soldiers and soldiering\"), the Harris Collection of American Poetry and Plays (described as \"the largest and most comprehensive collection of its kind in any research library\"), the Lownes Collection of the History of Science (described as \"one of the three most important private collections of books of science in America\"), and the papers of H. P. Lovecraft. The Hay Library is home to one of the broadest collections of incunabula in the Americas, one of Brown's two Shakespeare First Folios, the manuscript of George Orwell's Nineteen Eighty-Four, and three books bound in human skin.",
"title": "Campus"
},
{
"paragraph_id": 40,
"text": "Founded in 1846, the John Carter Brown Library is generally regarded as the world's leading collection of primary historical sources relating to the exploration and colonization of the Americas. While administered and funded separately from the university, the library has been owned by Brown and located on its campus since 1904.",
"title": "Campus"
},
{
"paragraph_id": 41,
"text": "The library contains the best preserved of the eleven surviving copies of the Bay Psalm Book—the earliest extant book printed in British North America and the most expensive printed book in the world. Other holdings include a Shakespeare First Folio and the world's largest collection of 16th century Mexican texts.",
"title": "Campus"
},
{
"paragraph_id": 42,
"text": "The exhibition galleries of the Haffenreffer Museum of Anthropology, Brown's teaching museum, are located in Manning Hall on the campus's main green. Its one million artifacts, available for research and educational purposes, are located at its Collections Research Center in Bristol, Rhode Island. The museum's goal is to inspire creative and critical thinking about culture by fostering an interdisciplinary understanding of the material world. It provides opportunities for faculty and students to work with collections and the public, teaching through objects and programs in classrooms and exhibitions. The museum sponsors lectures and events in all areas of anthropology and also runs an extensive program of outreach to local schools.",
"title": "Campus"
},
{
"paragraph_id": 43,
"text": "The Annmary Brown Memorial was constructed from 1903 to 1907 by the politician, Civil War veteran, and book collector General Rush Hawkins, as a mausoleum for his wife, Annmary Brown, a member of the Brown family. In addition to its crypt—the final repository for Brown and Hawkins—the Memorial includes works of art from Hawkins's private collection, including paintings by Angelica Kauffman, Peter Paul Rubens, Gilbert Stuart, Giovanni Battista Tiepolo, Benjamin West, and Eastman Johnson, among others. His collection of over 450 incunabula was relocated to the John Hay Library in 1990. Today the Memorial is home to Brown's Medieval Studies and Renaissance Studies programs.",
"title": "Campus"
},
{
"paragraph_id": 44,
"text": "The Walk, a landscaped pedestrian corridor, connects the Pembroke Campus to the main campus. It runs parallel to Thayer Street and serves as a primary axis of campus, extending from Ruth Simmons Quadrangle at its southern terminus to the Meeting Street entrance to the Pembroke Campus at its northern end. The walk is bordered by departmental buildings as well as the Lindemann Performing Arts Center and Granoff Center for the Creative Arts",
"title": "Campus"
},
{
"paragraph_id": 45,
"text": "The corridor is home to public art including sculptures by Maya Lin and Tom Friedman.",
"title": "Campus"
},
{
"paragraph_id": 46,
"text": "The Women's College in Brown University, known as Pembroke College, was founded in October 1891. Upon its 1971 merger with the College of Brown University, Pembroke's campus was absorbed into the larger Brown campus. The Pembroke campus is bordered by Meeting, Brown, Bowen, and Thayer Streets and sits three blocks north of Brown's central campus. The campus is dominated by brick architecture, largely of the Georgian and Victorian styles. The west side of the quadrangle comprises Pembroke Hall (1897), Smith-Buonanno Hall (1907), and Metcalf Hall (1919), while the east side comprises Alumnae Hall (1927) and Miller Hall (1910). The quadrangle culminates on the north with Andrews Hall (1947).",
"title": "Campus"
},
{
"paragraph_id": 47,
"text": "East Campus, centered on Hope and Charlesfield streets, originally served as the campus of Bryant University. In 1969, as Bryant was preparing to relocate to Smithfield, Rhode Island, Brown purchased their Providence campus for $5 million. The transaction expanded the Brown campus by 10 acres (40,000 m) and 26 buildings. In 1971, Brown renamed the area East Campus. Today, the area is largely used for dormitories.",
"title": "Campus"
},
{
"paragraph_id": 48,
"text": "Thayer Street runs through Brown's main campus. As a commercial corridor frequented by students, Thayer is comparable to Harvard Square or Berkeley's Telegraph Avenue. Wickenden Street, in the adjacent Fox Point neighborhood, is another commercial street similarly popular among students.",
"title": "Campus"
},
{
"paragraph_id": 49,
"text": "Built in 1925, Brown Stadium—the home of the school's football team—is located approximately a mile and a half northeast of the university's central campus. Marston Boathouse, the home of Brown's crew teams, lies on the Seekonk River, to the southeast of campus. Brown's sailing teams are based out of the Ted Turner Sailing Pavilion at the Edgewood Yacht Club in adjacent Cranston.",
"title": "Campus"
},
{
"paragraph_id": 50,
"text": "Since 2011, Brown's Warren Alpert Medical School has been located in Providence's historic Jewelry District, near the medical campus of Brown's teaching hospitals, Rhode Island Hospital and the Women and Infants Hospital of Rhode Island. Other university facilities, including molecular medicine labs and administrative offices, are likewise located in the area.",
"title": "Campus"
},
{
"paragraph_id": 51,
"text": "Brown's School of Public Health occupies a landmark modernist building along the Providence River. Other Brown properties include the 376-acre (1.52 km) Mount Hope Grant in Bristol, Rhode Island, an important Native American site noted as a location of King Philip's War. Brown's Haffenreffer Museum of Anthropology Collection Research Center, particularly strong in Native American items, is located in the Mount Hope Grant.",
"title": "Campus"
},
{
"paragraph_id": 52,
"text": "Brown has committed to \"minimize its energy use, reduce negative environmental impacts, and promote environmental stewardship.\" Since 2010, the university has required all new buildings meet LEED silver standards. Between 2007 and 2018, Brown reduced its greenhouse emissions by 27 percent; the majority of this reduction is attributable to the university's Thermal Efficiency Project which converted its central heating plant from a steam-powered system to a hot water-powered system.",
"title": "Campus"
},
{
"paragraph_id": 53,
"text": "In 2020, Brown announced it had sold 90 percent of its fossil fuel investments as part of a broader divestment from direct investments and managed funds that focus on fossil fuels. In 2021, the university adopted the goal of reducing quantifiable campus emissions by 75 percent by 2025 and achieving carbon neutrality by 2040. Brown is a member of the Ivy Plus Sustainability Consortium, through which it has committed to best-practice sharing and the ongoing exchange of campus sustainability solutions along with other member institutions.",
"title": "Campus"
},
{
"paragraph_id": 54,
"text": "According to the A. W. Kuchler U.S. potential natural vegetation types, Brown would have a dominant vegetation type of Appalachian Oak (104) with a dominant vegetation form of Eastern Hardwood Forest (25).",
"title": "Campus"
},
{
"paragraph_id": 55,
"text": "Founded in 1764, The College is Brown's oldest school. About 7,200 undergraduate students are enrolled in the college , and 81 concentrations are offered. For the graduating class of 2020, the most popular concentrations were Computer Science, Economics, Biology, History, Applied Mathematics, International Relations, and Political Science. A quarter of Brown undergraduates complete more than one concentration before graduating. If the existing programs do not align with their intended curricular interests, undergraduates may design and pursue independent concentrations.",
"title": "Academics"
},
{
"paragraph_id": 56,
"text": "Around 35 percent of undergraduates pursue graduate or professional study immediately, 60 percent within 5 years, and 80 percent within 10 years. For the Class of 2009, 56 percent of all undergraduate alumni have since earned graduate degrees. Among undergraduate alumni who go on to receive graduate degrees, the most common degrees earned are J.D. (16%), M.D. (14%), M.A. (14%), M.Sc. (14%), and Ph.D. (11%). The most common institutions from which undergraduate alumni earn graduate degrees are Brown University, Columbia University, and Harvard University.",
"title": "Academics"
},
{
"paragraph_id": 57,
"text": "The highest fields of employment for undergraduate alumni ten years after graduation are education and higher education (15%), medicine (9%), business and finance (9%), law (8%), and computing and technology (7%).",
"title": "Academics"
},
{
"paragraph_id": 58,
"text": "Since its 1893 relocation to College Hill, Rhode Island School of Design (RISD) has bordered Brown to its west. Since 1900, Brown and RISD students have been able to cross-register at the two institutions, with Brown students permitted to take as many as four courses at RISD to count towards their Brown degree. The two institutions partner to provide various student-life services and the two student bodies compose a synergy in the College Hill cultural scene.",
"title": "Academics"
},
{
"paragraph_id": 59,
"text": "After several years of discussion between the two institutions and several students pursuing dual degrees unofficially, Brown and RISD formally established a five-year dual degree program in 2007, with the first class matriculating in the fall of 2008. The Brown|RISD Dual Degree Program, among the most selective in the country, offered admission to 20 of the 725 applicants for the class entering in autumn 2020, for an acceptance rate of 2.7%. The program combines the complementary strengths of the two institutions, integrating studio art and design at RISD with Brown's academic offerings. Students are admitted to the Dual Degree Program for a course lasting five years and culminating in both the Bachelor of Arts (A.B.) or Bachelor of Science (Sc.B.) degree from Brown and the Bachelor of Fine Arts (B.F.A.) degree from RISD. Prospective students must apply to the two schools separately and be accepted by separate admissions committees. Their application must then be approved by a third Brown|RISD joint committee.",
"title": "Academics"
},
{
"paragraph_id": 60,
"text": "Admitted students spend the first year in residence at RISD completing its first-year Experimental and Foundation Studies curriculum while taking up to three Brown classes. Students spend their second year in residence at Brown, during which students take mainly Brown courses while starting on their RISD major requirements. In the third, fourth, and fifth years, students can elect to live at either school or off-campus, and course distribution is determined by the requirements of each student's unique combination of Brown concentration and RISD major. Program participants are noted for their creative and original approach to cross-disciplinary opportunities, combining, for example, industrial design with engineering, or anatomical illustration with human biology, or philosophy with sculpture, or architecture with urban studies. An annual \"BRDD Exhibition\" is a well-publicized and heavily attended event, drawing interest and attendees from the broader world of industry, design, the media, and the fine arts.",
"title": "Academics"
},
{
"paragraph_id": 61,
"text": "In 2020, the two schools announced the establishment of a new joint Master of Arts in Design Engineering program. Abbreviated as MADE, the program intends to combine RISD's programs in industrial design with Brown's programs in engineering. The program is administered through Brown's School of Engineering and RISD's Architecture and Design Division.",
"title": "Academics"
},
{
"paragraph_id": 62,
"text": "Brown's theatre and playwriting programs are among the best-regarded in the country. Six Brown graduates have received the Pulitzer Prize for Drama; Alfred Uhry '58, Lynn Nottage '86, Ayad Akhtar '93, Nilo Cruz '94, Quiara Alegría Hudes '04, and Jackie Sibblies Drury MFA '04. In American Theater magazine's 2009 ranking of the most-produced American plays, Brown graduates occupied four of the top five places—Peter Nachtrieb '97, Rachel Sheinkin '89, Sarah Ruhl '97, and Stephen Karam '02.",
"title": "Academics"
},
{
"paragraph_id": 63,
"text": "The undergraduate concentration encompasses programs in theatre history, performance theory, playwriting, dramaturgy, acting, directing, dance, speech, and technical production. Applications for doctoral and master's degree programs are made through the University Graduate School. Master's degrees in acting and directing are pursued in conjunction with the Brown/Trinity Rep MFA program, which partners with the Trinity Repertory Company, a local regional theatre.",
"title": "Academics"
},
{
"paragraph_id": 64,
"text": "Writing at Brown—fiction, non-fiction, poetry, playwriting, screenwriting, electronic writing, mixed media, and the undergraduate writing proficiency requirement—is catered for by various centers and degree programs, and a faculty that has long included nationally and internationally known authors. The undergraduate concentration in literary arts offers courses in fiction, poetry, screenwriting, literary hypermedia, and translation. Graduate programs include the fiction and poetry MFA writing programs in the literary arts department and the MFA playwriting program in the theatre arts and performance studies department. The non-fiction writing program is offered in the English department. Screenwriting and cinema narrativity courses are offered in the departments of literary arts and modern culture and media. The undergraduate writing proficiency requirement is supported by the Writing Center.",
"title": "Academics"
},
{
"paragraph_id": 65,
"text": "Alumni authors take their degrees across the spectrum of degree concentrations, but a gauge of the strength of writing at Brown is the number of major national writing prizes won. To note only winners since the year 2000: Pulitzer Prize for Fiction-winners Jeffrey Eugenides '82 (2003), Marilynne Robinson '66 (2005), and Andrew Sean Greer '92 (2018); British Orange Prize-winners Marilynne Robinson '66 (2009) and Madeline Miller '00 (2012); Pulitzer Prize for Drama-winners Nilo Cruz '94 (2003), Lynn Nottage '86 (twice, 2009, 2017), Quiara Alegría Hudes '04 (2012), Ayad Akhtar '93 (2013), and Jackie Sibblies Drury MFA '04 (2019); Pulitzer Prize for Biography-winners David Kertzer '69 (2015) and Benjamin Moser '98 (2020); Pulitzer Prize for Journalism-winners James Risen '77 (2006), Gareth Cook '91 (2005), Tony Horwitz '80 (1995), Usha Lee McFarling '89 (2007), David Rohde '90 (1996), Kathryn Schulz '96 (2016), and Alissa J. Rubin '80 (2016); Pulitzer Prize for General Nonfiction-winner James Forman Jr. '88 (2018); Pulitzer Prize for History-winner Marcia Chatelain PhD '08 (2021); Pulitzer Prize for Criticism-winner Salamishah Tillet MAT '97 (2022); and Pulitzer Prize for Poetry-winner Peter Balakian PhD '80 (2016)",
"title": "Academics"
},
{
"paragraph_id": 66,
"text": "Brown began offering computer science courses through the departments of Economics and Applied Mathematics in 1956 when it acquired an IBM machine. Brown added an IBM 650 in January 1958, the only one of its type between Hartford and Boston. In 1960, Brown opened its first dedicated computer building. The facility, designed by Philip Johnson, received an IBM 7070 computer the following year. Brown granted computer sciences full Departmental status in 1979. In 2009, IBM and Brown announced the installation of a supercomputer (by teraflops standards), the most powerful in the southeastern New England region.",
"title": "Academics"
},
{
"paragraph_id": 67,
"text": "In the 1960s, Andries van Dam along with Ted Nelson, and Bob Wallace invented The Hypertext Editing Systems, HES and FRESS while at Brown. Nelson coined the word hypertext while Van Dam's students helped originate XML, XSLT, and related Web standards. Among the school's computer science alumni are principal architect of the Classic Mac OS, Andy Hertzfeld; principal architect of the Intel 80386 and Intel 80486 microprocessors, John Crawford; former CEO of Apple, John Sculley; and digital effects programmer Masi Oka. Other alumni include former CS department head at MIT, John Guttag, Workday founder, Aneel Bhusri, MongoDB founder Eliot Horowitz, Figma founders Dylan Field and Evan Wallace; and OpenSea founder Devin Finzer.",
"title": "Academics"
},
{
"paragraph_id": 68,
"text": "The character \"Andy\" in the animated film Toy Story is purportedly an homage to professor Van Dam from his students employed at Pixar.",
"title": "Academics"
},
{
"paragraph_id": 69,
"text": "Between 2012 and 2018, the number of concentrators in CS tripled. In 2017, computer science overtook economics as the school's most popular undergraduate concentration.",
"title": "Academics"
},
{
"paragraph_id": 70,
"text": "Brown's program in applied mathematics was established in 1941 making it the oldest such program in the United States. The division is highly ranked and regarded nationally and internationally. Among the 67 recipients of the Timoshenko Medal, 22 have been affiliated with Brown's applied mathematics division as faculty, researchers, or students.",
"title": "Academics"
},
{
"paragraph_id": 71,
"text": "Established in 2004, the Joukowsky Institute for Archaeology and the Ancient World is Brown's interdisciplinary research center for archeology and ancient studies. The institute pursues fieldwork, excavations, regional surveys, and academic study of the archaeology and art of the ancient Mediterranean, Egypt, and Western Asia from the Levant to the Caucasus. The institute has a very active fieldwork profile, with faculty-led excavations and regional surveys presently in Petra (Jordan), Abydos (Egypt), Turkey, Sudan, Italy, Mexico, Guatemala, Montserrat, and Providence.",
"title": "Academics"
},
{
"paragraph_id": 72,
"text": "The Joukowsky Institute's faculty includes cross-appointments from the departments of Egyptology, Assyriology, Classics, Anthropology, and History of Art and Architecture. Faculty research and publication areas include Greek and Roman art and architecture, landscape archaeology, urban and religious architecture of the Levant, Roman provincial studies, the Aegean Bronze Age, and the archaeology of the Caucasus. The institute offers visiting teaching appointments and postdoctoral fellowships which have, in recent years, included Near Eastern Archaeology and Art, Classical Archaeology and Art, Islamic Archaeology and Art, and Archaeology and Media Studies.",
"title": "Academics"
},
{
"paragraph_id": 73,
"text": "Egyptology and Assyriology",
"title": "Academics"
},
{
"paragraph_id": 74,
"text": "Facing the Joukowsky Institute, across the Front Green, is the Department of Egyptology and Assyriology, formed in 2006 by the merger of Brown's departments of Egyptology and History of Mathematics. It is one of only a handful of such departments in the United States. The curricular focus is on three principal areas: Egyptology, Assyriology, and the history of the ancient exact sciences (astronomy, astrology, and mathematics). Many courses in the department are open to all Brown undergraduates without prerequisites and include archaeology, languages, history, and Egyptian and Mesopotamian religions, literature, and science. Students concentrating in the department choose a track of either Egyptology or Assyriology. Graduate-level study comprises three tracks to the doctoral degree: Egyptology, Assyriology, or the History of the Exact Sciences in Antiquity.",
"title": "Academics"
},
{
"paragraph_id": 75,
"text": "The Watson Institute for International and Public Affairs, Brown's center for the study of global Issues and public affairs, is one of the leading institutes of its type in the country. The institute occupies facilities designed by Uruguayan architect Rafael Viñoly and Japanese architect Toshiko Mori. The institute was initially endowed by Thomas Watson Jr. (Class of 1937), former Ambassador to the Soviet Union and longtime president of IBM.",
"title": "Academics"
},
{
"paragraph_id": 76,
"text": "Institute faculty and faculty emeritus include Italian prime minister and European Commission president Romano Prodi, Brazilian president Fernando Henrique Cardoso, Chilean president Ricardo Lagos Escobar, Mexican novelist and statesman Carlos Fuentes, Brazilian statesman and United Nations commission head Paulo Sérgio Pinheiro, Indian foreign minister and ambassador to the United States Nirupama Rao, American diplomat and Dayton Peace Accords author Richard Holbrooke (Class of 1962), and Sergei Khrushchev, editor of the papers of his father Nikita Khrushchev, leader of the Soviet Union.",
"title": "Academics"
},
{
"paragraph_id": 77,
"text": "The institute's curricular interest is organized into the principal themes of development, security, and governance—with further focuses on globalization, economic uncertainty, security threats, environmental degradation, and poverty. Six Brown undergraduate concentrations are hosted by the Watson Institute: Development Studies, International and Public Affairs, International Relations, Latin American and Caribbean Studies, Middle East Studies, Public Policy, and South Asian Studies. Graduate programs offered at the Watson Institute include the Graduate Program in Development (Ph.D.) and the Master of Public Affairs (M.P.A) Program. The institute also offers postdoctoral, professional development, and global outreach programming. In support of these programs, the institute houses various centers, including the Brazil Initiative, Brown-India Initiative, China Initiative, Middle East Studies Center, The Center for Latin American and Caribbean Studies (CLACS), and the Taubman Center for Public Policy. In recent years, the most internationally cited product of the Watson Institute has been its Costs of War Project, first released in 2011 and continuously updated since. The project comprises a team of economists, anthropologists, political scientists, legal experts, and physicians, and seeks to calculate the economic costs, human casualties, and impact on civil liberties of the wars in Iraq, Afghanistan, and Pakistan since 2001.",
"title": "Academics"
},
{
"paragraph_id": 78,
"text": "Established in 1847, Brown's engineering program is the oldest in the Ivy League and the third oldest civilian engineering program in the country. In 1916, Brown's departments of electrical, mechanical, and civil engineering were merged into a single Division of Engineering. In 2010 the division was elevated to a School of Engineering.",
"title": "Academics"
},
{
"paragraph_id": 79,
"text": "Engineering at Brown is especially interdisciplinary. The school is organized without the traditional departments or boundaries found at most schools and follows a model of connectivity between disciplines—including biology, medicine, physics, chemistry, computer science, the humanities, and the social sciences. The school practices an innovative clustering of faculties in which engineers team with non-engineers to bring a convergence of ideas.",
"title": "Academics"
},
{
"paragraph_id": 80,
"text": "Student teams have launched two CubeSats with the support of the School of Engineering. Brown Space Engineering developed EQUiSat a 1U satellite, and another interdisciplinary team developed SBUDNIC a 3U satellite.",
"title": "Academics"
},
{
"paragraph_id": 81,
"text": "Since 2009, Brown has developed an Executive MBA program in conjunction with one of the leading Business Schools in Europe, IE Business School in Madrid. This relationship has since strengthened resulting in both institutions offering a dual degree program. In this partnership, Brown provides its traditional coursework while IE provides most of the business-related subjects making a differentiated alternative program to other Ivy League's EMBAs. The cohort typically consists of 25–30 EMBA candidates from some 20 countries. Classes are held in Providence, Madrid, Cape Town and Online.",
"title": "Academics"
},
{
"paragraph_id": 82,
"text": "The Pembroke Center for Teaching and Research on Women was established at Brown in 1981 by Joan Wallach Scott as an interdisciplinary research center on gender. The center is named for Pembroke College, Brown's former women's college, and is affiliated with Brown's Sarah Doyle Women's Center. The Pembroke Center supports Brown's undergraduate concentration in Gender and Sexuality Studies, post-doctoral research fellowships, the annual Pembroke Seminar, and other academic programs. It also manages various collections, archives, and resources, including the Elizabeth Weed Feminist Theory Papers and the Christine Dunlap Farnham Archive.",
"title": "Academics"
},
{
"paragraph_id": 83,
"text": "Brown introduced graduate courses in the 1870s and granted its first advanced degrees in 1888. The university established a Graduate Department in 1903 and a full Graduate School in 1927.",
"title": "Academics"
},
{
"paragraph_id": 84,
"text": "With an enrollment of approximately 2,600 students, the school currently offers 33 and 51 master's and doctoral programs, respectively. The school additionally offers a number of fifth-year master's programs. Overall, admission to the Graduate School is most competitive with an acceptance rate averaging at approximately 9 percent in recent years.",
"title": "Academics"
},
{
"paragraph_id": 85,
"text": "The Robert J. & Nancy D. Carney Institute for Brain Science is Brown's cross-departmental neuroscience research institute. The institute's core focus areas include brain-computer interfaces and computational neuroscience; additional areas of focus include research into mechanisms of cell death with the interest of developing therapies for neurodegenerative diseases.",
"title": "Academics"
},
{
"paragraph_id": 86,
"text": "The Carney Institute was founded by John Donoghue in 2009 as the Brown Institute for Brain Science and renamed in 2018 in recognition of a $100 million gift. The donation, one of the largest in the university's history, established the institute as one of the best-endowed university neuroscience programs in the country.",
"title": "Academics"
},
{
"paragraph_id": 87,
"text": "Established in 1811, Brown's Alpert Medical School is the fourth oldest medical school in the Ivy League.",
"title": "Academics"
},
{
"paragraph_id": 88,
"text": "In 1827, medical instruction was suspended by President Francis Wayland after the program's faculty declined to follow a new policy requiring students to live on campus. The program was reorganized in 1972; the first M.D. degrees from the new Program in Medicine were awarded to a graduating class of 58 students in 1975. In 1991, the school was officially renamed the Brown University School of Medicine, then renamed once more to Brown Medical School in October 2000. In January 2007, entrepreneur and philanthropist Warren Alpert donated $100 million to the school. In recognition of the gift, the school's name was changed to the Warren Alpert Medical School of Brown University.",
"title": "Academics"
},
{
"paragraph_id": 89,
"text": "In 2020, U.S. News & World Report ranked Brown's medical school the 9th most selective in the country, with an acceptance rate of 2.8%. U.S. News ranks the school 38th for research and 35th for primary care.",
"title": "Academics"
},
{
"paragraph_id": 90,
"text": "Brown's medical school is known especially for its eight-year Program in Liberal Medical Education (PLME), an eight-year combined baccalaureate-M.D. medical program. Inaugurated in 1984, the program is one of the most selective and renowned programs of its type in the country, offering admission to only 2% of applicants in 2021.",
"title": "Academics"
},
{
"paragraph_id": 91,
"text": "Since 1976, the Early Identification Program (EIP) has encouraged Rhode Island residents to pursue careers in medicine by recruiting sophomores from Providence College, Rhode Island College, the University of Rhode Island, and Tougaloo College. In 2004, the school once again began to accept applications from premedical students at other colleges and universities via AMCAS like most other medical schools. The medical school also offers M.D./PhD, M.D./M.P.H. and M.D./M.P.P. dual degree programs.",
"title": "Academics"
},
{
"paragraph_id": 92,
"text": "Brown's School of Public Health grew out of the Alpert Medical School's Department of Community Health and was officially founded in 2013 as an independent school. The school issues undergraduate (A.B., Sc.B.), graduate (M.P.H., Sc.M., A.M.), doctoral (Ph.D.), and dual-degrees (M.P.H./M.P.A., M.D./M.P.H.).",
"title": "Academics"
},
{
"paragraph_id": 93,
"text": "The Brown University School of Professional Studies currently offers blended learning Executive master's degrees in Healthcare Leadership, Cyber Security, and Science and Technology Leadership. The master's degrees are designed to help students who have a job and life outside of academia to progress in their respective fields. The students meet in Providence every 6–7 weeks for a weekly seminar each trimester.",
"title": "Academics"
},
{
"paragraph_id": 94,
"text": "The university has also invested in MOOC development starting in 2013, when two courses, Archeology's Dirty Little Secrets and The Fiction of Relationship, both of which received thousands of students. However, after a year of courses, the university broke its contract with Coursera and revamped its online persona and MOOC development department. By 2017, the university released new courses on edx, two of which were The Ethics of Memory and Artful Medicine: Art's Power to Enrich Patient Care. In January 2018, Brown published its first \"game-ified\" course called Fantastic Places, Unhuman Humans: Exploring Humanity Through Literature, which featured out-of-platform games to help learners understand materials, as well as a story-line that immerses users into a fictional world to help characters along their journey.",
"title": "Academics"
},
{
"paragraph_id": 95,
"text": "Undergraduate admission to Brown University is considered \"most selective\" by U.S. News & World Report. For the undergraduate class of 2026, Brown received 50,649 applications—the largest applicant pool in the university's history and a 9% increase from the prior year. Of these applicants, 2,560 were admitted for an acceptance rate of 5.0%, the lowest in the university's history.",
"title": "Admissions and financial aid"
},
{
"paragraph_id": 96,
"text": "In 2021, the university reported a yield rate of 69%. For the academic year 2019–20 the university received 2,030 transfer applications, of which 5.8% were accepted.",
"title": "Admissions and financial aid"
},
{
"paragraph_id": 97,
"text": "Brown's admissions policy is stipulated need-blind for all domestic first-year applicants. In 2017, Brown announced that loans would be eliminated from all undergraduate financial aid awards starting in 2018–2019, as part of a new $30 million campaign called the Brown Promise. In 2016–17, the university awarded need-based scholarships worth $120.5 million. The average need-based award for the class of 2020 was $47,940.",
"title": "Admissions and financial aid"
},
{
"paragraph_id": 98,
"text": "In 2017, the Graduate School accepted 11% of 9,215 applicants. In 2021, Brown received a record 948 applications for roughly 90 spots in its Master of Public Health Degree.",
"title": "Admissions and financial aid"
},
{
"paragraph_id": 99,
"text": "In 2020, U.S. News ranked Brown's Warren Alpert Medical School the 9th most selective in the country, with an acceptance rate of 2.8 percent.",
"title": "Admissions and financial aid"
},
{
"paragraph_id": 100,
"text": "Brown University is accredited by the New England Commission of Higher Education. For their 2021 rankings, The Wall Street Journal/Times Higher Education ranked Brown 5th in the \"Best Colleges 2021\" edition.",
"title": "Rankings"
},
{
"paragraph_id": 101,
"text": "The Forbes magazine annual ranking of \"America's Top Colleges 2022\"—which ranked 600 research universities, liberal arts colleges and service academies—ranked Brown 19th overall and 18th among universities.",
"title": "Rankings"
},
{
"paragraph_id": 102,
"text": "U.S. News & World Report ranked Brown 9th among national universities in its 2023 edition. The 2022 edition also ranked Brown 2nd for undergraduate teaching, 25th in Most Innovative Schools, and 14th in Best Value Schools.",
"title": "Rankings"
},
{
"paragraph_id": 103,
"text": "Washington Monthly ranked Brown 40th in 2022 among 442 national universities in the U.S. based on its contribution to the public good, as measured by social mobility, research, and promoting public service.",
"title": "Rankings"
},
{
"paragraph_id": 104,
"text": "In 2022, U.S. News & World Report ranks Brown 129th globally.",
"title": "Rankings"
},
{
"paragraph_id": 105,
"text": "In 2014, Forbes magazine ranked Brown 7th on its list of \"America's Most Entrepreneurial Universities.\" The Forbes analysis looked at the ratio of \"alumni and students who have identified themselves as founders and business owners on LinkedIn\" and the total number of alumni and students. LinkedIn particularized the Forbes rankings, placing Brown third (between MIT and Princeton) among \"Best Undergraduate Universities for Software Developers at Startups.\" LinkedIn's methodology involved a career-path examination of \"millions of alumni profiles\" in its membership database.",
"title": "Rankings"
},
{
"paragraph_id": 106,
"text": "In 2016, 2017, 2018, and 2021 the university produced the most Fulbright recipients of any university in the nation. Brown has also produced the 7th most Rhodes Scholars of all colleges and universities in the United States.",
"title": "Rankings"
},
{
"paragraph_id": 107,
"text": "Brown is a member of the Association of American Universities since 1933 and is classified among \"R1: Doctoral Universities – Very High Research Activity\". In FY 2017, Brown spent $212.3 million on research and was ranked 103rd in the United States by total R&D expenditure by National Science Foundation. In 2021 Brown's School of Public Health received the 4th most funding in NIH awards among schools of public health in the U.S.",
"title": "Research"
},
{
"paragraph_id": 108,
"text": "In 2014, Brown tied with the University of Connecticut for the highest number of reported rapes in the nation, with its \"total of reports of rape\" on their main campus standing at 43. However, such rankings have been criticized for failing to account for how different campus environments can encourage or discourage individuals from reporting sexual assault cases, thereby affecting the number of reported rapes.",
"title": "Student life"
},
{
"paragraph_id": 109,
"text": "Established in 1950, Spring Weekend is an annual spring music festival for students. Historical performers at the festival have included Ella Fitzgerald, Dizzy Gillespie, Ray Charles, Bob Dylan, Janis Joplin, Bruce Springsteen, and U2. More recent headliners include Kendrick Lamar, Young Thug, Daniel Caesar, Anderson .Paak, Mitski, Aminé, and Mac DeMarco. Since 1960, Spring Weekend has been organized by the student-run Brown Concert Agency.",
"title": "Student life"
},
{
"paragraph_id": 110,
"text": "Approximately 12 percent of Brown students participate in Greek Life. The university recognizes thirteen active Greek organizations: six fraternities (Beta Omega Chi, Beta Rho Pi, Delta Tau, Delta Phi, Kappa Alpha Psi, and Theta Alpha), five sororities (Alpha Chi Omega, Delta Sigma Theta, Delta Gamma, Kappa Delta, and Kappa Alpha Theta,), one co-ed house (Zeta Delta Xi), and one co-ed literary society (Alpha Delta Phi). Other Greek-lettered organizations that have been historically active at Brown University include Alpha Kappa Alpha, Alpha Phi Alpha, and Lambda Upsilon Lambda.",
"title": "Student life"
},
{
"paragraph_id": 111,
"text": "Since the early 1950s, all Greek organizations on campus have been located in Wriston Quadrangle. The organizations are overseen by the Greek Council.",
"title": "Student life"
},
{
"paragraph_id": 112,
"text": "An alternative to Greek-letter organizations are Brown's program houses, which are organized by themes. As with Greek houses, the residents of program houses select their new members, usually at the start of the spring semester. Examples of program houses are St. Anthony Hall (located in King House), Buxton International House, the Machado French/Hispanic/Latinx House, Technology House, Harambee (African culture) House, Social Action House and Interfaith House.",
"title": "Student life"
},
{
"paragraph_id": 113,
"text": "All students not in program housing enter a lottery for general housing. Students form groups and are assigned time slots during which they can pick among the remaining housing options.",
"title": "Student life"
},
{
"paragraph_id": 114,
"text": "The earliest societies at Brown were devoted to oration and debate. The Pronouncing Society is mentioned in the diary of Solomon Drowne, class of 1773, who was voted its president in 1771. The organization seems to have disappeared during the American Revolutionary War. Subsequent societies include the Misokosmian Society (est. 1798 and renamed the Philermenian Society), the Philandrian Society (est. 1799), the United Brothers (1806), the Philophysian Society (1818), and the Franklin Society (1824). Societies served social as well as academic purposes, with many supporting literary debate and amassing large libraries. Older societies generally aligned with Federalists while younger societies generally leaned Republican.",
"title": "Student life"
},
{
"paragraph_id": 115,
"text": "Societies remained popular into the 1860s, after which they were largely replaced by fraternities.",
"title": "Student life"
},
{
"paragraph_id": 116,
"text": "The Cammarian Club was at first a semi-secret society that \"tapped\" 15 seniors each year. In 1915, self-perpetuating membership gave way to popular election by the student body, and thenceforward the club served as the de facto undergraduate student government. The organization was dissolved in 1971 and ultimately succeeded by a formal student government.",
"title": "Student life"
},
{
"paragraph_id": 117,
"text": "Societas Domi Pacificae, known colloquially as \"Pacifica House\", is a present-day, self-described secret society. It purports a continuous line of descent from the Franklin Society of 1824, citing a supposed intermediary \"Franklin Society\" traceable in the nineteenth century.",
"title": "Student life"
},
{
"paragraph_id": 118,
"text": "There are over 300 registered student organizations on campus with diverse interests. The Student Activities Fair, during the orientation program, provides first-year students the opportunity to become acquainted with a wide range of organizations. A sample of organizations includes:",
"title": "Student life"
},
{
"paragraph_id": 119,
"text": "In 2023, 38% of Brown's students identified as being LGBTQ+, in a poll by The Brown Daily Herald. The 2023 LGBTQ+ self-identification level was an increase, up from 14% LGBT identification in 2010. \"Bisexual\" was the most common answer amongst LGBTQ+ respondents to the poll.",
"title": "Student life"
},
{
"paragraph_id": 120,
"text": "Brown has several resource centers on campus. The centers often act as sources of support as well as safe spaces for students to explore certain aspects of their identity. Additionally, the centers often provide physical spaces for students to study and have meetings. Although most centers are identity-focused, some provide academic support as well.",
"title": "Student life"
},
{
"paragraph_id": 121,
"text": "The Brown Center for Students of Color (BCSC) is a space that provides support for students of color. Established in 1972 at the demand of student protests, the BCSC encourages students to engage in critical dialogue, develop leadership skills, and promote social justice. The center houses various programs for students to share their knowledge and engage in discussion. Programs include the Third World Transition Program, the Minority Peer Counselor Program, the Heritage Series, and other student-led initiatives. Additionally, the BCSC hopes to foster community among the students it serves by providing spaces for students to meet and study.",
"title": "Student life"
},
{
"paragraph_id": 122,
"text": "The Sarah Doyle Women's Center aims to provide a space for members of the Brown community to examine and explore issues surrounding gender. The center was named after one of the first women to attend Brown, Sarah Doyle. The center emphasizes intersectionality in its conversations on gender, encouraging people to see gender as present and relevant in various aspects of life. The center hosts programs and workshops in order to facilitate dialogue and provide resources for students, faculty, and staff.",
"title": "Student life"
},
{
"paragraph_id": 123,
"text": "Other centers include the LGBTQ Center, the Undocumented, First-Generation College and Low-Income Student (U-FLi) Center, and the Curricular Resource Center.",
"title": "Student life"
},
{
"paragraph_id": 124,
"text": "On December 5, 1968, several Black women from Pembroke College initiated a walkout in protest of an atmosphere at the colleges described by Black students as a \"stifling, frustrating, [and] degrading place for Black students\" after feeling the colleges were non-responsive to their concerns. In total, 65 Black students participated in the walkout. Their principal demand was to increase Black student enrollment to 11% of the student populace, in an attempt to match that of the proportion in the US. This ultimately resulted in a 300% increase in Black enrollment the following year, but some demands have yet to be met.",
"title": "Student life"
},
{
"paragraph_id": 125,
"text": "In the mid-1980s, under student pressure, the university divested from certain companies involved in South Africa. Some students were still unsatisfied with partial divestment and began a fast in Manning Chapel and the university disenrolled them. In April 1987, \"dozens\" of students interrupted a university corporation meeting, leading to 20 being put on probation.",
"title": "Student life"
},
{
"paragraph_id": 126,
"text": "In early December 2023, forty-one students held a sit-in, resulting in their arrests. The students were protesting the Israel-Hamas war and calling for a ceasefire, as well as for the university to divest from companies that \"allegedly facilitate the \"Israeli military occupation\" in Gaza.\"",
"title": "Student life"
},
{
"paragraph_id": 127,
"text": "Brown is a member of the Ivy League athletic conference, which is categorized as a Division I (top-level) conference of the National Collegiate Athletic Association (NCAA).",
"title": "Athletics"
},
{
"paragraph_id": 128,
"text": "The Brown Bears has one of the largest university sports programs in the United States, sponsoring 32 varsity intercollegiate teams. Brown's athletic program is one of the U.S. News & World Report top 20—the \"College Sports Honor Roll\"—based on breadth of the program and athletes' graduation rates.",
"title": "Athletics"
},
{
"paragraph_id": 129,
"text": "Brown's newest varsity team is women's rugby, promoted from club-sport status in 2014. Brown women's rowing has won 7 national titles between 1999 and 2011. Brown men's rowing perennially finishes in the top 5 in the nation, most recently winning silver, bronze, and silver in the national championship races of 2012, 2013, and 2014. The men's and women's crews have also won championship trophies at the Henley Royal Regatta and the Henley Women's Regatta. Brown's men's soccer is consistently ranked in the top 20 and has won 18 Ivy League titles overall; recent soccer graduates play professionally in Major League Soccer and overseas.",
"title": "Athletics"
},
{
"paragraph_id": 130,
"text": "Brown football, under its most successful coach historically, Phil Estes, won Ivy League championships in 1999, 2005, and 2008. high-profile alumni of the football program include former Houston Texans head coach Bill O'Brien; former Penn State football coach Joe Paterno, Heisman Trophy namesake John W. Heisman, and Pollard Award namesake Fritz Pollard.",
"title": "Athletics"
},
{
"paragraph_id": 131,
"text": "Brown women's gymnastics won the Ivy League tournament in 2013 and 2014. The Brown women's sailing team has won 5 national championships, most recently in 2019 while the coed sailing team won 2 national championships in 1942 and 1948. Both teams are consistency ranked in the top 10 in the nation.",
"title": "Athletics"
},
{
"paragraph_id": 132,
"text": "The first intercollegiate ice hockey game in America was played between Brown and Harvard on January 19, 1898. The first university rowing regatta larger than a dual-meet was held between Brown, Harvard, and Yale at Lake Quinsigamond in Massachusetts on July 26, 1859.",
"title": "Athletics"
},
{
"paragraph_id": 133,
"text": "Brown also supports competitive intercollegiate club sports, including ultimate frisbee. The men's ultimate team, Brownian Motion, has won three national championships, in 2000, 2005, and 2019.",
"title": "Athletics"
},
{
"paragraph_id": 134,
"text": "Alumni in politics and government include U.S. Secretary of State John Hay (1852), U.S. Secretary of State and U.S. Attorney General Richard Olney (1856), Chief Justice of the United States and U.S. Secretary of State Charles Evans Hughes (1881), Governor of Wyoming Territory and Nebraska Governor John Milton Thayer (1841), Rhode Island Governor Augustus Bourn (1855), Louisiana Governor Bobby Jindal '92, U.S. Senator Maggie Hassan '80 of New Hampshire, Delaware Governor Jack Markell '82, Rhode Island Representative David Cicilline '83, Minnesota Representative Dean Phillips '91, 2020 Presidential candidate and entrepreneur Andrew Yang '96, DNC Chair Tom Perez '83, diplomat Richard Holbrooke '62, and diplomat W. Stuart Symington '74.",
"title": "Notable people"
},
{
"paragraph_id": 135,
"text": "Prominent alumni in business and finance include philanthropist John D. Rockefeller Jr. (1897), managing director of McKinsey & Company and \"father of modern management consulting\" Marvin Bower '25, former Chair of the Federal Reserve and current U.S. Secretary of the Treasury Janet Yellen '67, World Bank President Jim Yong Kim '82, Bank of America CEO Brian Moynihan '81, CNN founder Ted Turner '60, IBM chairman and CEO Thomas Watson Jr. '37, co-founder of Starwood Capital Group Barry Sternlicht '82, Apple Inc. CEO John Sculley '61, Blackberry Ltd. CEO John S. Chen '78, Facebook CFO David Ebersman '91, and Uber CEO Dara Khosrowshahi '91. Companies founded by Brown alumni include CNN,The Wall Street Journal, Searchlight Pictures, Netgear, W Hotels, Workday, Warby Parker, Casper, Figma, ZipRecruiter, and Cards Against Humanity.",
"title": "Notable people"
},
{
"paragraph_id": 136,
"text": "Alumni in the arts and media include actors Emma Watson '14, John Krasinski '01, Daveed Diggs '04, Julie Bowen '91, Tracee Ellis Ross '94, and Jessica Capshaw '98; NPR program host Ira Glass '82; singer-composer Mary Chapin Carpenter '81; humorist and Marx Brothers screenwriter S.J. Perelman '25; novelists Nathanael West '24, Jeffrey Eugenides '83, Edwidge Danticat (MFA '93), and Marilynne Robinson '66; and composer and synthesizer pioneer Wendy Carlos '62, journalist James Risen '77; political pundit Mara Liasson; MSNBC hosts Alex Wagner '99 and Chris Hayes '01; New York Times, publisher A. G. Sulzberger '03, and magazine editor John F. Kennedy Jr. '83.",
"title": "Notable people"
},
{
"paragraph_id": 137,
"text": "Important figures in the history of education include the father of American public school education Horace Mann (1819), civil libertarian and Amherst College president Alexander Meiklejohn, first president of the University of South Carolina Jonathan Maxcy (1787), Bates College founder Oren B. Cheney (1836), University of Michigan president (1871–1909) James Burrill Angell (1849), University of California president (1899–1919) Benjamin Ide Wheeler (1875), and Morehouse College's first African-American president John Hope (1894).",
"title": "Notable people"
},
{
"paragraph_id": 138,
"text": "Alumni in the computer sciences and industry include architect of Intel 386, 486, and Pentium microprocessors John H. Crawford '75, inventor of the first silicon transistor Gordon Kidd Teal '31, MongoDB founder Eliot Horowitz '03, Figma founder Dylan Field, and Macintosh developer Andy Hertzfeld '75.",
"title": "Notable people"
},
{
"paragraph_id": 139,
"text": "Other notable alumni include \"Lafayette of the Greek Revolution\" and its historian Samuel Gridley Howe (1821), NASA head during first seven Apollo missions Thomas O. Paine '42, sportscaster Chris Berman '77, Houston Texans head coach Bill O'Brien '92, 2018 Miss America Cara Mund '16, Penn State football coach Joe Paterno '50, Heisman Trophy namesake John W. Heisman '91, distinguished professor of law Cortney Lollar '97, Olympic and world champion triathlete Joanna Zeiger, royals and nobles such as Prince Rahim Aga Khan, Prince Faisal bin Al Hussein of the Hashemite Kingdom of Jordan, Princess Leila Pahlavi of Iran '92, Prince Nikolaos of Greece and Denmark, Prince Nikita Romanov, Princess Theodora of Greece and Denmark, Prince Jaime of Bourbon-Parma, Duke of San Jaime and Count of Bardi, Prince Ra'ad bin Zeid, Lady Gabriella Windsor, Prince Alexander von Fürstenberg, Countess Cosima von Bülow Pavoncelli, and her half-brother Prince Alexander-Georg von Auersperg.",
"title": "Notable people"
},
{
"paragraph_id": 140,
"text": "Nobel Laureate alumni include humanitarian Jerry White '87 (Peace, 1997), biologist Craig Mello '82 (Physiology or Medicine, 2006), economist Guido Imbens (AM '89, PhD '91; Economic Sciences, 2021), and economist Douglas Diamond '75 (Economic Sciences, 2022).",
"title": "Notable people"
},
{
"paragraph_id": 141,
"text": "Among Brown's past and present faculty are seven Nobel Laureates: Lars Onsager (Chemistry, 1968), Leon Cooper (Physics, 1972), George Snell (Physiology or Medicine, 1980), George Stigler (Economic Sciences, 1982), Henry David Abraham (Peace, 1985), Vernon L. Smith (Economic Sciences, 2002), and J. Michael Kosterlitz (Physics, 2016).",
"title": "Notable people"
},
{
"paragraph_id": 142,
"text": "Notable past and present faculty include biologists Anne Fausto-Sterling (Ph.D. 1970) and Kenneth R. Miller (Sc.B. 1970); computer scientists Robert Sedgewick and Andries van Dam; economists Hyman Minsky, Glenn Loury, George Stigler, Mark Blyth, and Emily Oster; historians Gordon S. Wood and Joan Wallach Scott; mathematicians David Gale, David Mumford, Mary Cartwright, and Solomon Lefschetz; physicists Sylvester James Gates and Gerald Guralnik. Faculty in literature include Chinua Achebe, Ama Ata Aidoo, and Carlos Fuentes. Among Brown's faculty and fellows in political science, and public affairs are the former prime minister of Italy and former EU chief, Romano Prodi; former president of Brazil, Fernando Cardoso; former president of Chile, Ricardo Lagos; and son of Soviet Premier Nikita Khrushchev, Sergei Khrushchev. Other faculty include philosopher Martha Nussbaum, author Ibram X. Kendi, and public health doctor Ashish Jha.",
"title": "Notable people"
},
{
"paragraph_id": 143,
"text": "Brown's reputation as an institution with a free-spirited, iconoclastic student body is portrayed in fiction and popular culture. Family Guy character Brian Griffin is a Brown alumnus. The O.C.'s main character Seth Cohen is denied acceptance to Brown while his girlfriend Summer Roberts is accepted. In The West Wing, Amy Gardner is a Brown alumna.",
"title": "Notable people"
},
{
"paragraph_id": 144,
"text": "Media related to Brown University at Wikimedia Commons",
"title": "External links"
}
] | Brown University is a private Ivy League research university in Providence, Rhode Island. It is the seventh-oldest institution of higher education in the United States, founded in 1764 as the College in the English Colony of Rhode Island and Providence Plantations. One of nine colonial colleges chartered before the American Revolution, it was the first college in the United States to codify in its charter that admission and instruction of students was to be equal regardless of their religious affiliation. The university is home to the oldest applied mathematics program in the United States, the oldest engineering program in the Ivy League, and the third-oldest medical program in New England. It was one of the early doctoral-granting U.S. institutions in the late 19th century, adding master's and doctoral studies in 1887. In 1969, it adopted its Open Curriculum after a period of student lobbying, which eliminated mandatory "general education" distribution requirements, made students "the architects of their own syllabus", and allowed them to take any course for a grade of satisfactory (Pass) or no-credit (Fail) which is unrecorded on external transcripts. In 1971, Brown's coordinate women's institution, Pembroke College, was fully merged into the university. The university comprises the College, the Graduate School, Alpert Medical School, the School of Engineering, the School of Public Health and the School of Professional Studies. Its international programs are organized through the Watson Institute for International and Public Affairs, and it is academically affiliated with the Marine Biological Laboratory and the Rhode Island School of Design; with the latter, it offers undergraduate and graduate dual degree programs. Brown's main campus is in the College Hill neighborhood of Providence, Rhode Island. The university is surrounded by a federally listed architectural district with a dense concentration of Colonial-era buildings. Benefit Street, which runs along the campus's western edge, has one of America's richest concentrations of 17th- and 18th-century architecture. Brown's undergraduate admissions are among the most selective in the country, with an overall acceptance rate of 5% for the class of 2026. As of March 2022, 11 Nobel Prize winners have been affiliated with Brown as alumni, faculty, or researchers, as well as 1 Fields Medalist, 7 National Humanities Medalists and 11 National Medal of Science laureates. Other notable alumni include 27 Pulitzer Prize winners, 21 billionaires, 1 U.S. Supreme Court Chief Justice, 4 U.S. Secretaries of State, over 100 members of the United States Congress, 58 Rhodes Scholars, 22 MacArthur Genius Fellows, and 38 Olympic medalists. | 2001-09-10T03:33:29Z | 2023-12-26T01:30:10Z | [
"Template:Webarchive",
"Template:Efn",
"Template:Main",
"Template:Infobox U.S. college admissions",
"Template:See also",
"Template:'",
"Template:Clear",
"Template:Cite journal",
"Template:Subscription required",
"Template:As of",
"Template:Cvt",
"Template:Cite book",
"Template:Citation",
"Template:Official website",
"Template:Blockquote",
"Template:When",
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:Infobox US university ranking",
"Template:Bartable",
"Template:Commons category-inline",
"Template:Brown University",
"Template:Infobox university",
"Template:Update inline",
"Template:Portal",
"Template:Notelist",
"Template:Cite web",
"Template:Cite news",
"Template:Use mdy dates",
"Template:Further",
"Template:Authority control",
"Template:Div col end",
"Template:Navboxes",
"Template:Multiple image",
"Template:Div col",
"Template:Cite magazine",
"Template:Short description",
"Template:About"
] | https://en.wikipedia.org/wiki/Brown_University |
4,158 | Bill Atkinson | William "Bill" D. Atkinson (born March 17, 1951) is an American computer engineer and photographer. Atkinson worked at Apple Computer from 1978 to 1990.
Atkinson was the principal designer and developer of the graphical user interface (GUI) of the Apple Lisa and, later, one of the first thirty members of the original Apple Macintosh development team, and was the creator of the MacPaint application. He also designed and implemented QuickDraw, the fundamental toolbox that the Lisa and Macintosh used for graphics. QuickDraw's performance was essential for the success of the Macintosh GUI. He also was one of the main designers of the Lisa and Macintosh user interfaces. Atkinson also conceived, designed and implemented HyperCard, an early and influential hypermedia system. HyperCard put the power of computer programming and database design into the hands of nonprogrammers. In 1994, Atkinson received the EFF Pioneer Award for his contributions.
He received his undergraduate degree from the University of California, San Diego, where Apple Macintosh developer Jef Raskin was one of his professors. Atkinson continued his studies as a graduate student in neurochemistry at the University of Washington. Raskin invited Atkinson to visit him at Apple Computer; Steve Jobs persuaded him to join the company immediately as employee No. 51, and Atkinson never finished his PhD.
Around 1990, General Magic, cofounded by Bill Atkinson and two others, received the following press in Byte magazine:
The obstacles to General Magic's success may appear daunting, but General Magic is not your typical start-up company. Its partners include some of the biggest players in the worlds of computing, communications, and consumer electronics, and it's loaded with top-notch engineers who have been given a clean slate to reinvent traditional approaches to ubiquitous worldwide communications.
In 2007, Atkinson began working as an outside developer with Numenta, a startup working on computer intelligence. On his work there Atkinson said, "what Numenta is doing is more fundamentally important to society than the personal computer and the rise of the Internet."
Currently, Atkinson has combined his passion for computer programming with his love of nature photography to create art images. He takes close-up photographs of stones that have been cut and polished. His works are highly regarded for their resemblance to miniature landscapes which are hidden within the stones. Atkinson's 2004 book Within the Stone features a collection of his close-up photographs. The highly intricate and detailed images he creates are made possible by the accuracy and creative control of the digital printing process that he helped create.
Some of Atkinson's noteworthy contributions to the field of computing include:
Atkinson now works as a nature photographer. Actor Nelson Franklin portrayed him in the 2013 film Jobs. | [
{
"paragraph_id": 0,
"text": "William \"Bill\" D. Atkinson (born March 17, 1951) is an American computer engineer and photographer. Atkinson worked at Apple Computer from 1978 to 1990.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Atkinson was the principal designer and developer of the graphical user interface (GUI) of the Apple Lisa and, later, one of the first thirty members of the original Apple Macintosh development team, and was the creator of the MacPaint application. He also designed and implemented QuickDraw, the fundamental toolbox that the Lisa and Macintosh used for graphics. QuickDraw's performance was essential for the success of the Macintosh GUI. He also was one of the main designers of the Lisa and Macintosh user interfaces. Atkinson also conceived, designed and implemented HyperCard, an early and influential hypermedia system. HyperCard put the power of computer programming and database design into the hands of nonprogrammers. In 1994, Atkinson received the EFF Pioneer Award for his contributions.",
"title": ""
},
{
"paragraph_id": 2,
"text": "He received his undergraduate degree from the University of California, San Diego, where Apple Macintosh developer Jef Raskin was one of his professors. Atkinson continued his studies as a graduate student in neurochemistry at the University of Washington. Raskin invited Atkinson to visit him at Apple Computer; Steve Jobs persuaded him to join the company immediately as employee No. 51, and Atkinson never finished his PhD.",
"title": "Education"
},
{
"paragraph_id": 3,
"text": "Around 1990, General Magic's founding, with Bill Atkinson as one of the three cofounders, met the following press in Byte magazine:",
"title": "Career"
},
{
"paragraph_id": 4,
"text": "The obstacles to General Magic's success may appear daunting, but General Magic is not your typical start-up company. Its partners include some of the biggest players in the worlds of computing, communications, and consumer electronics, and it's loaded with top-notch engineers who have been given a clean slate to reinvent traditional approaches to ubiquitous worldwide communications.",
"title": "Career"
},
{
"paragraph_id": 5,
"text": "In 2007, Atkinson began working as an outside developer with Numenta, a startup working on computer intelligence. On his work there Atkinson said, \"what Numenta is doing is more fundamentally important to society than the personal computer and the rise of the Internet.\"",
"title": "Career"
},
{
"paragraph_id": 6,
"text": "Currently, Atkinson has combined his passion for computer programming with his love of nature photography to create art images. He takes close-up photographs of stones that have been cut and polished. His works are highly regarded for their resemblance to miniature landscapes which are hidden within the stones. Atkinson's 2004 book Within the Stone features a collection of his close-up photographs. The highly intricate and detailed images he creates are made possible by the accuracy and creative control of the digital printing process that he helped create.",
"title": "Career"
},
{
"paragraph_id": 7,
"text": "Some of Atkinson's noteworthy contributions to the field of computing include:",
"title": "Career"
},
{
"paragraph_id": 8,
"text": "Atkinson now works as a nature photographer. Actor Nelson Franklin portrayed him in the 2013 film Jobs.",
"title": "Career"
}
] | William "Bill" D. Atkinson is an American computer engineer and photographer. Atkinson worked at Apple Computer from 1978 to 1990. Atkinson was the principal designer and developer of the graphical user interface (GUI) of the Apple Lisa and, later, one of the first thirty members of the original Apple Macintosh development team, and was the creator of the MacPaint application. He also designed and implemented QuickDraw, the fundamental toolbox that the Lisa and Macintosh used for graphics. QuickDraw's performance was essential for the success of the Macintosh GUI. He also was one of the main designers of the Lisa and Macintosh user interfaces. Atkinson also conceived, designed and implemented HyperCard, an early and influential hypermedia system. HyperCard put the power of computer programming and database design into the hands of nonprogrammers. In 1994, Atkinson received the EFF Pioneer Award for his contributions. | 2001-09-10T13:57:57Z | 2023-11-25T02:57:22Z | [
"Template:Use mdy dates",
"Template:Cite patent",
"Template:Cite book",
"Template:Cite news",
"Template:Official website",
"Template:Apple celeb",
"Template:Short description",
"Template:Other people",
"Template:Original Macintosh Design Team",
"Template:Authority control",
"Template:Triangulation",
"Template:Infobox person",
"Template:Cite interview"
] | https://en.wikipedia.org/wiki/Bill_Atkinson |
4,160 | Battle of Lostwithiel | The Battle of Lostwithiel took place over a 13-day period from 21 August to 2 September 1644, around the town of Lostwithiel and along the River Fowey valley in Cornwall during the First English Civil War. A Royalist army led by Charles I of England defeated a Parliamentarian force commanded by the Earl of Essex.
Although Essex and most of the cavalry escaped, between 5,000 and 6,000 Parliamentarian infantry were forced to surrender. Since the Royalists were unable to feed so many, they were given a pass back to their own territory, arriving in Southampton a month later having lost nearly half their number to disease and desertion.
Considered one of the worst defeats suffered by Parliament over the course of the Wars of the Three Kingdoms, it secured South West England for the Royalists until early 1646.
During April and May 1644, Parliamentarian commanders Sir William Waller and the Earl of Essex combined their armies and carried out a campaign against King Charles and the Royalist garrisons surrounding Oxford. Trusting Waller to deal with the King in Oxfordshire, Essex divided the Parliamentarian army on 6 June and headed southwest to relieve the Royalist siege of Lyme in Dorset. Lyme had been under siege by King Charles' nephew, Prince Maurice, and the Royalists for nearly two months.
South-West England at that time was largely under the control of the Royalists. The town of Lyme, however, was a Parliamentarian stronghold and served as an important seaport for the Parliamentarian fleet of the Earl of Warwick. As Essex approached Lyme in mid-June, Prince Maurice ended the siege and took his troops west to Exeter.
Essex then proceeded further southwest toward Cornwall with the intent to relieve the siege of Plymouth. Plymouth was the only other significant Parliamentarian stronghold in the South-West and it was under siege by Richard Grenville and Cornish Royalists. Essex had been told by Lord Robartes, a wealthy politician and merchant from Cornwall, that the Parliamentarians would gain considerable military support if he moved against Grenville and freed Plymouth. Given Lord Robartes’ advice, Essex advanced toward Plymouth. His action caused Grenville to end the siege. Essex then advanced further west, believing that he could take full control of the South-West from the Royalists.
Meanwhile, in Oxfordshire, King Charles battled with the Parliamentarians and defeated Sir William Waller at the Battle of Cropredy Bridge on 29 June. On 12 July after a Royalist council of war recommended that Essex be dealt with before he could be reinforced, King Charles and his Oxford army departed Evesham. King Charles accepted the council's advice, not solely because it was good strategy, but more so because his Queen was in Exeter, where she had recently given birth to the Princess Henrietta and had been denied safe conduct to Bath by Essex.
On 26 July, King Charles arrived in Exeter and joined his Oxford army with the Royalist forces commanded by Prince Maurice. On that same day, Essex and his Parliamentary force entered Cornwall. One week later, as Essex bivouacked with his army at Bodmin, he learned that King Charles had defeated Waller; brought his Oxford army to the South-West; and joined forces with Prince Maurice. Essex had also seen that he was not getting the military support from the people of Cornwall as Lord Robartes asserted. At that time, Essex understood that he and his army were trapped in Cornwall and his only salvation would be reinforcements or an escape through the port of Fowey by means of the Parliamentarian fleet.
Essex immediately marched his troops five miles south to the small town of Lostwithiel, arriving on 2 August. He deployed his men in a defensive arc with detachments on the high ground to the north at Restormel Castle and the high ground to the east at Beacon Hill. Essex also sent a small contingent of foot south to secure the port of Fowey, aiming to eventually evacuate his infantry by sea. At Essex's disposal was a force of 6,500 foot and 3,000 horse.
Aided by intelligence provided by the people of Cornwall, King Charles followed westward, slowly and deliberately cutting off the potential escape routes that Essex might attempt to utilize. On 6 August King Charles communicated with Essex, calling for him to surrender. Stalling for several days, Essex considered the offer but ultimately refused.
On 11 August, Grenville and the Cornish Royalists entered Bodmin forcing out Essex's rear-guard cavalry. Grenville then proceeded south across Respryn Bridge to meet and join forces with King Charles and Prince Maurice. It is estimated that the Royalist forces at that time were composed of 12,000 foot and 7,000 horse. Over the next two days the Royalists deployed detachments along the east side of the River Fowey to prevent a Parliamentarian escape across country. Finally the Royalists sent 200 foot with artillery south to garrison the fort at Polruan, effectively blocking the entrance to the harbour of Fowey. At about that time, Essex learned that reinforcements under the command of Sir John Middleton were turned back by the Royalists at Bridgwater in Somerset.
At 07:00 hours on 21 August, King Charles launched his first attack on Essex and the Parliamentarians at Lostwithiel. From the north, Grenville and the Cornish Royalists attacked Restormel Castle and easily dislodged the Parliamentarians who fell back quickly. From the east, King Charles and the Oxford army captured Beacon Hill with little resistance from the Parliamentarians. Prince Maurice and his force occupied Druid Hill. Casualties were fairly low and by nightfall the fighting ended and the Royalists held the high ground on the north and east sides of Lostwithiel.
For the next couple of days the two opposing forces exchanged fire only in a number of small skirmishes. On 24 August, King Charles further tightened the noose encircling the Parliamentarians when he sent Lord Goring and Sir Thomas Bassett to secure the town of St Blazey and the area to the southwest of Lostwithiel. This reduced the foraging area for the Parliamentarians and access to the coves and inlets in the vicinity of the port of Par.
Essex and the Parliamentarians were now totally surrounded and boxed into a two-mile by five-mile area spanning from Lostwithiel in the north to the port of Fowey in the south. Knowing that he would not be able to fight his way out, Essex made his final plans for an escape. Since a sea evacuation of his cavalry would not be possible, Essex ordered his cavalry commander William Balfour to attempt a breakout to Plymouth. For the infantry, Essex planned to retreat south and meet Lord Warwick and the Parliamentarian fleet at Fowey. At 03:00 hours on 31 August, Balfour and 2,000 members of his cavalry executed the first step of Essex's plan when they successfully crossed the River Fowey and escaped intact without engaging the Royalist defenders.
Early on the morning of 31 August, the Parliamentarians ransacked and looted Lostwithiel and began their withdrawal south. At 07:00 hours, the Royalists observed the actions of the Parliamentarians and immediately proceeded to attack. Grenville attacked from the north. King Charles and Prince Maurice crossed the River Fowey, joined up with Grenville, and entered Lostwithiel. Together the Royalists engaged the Parliamentarian rear-guards and quickly took possession of the town. The Royalists also sent detachments down along the east side of the River Fowey to protect against any further breakouts and to capture the town of Polruan.
The Royalists then began to pursue Essex and the Parliamentarian infantry down the river valley. At the outset the Royalists pushed the Parliamentarians nearly three miles south through the hedged fields, hills and valleys. At the narrow pass near St. Veep, Philip Skippon, Essex's commander of the infantry, counter-attacked the Royalists and pushed them back several fields, attempting to give Essex time to set up a line of defense further south. At 11:00 hours, the Royalist cavalry mounted a charge and won back the territory lost. There was a lull in the battle at 12:00 hours as King Charles waited for his full army to come up and reform.
The fighting resumed and continued through the afternoon as the Parliamentarians tried to disengage and continue south. At 16:00 hours, the Parliamentarians tried again to counter-attack with their remaining cavalry, only to be driven back by King Charles’ Life Guard. About a mile north of Castle Dore, the Parliamentarians' right flank began to give way. At 18:00 hours, when the Parliamentarians were pushed back to Castle Dore, they made their last attempt to rally, only to be pushed back and surrounded.
About that time the fighting ended, with the Royalists satisfied in their accomplishments of the day. Exhausted and discouraged, the Parliamentarians hunkered down for the night. Later that evening, under cover of darkness, Essex and his command staff stole away to the seashore, where they used a fishing boat to flee to Plymouth, leaving Skippon in command.
Early on 1 September, Skippon met with his officers to inform them about Essex's escape and to discuss alternatives. It was decided that they would approach King Charles and seek terms. Concerned that Parliamentarian reinforcements might be on their way, the King quickly agreed on 2 September to generous terms. The battle was over. Six thousand Parliamentarians were taken as prisoners. Their weapons were taken away and they were marched to Southampton. They suffered the wrath of the Cornish people en route, and as many as 3,000 died of exposure and disease along the way. Those that survived the journey were, however, eventually set free. Total casualties associated with the battle were extremely high, especially when considering those who died on the march back to Southampton. In addition to those deaths, as many as 700 Parliamentarians are estimated to have been killed or wounded during the fighting in Cornwall, along with an estimated 500 Royalists.
The Battle of Lostwithiel was a great victory for King Charles and the greatest loss that the Parliamentarians would suffer in the First English Civil War. For King Charles the victory secured the South-West for the remainder of the war and mitigated criticism for a while against the Royalist war effort.
For the Parliamentarians, the defeat resulted in recriminations with Middleton ultimately being blamed for his failure to break through with reinforcements. The Parliamentarian failure at Lostwithiel along with the failure to defeat King Charles at the Second Battle of Newbury ultimately led Parliament to adopt the Self-denying Ordinance and led to the implementation of the New Model Army. | [
{
"paragraph_id": 0,
"text": "The Battle of Lostwithiel took place over a 13-day period from 21 August to 2 September 1644, around the town of Lostwithiel and along the River Fowey valley in Cornwall during the First English Civil War. A Royalist army led by Charles I of England defeated a Parliamentarian force commanded by the Earl of Essex.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Although Essex and most of the cavalry escaped, between 5,000 and 6,000 Parliamentarian infantry were forced to surrender. Since the Royalists were unable to feed so many, they were given a pass back to their own territory, arriving in Southampton a month later having lost nearly half their number to disease and desertion.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Considered one of the worst defeats suffered by Parliament over the course of the Wars of the Three Kingdoms, it secured South West England for the Royalists until early 1646.",
"title": ""
},
{
"paragraph_id": 3,
"text": "During April and May 1644, Parliamentarian commanders Sir William Waller and the Earl of Essex combined their armies and carried out a campaign against King Charles and the Royalist garrisons surrounding Oxford. Trusting Waller to deal with the King in Oxfordshire, Essex divided the Parliamentarian army on 6 June and headed southwest to relieve the Royalist siege of Lyme in Dorset. Lyme had been under siege by King Charles' nephew, Prince Maurice, and the Royalists for nearly two months.",
"title": "Background"
},
{
"paragraph_id": 4,
"text": "South-West England at that time was largely under the control of the Royalists. The town of Lyme, however, was a Parliamentarian stronghold and served as an important seaport for the Parliamentarian fleet of the Earl of Warwick. As Essex approached Lyme in mid-June Prince Maurice ended the siege and took his troops west to Exeter.",
"title": "Background"
},
{
"paragraph_id": 5,
"text": "Essex then proceeded further southwest toward Cornwall with the intent to relieve the siege of Plymouth. Plymouth was the only other significant Parliamentarian stronghold in the South-West and it was under siege by Richard Grenville and Cornish Royalists. Essex had been told by Lord Robartes, a wealthy politician and merchant from Cornwall, that the Parliamentarians would gain considerable military support if he moved against Grenville and freed Plymouth. Given Lord Robartes’ advice, Essex advanced toward Plymouth. His action caused Grenville to end the siege. Essex then advanced further west, believing that he could take full control of the South-West from the Royalists.",
"title": "Background"
},
{
"paragraph_id": 6,
"text": "Meanwhile, in Oxfordshire, King Charles battled with the Parliamentarians and defeated Sir William Waller at the Battle of Cropredy Bridge on 29 June. On 12 July after a Royalist council of war recommended that Essex be dealt with before he could be reinforced, King Charles and his Oxford army departed Evesham. King Charles accepted the council's advice, not solely because it was good strategy, but more so because his Queen was in Exeter, where she had recently given birth to the Princess Henrietta and had been denied safe conduct to Bath by Essex.",
"title": "Background"
},
{
"paragraph_id": 7,
"text": "On 26 July, King Charles arrived in Exeter and joined his Oxford army with the Royalist forces commanded by Prince Maurice. On that same day, Essex and his Parliamentary force entered Cornwall. One week later, as Essex bivouacked with his army at Bodmin, he learned that King Charles had defeated Waller; brought his Oxford army to the South-West; and joined forces with Prince Maurice. Essex had also seen that he was not getting the military support from the people of Cornwall as Lord Robartes asserted. At that time, Essex understood that he and his army were trapped in Cornwall and his only salvation would be reinforcements or an escape through the port of Fowey by means of the Parliamentarian fleet.",
"title": "Trapped in Cornwall"
},
{
"paragraph_id": 8,
"text": "Essex immediately marched his troops five miles south to the small town of Lostwithiel arriving on 2 August. He immediately deployed his men in a defensive arc with detachments on the high ground to the north at Restormel Castle and the high ground to the east at Beacon Hill. Essex also sent a small contingent of foot south to secure the port of Fowey aiming to eventually evacuate his infantry by sea. At Essex's disposal was a force of 6,500 foot and 3,000 horse.",
"title": "Trapped in Cornwall"
},
{
"paragraph_id": 9,
"text": "Aided through intelligence provided by the people of Cornwall , King Charles followed westward, slowly and deliberately cutting off the potential escape routes that Essex might attempt to utilize. On 6 August King Charles communicated with Essex, calling for him to surrender. Stalling for several days, Essex considered the offer but ultimately refused.",
"title": "Trapped in Cornwall"
},
{
"paragraph_id": 10,
"text": "On 11 August, Grenville and the Cornish Royalists entered Bodmin forcing out Essex's rear-guard cavalry. Grenville then proceeded south across Respryn Bridge to meet and join forces with King Charles and Prince Maurice. It is estimated that the Royalist forces at that time were composed of 12,000 foot and 7,000 horse. Over the next two days the Royalists deployed detachments along the east side of the River Fowey to prevent a Parliamentarian escape across country. Finally the Royalists sent 200 foot with artillery south to garrison the fort at Polruan, effectively blocking the entrance to the harbour of Fowey. At about that time, Essex learned that reinforcements under the command of Sir John Middleton were turned back by the Royalists at Bridgwater in Somerset.",
"title": "Trapped in Cornwall"
},
{
"paragraph_id": 11,
"text": "At 07:00 hours on 21 August, King Charles launched his first attack on Essex and the Parliamentarians at Lostwithiel. From the north, Grenville and the Cornish Royalists attacked Restormel Castle and easily dislodged the Parliamentarians who fell back quickly. From the east, King Charles and the Oxford army captured Beacon Hill with little resistance from the Parliamentarians. Prince Maurice and his force occupied Druid Hill. Casualties were fairly low and by nightfall the fighting ended and the Royalists held the high ground on the north and east sides of Lostwithiel.",
"title": "First battle - 21–30 August 1644"
},
{
"paragraph_id": 12,
"text": "For the next couple of days the two opposing forces exchanged fire only in a number of small skirmishes. On 24 August, King Charles further tightened the noose encircling the Parliamentarians when he sent Lord Goring and Sir Thomas Bassett to secure the town of St Blazey and the area to the southwest of Lostwithiel. This reduced the foraging area for the Parliamentarians and access to the coves and inlets in the vicinity of the port of Par.",
"title": "First battle - 21–30 August 1644"
},
{
"paragraph_id": 13,
"text": "Essex and the Parliamentarians were now totally surrounded and boxed into a two-mile by five-mile area spanning from Lostwithiel in the north to the port of Fowey in the south. Knowing that he would not be able to fight his way out, Essex made his final plans for an escape. Since a sea evacuation of his cavalry would not be possible, Essex ordered his cavalry commander William Balfour to attempt a breakout to Plymouth. For the infantry, Essex planned to retreat south and meet Lord Warwick and the Parliamentarian fleet at Fowey. At 03:00 hours on 31 August, Balfour and 2,000 members of his cavalry executed the first step of Essex's plan when they successfully crossed the River Fowey and escaped intact without engaging the Royalist defenders.",
"title": "First battle - 21–30 August 1644"
},
{
"paragraph_id": 14,
"text": "Early on the morning on 31 August, the Parliamentarians ransacked and looted Lostwithiel and began their withdrawal south. At 07:00 hours, the Royalists observed the actions of the Parliamentarians and immediately proceeded to attack. Grenville attacked from the north. King Charles and Prince Maurice crossed the River Fowey, joined up with Grenville, and entered Lostwithiel. Together the Royalists engaged the Parliamentarian rear-guards and quickly took possession of the town. The Royalist also sent detachments down along the east side of the River Fowey to protect against any further breakouts and to capture the town of Polruan.",
"title": "Second battle - 31 August - 2 September 1644"
},
{
"paragraph_id": 15,
"text": "The Royalists then began to pursue Essex and the Parliamentarian infantry down the river valley. At the outset the Royalist pushed the Parliamentarians nearly three miles south through the hedged fields, hills and valleys. At the narrow pass near St. Veep, Philip Skippon, Essex's commander of the infantry, counter-attacked the Royalists and pushed them back several fields attempting to give Essex time to set up a line of defense further south. At 11:00 hours, the Royalist cavalry mounted a charge and won back the territory lost. There was a lull in the battle at 12:00 hours as King Charles waited for his full army to come up and reform.",
"title": "Second battle - 31 August - 2 September 1644"
},
{
"paragraph_id": 16,
"text": "The fighting resumed and continued through the afternoon as the Parliamentarians tried to disengage and continue south. At 16:00 hours, the Parliamentarians tried again to counter-attack with their remaining cavalry only to be driven back by King Charles’ Life Guard. About a mile north of Castle Dore, the Parliamentarians right flank began to give way. At 18:00 hours when the Parliamentarians were pushed back to Castle Dore they made their last attempt to rally only to be pushed back and surrounded.",
"title": "Second battle - 31 August - 2 September 1644"
},
{
"paragraph_id": 17,
"text": "About that time the fighting ended with the Royalists satisfied in their accomplishments of the day. Exhausted and discouraged, the Parliamentarians hunkered down for the night. Later that evening under the darkness of night, Essex and his command staff stole away to the seashore where they used a fishing boat to flee to Plymouth, leaving Skippon in command.",
"title": "Second battle - 31 August - 2 September 1644"
},
{
"paragraph_id": 18,
"text": "Early on 1 September, Skippon met with his officers to inform them about Essex's escape and to discuss alternatives. It was decided that they would approach King Charles and seek terms. Concerned that Parliamentarian reinforcements might be on their way, the King quickly agreed on 2 September to generous terms. The battle was over. Six thousand Parliamentarians were taken as prisoners. Their weapons were taken away and they were marched to Southampton. They suffered the wrath of the Cornish people in route and as many as 3,000 died of exposure and disease along the way. Those that survived the journey were, however, eventually set free. Total casualties associated with the battle were extremely high especially when considering those who died on the march back to Southampton. To those numbers as many as 700 Parliamentarians are estimated to have been killed or wounded during the fighting in Cornwall along with an estimated 500 Royalists.",
"title": "Second battle - 31 August - 2 September 1644"
},
{
"paragraph_id": 19,
"text": "The Battle of Lostwithiel was a great victory for King Charles and the greatest loss that the Parliamentarians would suffer in the First English Civil War. For King Charles the victory secured the South-West for the remainder of the war and mitigated criticism for a while against the Royalist war effort.",
"title": "Aftermath"
},
{
"paragraph_id": 20,
"text": "For the Parliamentarians, the defeat resulted in recriminations with Middleton ultimately being blamed for his failure to break through with reinforcements. The Parliamentarian failure at Lostwithiel along with the failure to defeat King Charles at the Second Battle of Newbury ultimately led Parliament to adopt the Self-denying Ordinance and led to the implementation of the New Model Army.",
"title": "Aftermath"
}
] | The Battle of Lostwithiel took place over a 13-day period from 21 August to 2 September 1644, around the town of Lostwithiel and along the River Fowey valley in Cornwall during the First English Civil War. A Royalist army led by Charles I of England defeated a Parliamentarian force commanded by the Earl of Essex. Although Essex and most of the cavalry escaped, between 5,000 and 6,000 Parliamentarian infantry were forced to surrender. Since the Royalists were unable to feed so many, they were given a pass back to their own territory, arriving in Southampton a month later having lost nearly half their number to disease and desertion. Considered one of the worst defeats suffered by Parliament over the course of the Wars of the Three Kingdoms, it secured South West England for the Royalists until early 1646. | 2001-09-10T16:38:04Z | 2023-08-17T23:27:18Z | [
"Template:Portal",
"Template:Use British English",
"Template:Infobox military conflict",
"Template:Location map many",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite book",
"Template:Cite web",
"Template:Use dmy dates",
"Template:Campaignbox First English Civil War",
"Template:Sfnp",
"Template:Sfn"
] | https://en.wikipedia.org/wiki/Battle_of_Lostwithiel |
4,162 | Beeb | Beeb or BEEB may refer to: | [
{
"paragraph_id": 0,
"text": "Beeb or BEEB may refer to:",
"title": ""
}
] | Beeb or BEEB may refer to: BBC, the British Broadcasting Corporation, sometimes called the Beeb or Auntie Beeb
BEEB, a BBC children's magazine published in 1985
BBC Micro, a home computer built for the BBC by Acorn Computers Ltd., nicknamed The Beeb
Beeb.com or BBC online
Beeb Birtles, Dutch-Australian musician | 2022-01-08T19:17:32Z | [
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Beeb |
4,163 | Bertrand Russell | Bertrand Arthur William Russell, 3rd Earl Russell, OM, FRS (18 May 1872 – 2 February 1970) was a British mathematician, philosopher, logician, and public intellectual. He had a considerable influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science, and various areas of analytic philosophy, especially philosophy of mathematics, philosophy of language, epistemology, and metaphysics.
He was one of the early 20th century's most prominent logicians and a founder of analytic philosophy, along with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Ludwig Wittgenstein. Russell with Moore led the British "revolt against idealism". Together with his former teacher A. N. Whitehead, Russell wrote Principia Mathematica, a milestone in the development of classical logic and a major attempt to reduce the whole of mathematics to logic (see Logicism). Russell's article "On Denoting" has been considered a "paradigm of philosophy".
Russell was a pacifist who championed anti-imperialism and chaired the India League. He went to prison for his pacifism during World War I, but also saw the war against Adolf Hitler's Nazi Germany as a necessary "lesser of two evils". In the wake of World War II, he welcomed American global hegemony in preference to either Soviet hegemony or no (or ineffective) world leadership, even if it were to come at the cost of the United States using its nuclear weapons. He would later criticise Stalinist totalitarianism, condemn the United States' involvement in the Vietnam War, and become an outspoken proponent of nuclear disarmament.
In 1950, Russell was awarded the Nobel Prize in Literature "in recognition of his varied and significant writings in which he champions humanitarian ideals and freedom of thought". He was also the recipient of the De Morgan Medal (1932), Sylvester Medal (1934), Kalinga Prize (1957), and Jerusalem Prize (1963).
Bertrand Arthur William Russell was born at Ravenscroft, a country house in Trellech, Monmouthshire, on 18 May 1872, into an influential and liberal family of the British aristocracy. His parents, Viscount and Viscountess Amberley, were radical for their times. Lord Amberley consented to his wife's affair with their children's tutor, the biologist Douglas Spalding. Both were early advocates of birth control at a time when this was considered scandalous. Lord Amberley was a deist, and even asked the philosopher John Stuart Mill to act as Russell's secular godfather. Mill died the year after Russell's birth, but his writings had a great effect on Russell's life.
His paternal grandfather, Lord John Russell, later 1st Earl Russell (1792–1878), had twice been prime minister in the 1840s and 1860s. A member of Parliament since the early 1810s, he met with Napoleon Bonaparte in Elba. The Russells had been prominent in England for several centuries before this, coming to power and the peerage with the rise of the Tudor dynasty (see: Duke of Bedford). They established themselves as one of the leading Whig families and participated in every great political event from the dissolution of the monasteries in 1536–1540 to the Glorious Revolution in 1688–1689 and the Great Reform Act in 1832.
Lady Amberley was the daughter of Lord and Lady Stanley of Alderley. Russell often feared the ridicule of his maternal grandmother, one of the campaigners for the education of women.
Russell had two siblings: brother Frank (nearly seven years older), and sister Rachel (four years older). In June 1874, Russell's mother died of diphtheria, followed shortly by Rachel's death. In January 1876, his father died of bronchitis after a long period of depression. Frank and Bertrand were placed in the care of staunchly Victorian paternal grandparents, who lived at Pembroke Lodge in Richmond Park. His grandfather, former Prime Minister Earl Russell, died in 1878, and was remembered by Russell as a kindly old man in a wheelchair. His grandmother, the Countess Russell (née Lady Frances Elliot), was the dominant family figure for the rest of Russell's childhood and youth.
The Countess was from a Scottish Presbyterian family and successfully petitioned the Court of Chancery to set aside a provision in Amberley's will requiring the children to be raised as agnostics. Despite her religious conservatism, she held progressive views in other areas (accepting Darwinism and supporting Irish Home Rule), and her influence on Bertrand Russell's outlook on social justice and standing up for principle remained with him throughout his life. Her favourite Bible verse, "Thou shalt not follow a multitude to do evil", became his motto. The atmosphere at Pembroke Lodge was one of frequent prayer, emotional repression and formality; Frank reacted to this with open rebellion, but the young Bertrand learned to hide his feelings.
Russell's adolescence was lonely and he often contemplated suicide. He remarked in his autobiography that his keenest interests in nature, books and (later) mathematics "saved me from complete despondency"; only his wish to know more mathematics kept him from suicide. He was educated at home by a series of tutors. When Russell was eleven years old, his brother Frank introduced him to the work of Euclid, which he described in his autobiography as "one of the great events of my life, as dazzling as first love".
During these formative years he also discovered the works of Percy Bysshe Shelley. Russell wrote: "I spent all my spare time reading him, and learning him by heart, knowing no one to whom I could speak of what I thought or felt, I used to reflect how wonderful it would have been to know Shelley, and to wonder whether I should meet any live human being with whom I should feel so much sympathy." Russell claimed that beginning at age 15, he spent considerable time thinking about the validity of Christian religious dogma, which he found unconvincing. At this age, he came to the conclusion that there is no free will and, two years later, that there is no life after death. Finally, at the age of 18, after reading Mill's Autobiography, he abandoned the "First Cause" argument and became an atheist.
He travelled to the continent in 1890 with an American friend, Edward FitzGerald, and with FitzGerald's family he visited the Paris Exhibition of 1889 and climbed the Eiffel Tower soon after it was completed.
Russell won a scholarship to read for the Mathematical Tripos at Trinity College, Cambridge, and began his studies there in 1890, taking as coach Robert Rumsey Webb. He became acquainted with the younger George Edward Moore and came under the influence of Alfred North Whitehead, who recommended him to the Cambridge Apostles. He quickly distinguished himself in mathematics and philosophy, graduating as seventh Wrangler in the former in 1893 and becoming a Fellow in the latter in 1895.
Russell was 17 years old in the summer of 1889 when he met the family of Alys Pearsall Smith, an American Quaker five years older, who was a graduate of Bryn Mawr College near Philadelphia. He became a friend of the Pearsall Smith family. They knew him primarily as "Lord John's grandson" and enjoyed showing him off.
He soon fell in love with the puritanical, high-minded Alys, and contrary to his grandmother's wishes, married her on 13 December 1894. Their marriage began to fall apart in 1901 when it occurred to Russell, while cycling, that he no longer loved her. She asked him if he loved her and he replied that he did not. Russell also disliked Alys's mother, finding her controlling and cruel. A lengthy period of separation began in 1911 with Russell's affair with Lady Ottoline Morrell, and he and Alys finally divorced in 1921 to enable Russell to remarry.
During his years of separation from Alys, Russell had passionate (and often simultaneous) affairs with a number of women, including Morrell and the actress Lady Constance Malleson. Some have suggested that at this point he had an affair with Vivienne Haigh-Wood, the English governess and writer, and first wife of T. S. Eliot.
Russell began his published work in 1896 with German Social Democracy, a study in politics that was an early indication of a lifelong interest in political and social theory. In 1896 he also lectured on German social democracy at the London School of Economics. He was a member of the Coefficients dining club of social reformers set up in 1902 by the Fabian campaigners Sidney and Beatrice Webb.
He now started an intensive study of the foundations of mathematics at Trinity. In 1897, he wrote An Essay on the Foundations of Geometry (submitted at the Fellowship Examination of Trinity College), which discussed the Cayley–Klein metrics used for non-Euclidean geometry. He attended the First International Congress of Philosophy in Paris in 1900, where he met Giuseppe Peano and Alessandro Padoa. The Italians had responded to Georg Cantor by making a science of set theory; they gave Russell their literature, including the Formulario mathematico. Russell was impressed by the precision of Peano's arguments at the Congress, read the literature upon returning to England, and came upon the contradiction now known as Russell's paradox. In 1903 he published The Principles of Mathematics, a work on the foundations of mathematics. It advanced the thesis of logicism, that mathematics and logic are one and the same.
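An illustrative aside, not part of the original article text: the contradiction Russell found can be stated in one line of naive set theory. Let R be the set of all sets that are not members of themselves; then R is a member of itself exactly when it is not, so no consistent theory can admit such a set, and avoiding this construction is what motivated the theory of types on which Principia Mathematica rests. In LaTeX notation:

\[
  R = \{\, x \mid x \notin x \,\}
  \qquad\Longrightarrow\qquad
  R \in R \iff R \notin R
\]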
At the age of 29, in February 1901, Russell underwent what he called a "sort of mystic illumination", after witnessing Whitehead's wife's acute suffering in an angina attack. "I found myself filled with semi-mystical feelings about beauty... and with a desire almost as profound as that of the Buddha to find some philosophy which should make human life endurable", Russell would later recall. "At the end of those five minutes, I had become a completely different person."
In 1905, he wrote the essay "On Denoting", which was published in the philosophical journal Mind. Russell was elected a Fellow of the Royal Society (FRS) in 1908. The three-volume Principia Mathematica, written with Whitehead, was published between 1910 and 1913. This, along with the earlier The Principles of Mathematics, soon made Russell world-famous in his field. Russell's first political activity was as the Independent Liberal candidate in the 1907 by-election for the Wimbledon constituency, where he was not elected.
In 1910, he became a University of Cambridge lecturer at Trinity College, where he had studied. He was considered for a Fellowship, which would give him a vote in the college government and protect him from being fired for his opinions, but was passed over because he was "anti-clerical", essentially because he was agnostic. He was approached by the Austrian engineering student Ludwig Wittgenstein, who became his PhD student. Russell viewed Wittgenstein as a genius and a successor who would continue his work on logic. He spent hours dealing with Wittgenstein's various phobias and his frequent bouts of despair. This was often a drain on Russell's energy, but Russell continued to be fascinated by him and encouraged his academic development, including the publication of Wittgenstein's Tractatus Logico-Philosophicus in 1922. Russell delivered his lectures on logical atomism, his version of these ideas, in 1918, before the end of World War I. Wittgenstein was, at that time, serving in the Austrian Army and subsequently spent nine months in an Italian prisoner of war camp at the end of the conflict.
During World War I, Russell was one of the few people to engage in active pacifist activities. In 1916, because of his lack of a Fellowship, he was dismissed from Trinity College following his conviction under the Defence of the Realm Act 1914. He later described this, in Free Thought and Official Propaganda, as an illegitimate means the state used to violate freedom of expression. Russell championed the case of Eric Chappelow, a poet jailed and abused as a conscientious objector. Russell played a significant part in the Leeds Convention in June 1917, a historic event which saw well over a thousand "anti-war socialists" gather; many being delegates from the Independent Labour Party and the Socialist Party, united in their pacifist beliefs and advocating a peace settlement. The international press reported that Russell appeared with a number of Labour Members of Parliament (MPs), including Ramsay MacDonald and Philip Snowden, as well as former Liberal MP and anti-conscription campaigner, Professor Arnold Lupton. After the event, Russell told Lady Ottoline Morrell that, "to my surprise, when I got up to speak, I was given the greatest ovation that was possible to give anybody".
His conviction in 1916 resulted in Russell being fined £100 (equivalent to £6,000 in 2021), which he refused to pay in hope that he would be sent to prison, but his books were sold at auction to raise the money. The books were bought by friends; he later treasured his copy of the King James Bible that was stamped "Confiscated by Cambridge Police".
A later conviction for publicly lecturing against inviting the United States to enter the war on the United Kingdom's side resulted in six months' imprisonment in Brixton Prison (see Bertrand Russell's political views) in 1918; he was prosecuted under the Defence of the Realm Act. He later said of his imprisonment:
I found prison in many ways quite agreeable. I had no engagements, no difficult decisions to make, no fear of callers, no interruptions to my work. I read enormously; I wrote a book, "Introduction to Mathematical Philosophy"... and began the work for "The Analysis of Mind". I was rather interested in my fellow-prisoners, who seemed to me in no way morally inferior to the rest of the population, though they were on the whole slightly below the usual level of intelligence as was shown by their having been caught.
While he was reading Strachey's Eminent Victorians chapter about Gordon, he laughed out loud in his cell, prompting the warder to intervene and remind him that "prison was a place of punishment".
Russell was reinstated to Trinity in 1919, resigned in 1920, was Tarner Lecturer in 1926, and became a Fellow again in 1944, remaining one until 1949.
In 1924, Russell again gained press attention when attending a "banquet" in the House of Commons with well-known campaigners, including Arnold Lupton, who had been an MP and had also endured imprisonment for "passive resistance to military or naval service".
In 1941, G. H. Hardy wrote a 61-page pamphlet titled Bertrand Russell and Trinity (published later as a book by Cambridge University Press with a foreword by C. D. Broad), in which he gave an authoritative account of Russell's 1916 dismissal from Trinity College, explained that a reconciliation between the college and Russell had later taken place, and gave details about Russell's personal life. Hardy writes that Russell's dismissal had created a scandal, since the vast majority of the Fellows of the College opposed the decision. The ensuing pressure from the Fellows induced the Council to reinstate Russell. In January 1920, it was announced that Russell had accepted the reinstatement offer from Trinity and would begin lecturing from October. In July 1920, Russell applied for a one-year leave of absence; this was approved. He spent the year giving lectures in China and Japan. In January 1921, it was announced by Trinity that Russell had resigned and his resignation had been accepted. This resignation, Hardy explains, was completely voluntary and was not the result of another altercation.
The reason for the resignation, according to Hardy, was that Russell was going through a tumultuous time in his personal life with a divorce and subsequent remarriage. Russell contemplated asking Trinity for another one-year leave of absence but decided against it, since this would have been an "unusual application" and the situation had the potential to snowball into another controversy. Although Russell did the right thing, in Hardy's opinion, the reputation of the College suffered with Russell's resignation, since the 'world of learning' knew about Russell's altercation with Trinity but not that the rift had healed. In 1925, Russell was asked by the Council of Trinity College to give the Tarner Lectures on the Philosophy of the Sciences; these would later be the basis for one of Russell's best-received books according to Hardy: The Analysis of Matter, published in 1927. In the preface to the Trinity pamphlet, Hardy wrote:
I wish to make it plain that Russell himself is not responsible, directly or indirectly, for the writing of the pamphlet.... I wrote it without his knowledge and, when I sent him the typescript and asked for his permission to print it, I suggested that, unless it contained misstatement of fact, he should make no comment on it. He agreed to this... no word has been changed as the result of any suggestion from him.
In August 1920, Russell travelled to Soviet Russia as part of an official delegation sent by the British government to investigate the effects of the Russian Revolution. He wrote a four-part series of articles, titled "Soviet Russia—1920", for the magazine The Nation. He met Vladimir Lenin and had an hour-long conversation with him. In his autobiography, he mentions that he found Lenin disappointing, sensing an "impish cruelty" in him and comparing him to "an opinionated professor". He cruised down the Volga on a steamship. His experiences destroyed his previous tentative support for the revolution. He subsequently wrote a book, The Practice and Theory of Bolshevism, about his experiences on this trip, taken with a group of 24 others from the UK, all of whom came home thinking well of the Soviet regime, despite Russell's attempts to change their minds. For example, he told them that he had heard shots fired in the middle of the night and was sure that these were clandestine executions, but the others maintained that it was only cars backfiring.
Russell's lover Dora Black, a British author, feminist and socialist campaigner, visited Soviet Russia independently at the same time; in contrast to his reaction, she was enthusiastic about the Bolshevik revolution.
The following year, Russell, accompanied by Dora, visited Peking (as Beijing was then known outside of China) to lecture on philosophy for a year. He went with optimism and hope, seeing China as then being on a new path. Other scholars present in China at the time included John Dewey and Rabindranath Tagore, the Indian Nobel-laureate poet. Before leaving China, Russell became gravely ill with pneumonia, and incorrect reports of his death were published in the Japanese press. When the couple visited Japan on their return journey, Dora took on the role of spurning the local press by handing out notices reading "Mr. Bertrand Russell, having died according to the Japanese press, is unable to give interviews to Japanese journalists". The Japanese press apparently found this harsh and reacted resentfully.
Dora was six months pregnant when the couple returned to England on 26 August 1921. Russell arranged a hasty divorce from Alys, marrying Dora six days after the divorce was finalised, on 27 September 1921. Russell's children with Dora were John Conrad Russell, 4th Earl Russell, born on 16 November 1921, and Katharine Jane Russell (later Lady Katharine Tait), born on 29 December 1923. Russell supported his family during this time by writing popular books explaining matters of physics, ethics, and education to the layman.
From 1922 to 1927 the Russells divided their time between London and Cornwall, spending summers in Porthcurno. In the 1922 and 1923 general elections Russell stood as a Labour Party candidate in the Chelsea constituency, but only on the basis that he knew he was extremely unlikely to be elected in such a safe Conservative seat, and he was unsuccessful on both occasions.
After the birth of his two children, he became interested in education, especially early childhood education. He was not satisfied with the old traditional education and thought that progressive education also had some flaws; as a result, together with Dora, Russell founded the experimental Beacon Hill School in 1927. The school was run from a succession of different locations, including its original premises at the Russells' residence, Telegraph House, near Harting, West Sussex. During this time, he published On Education, Especially in Early Childhood. On 8 July 1930, Dora gave birth to her third child Harriet Ruth. After he left the school in 1932, Dora continued it until 1943.
In 1927 Russell met Barry Fox (later Barry Stevens), who became a well-known Gestalt therapist and writer in later years. They developed an intense relationship, and in Fox's words: "... for three years we were very close." Fox sent her daughter Judith to Beacon Hill School. From 1927 to 1932 Russell wrote 34 letters to Fox. Upon the death of his elder brother Frank, in 1931, Russell became the 3rd Earl Russell.
Russell's marriage to Dora grew increasingly tenuous, and it reached a breaking point over her having two children with an American journalist, Griffin Barry. They separated in 1932 and finally divorced. On 18 January 1936, Russell married his third wife, an Oxford undergraduate named Patricia ("Peter") Spence, who had been his children's governess since 1930. Russell and Peter had one son, Conrad Sebastian Robert Russell, 5th Earl Russell, who became a prominent historian and one of the leading figures in the Liberal Democrat party.
Russell returned in 1937 to the London School of Economics to lecture on the science of power. During the 1930s, Russell became a friend and collaborator of V. K. Krishna Menon, then President of the India League, the foremost lobby in the United Kingdom for Indian independence. Russell chaired the India League from 1932 to 1939.
Russell's political views changed over time, mostly about war. He opposed rearmament against Nazi Germany. In 1937, he wrote in a personal letter: "If the Germans succeed in sending an invading army to England we should do best to treat them as visitors, give them quarters and invite the commander and chief to dine with the prime minister." In 1940, he abandoned this appeasement position, under which avoiding a full-scale world war had seemed more important than defeating Hitler. He concluded that Adolf Hitler taking over all of Europe would be a permanent threat to democracy. In 1943, he adopted a stance toward large-scale warfare called "relative political pacifism": "War was always a great evil, but in some particularly extreme circumstances, it may be the lesser of two evils."
Before World War II, Russell taught at the University of Chicago, later moving on to Los Angeles to lecture at the UCLA Department of Philosophy. He was appointed professor at the City College of New York (CCNY) in 1940, but after a public outcry the appointment was annulled by a court judgment that pronounced him "morally unfit" to teach at the college because of his opinions, especially those relating to sexual morality, detailed in Marriage and Morals (1929). The matter was, however, taken to the New York Supreme Court by Jean Kay, who was afraid that her daughter would be harmed by the appointment, though her daughter was not a student at CCNY. Many intellectuals, led by John Dewey, protested at his treatment. Albert Einstein's oft-quoted aphorism that "great spirits have always encountered violent opposition from mediocre minds" originated in his open letter, dated 19 March 1940, to Morris Raphael Cohen, a professor emeritus at CCNY, supporting Russell's appointment. Dewey and Horace M. Kallen edited a collection of articles on the CCNY affair in The Bertrand Russell Case. Russell soon joined the Barnes Foundation, lecturing to a varied audience on the history of philosophy; these lectures formed the basis of A History of Western Philosophy. His relationship with the eccentric Albert C. Barnes soon soured, and he returned to the UK in 1944 to rejoin the faculty of Trinity College.
Russell participated in many broadcasts over the BBC, particularly The Brains Trust and for the Third Programme, on various topical and philosophical subjects. By this time Russell was world-famous outside academic circles, frequently the subject or author of magazine and newspaper articles, and was called upon to offer opinions on a wide variety of subjects, even mundane ones. En route to one of his lectures in Trondheim, Russell was one of 24 survivors (among a total of 43 passengers) of an aeroplane crash in Hommelvik in October 1948. He said he owed his life to smoking since the people who drowned were in the non-smoking part of the plane. A History of Western Philosophy (1945) became a best-seller and provided Russell with a steady income for the remainder of his life.
In 1942, Russell argued in favour of a moderate socialism, capable of overcoming its metaphysical principles. In an inquiry on dialectical materialism, launched by the Austrian artist and philosopher Wolfgang Paalen in his journal DYN, Russell said: "I think the metaphysics of both Hegel and Marx plain nonsense—Marx's claim to be 'science' is no more justified than Mary Baker Eddy's. This does not mean that I am opposed to socialism."
In 1943, Russell expressed support for Zionism: "I have come gradually to see that, in a dangerous and largely hostile world, it is essential to Jews to have some country which is theirs, some region where they are not suspected aliens, some state which embodies what is distinctive in their culture".
In a speech in 1948, Russell said that if the USSR's aggression continued, it would be morally worse to go to war after the USSR possessed an atomic bomb than before it possessed one, because if the USSR had no bomb the West's victory would come more swiftly and with fewer casualties than if there were atomic bombs on both sides. At that time, only the United States possessed an atomic bomb, and the USSR was pursuing an extremely aggressive policy towards the countries in Eastern Europe which were being absorbed into the Soviet Union's sphere of influence. Many understood Russell's comments to mean that Russell approved of a first strike in a war with the USSR, including Nigel Lawson, who was present when Russell spoke of such matters. Others, including Griffin, who obtained a transcript of the speech, have argued that he was merely explaining the usefulness of America's atomic arsenal in deterring the USSR from continuing its domination of Eastern Europe.
Just after the atomic bombs exploded over Hiroshima and Nagasaki, Russell wrote letters, and published articles in newspapers from 1945 to 1948, stating clearly that it was morally justified and better to go to war against the USSR using atomic bombs while the United States possessed them and before the USSR did. In September 1949, one week after the USSR tested its first A-bomb, but before this became known, Russell wrote that the USSR would be unable to develop nuclear weapons because following Stalin's purges only science based on Marxist principles would be practised in the Soviet Union. After it became known that the USSR had carried out its nuclear bomb tests, Russell declared his position advocating the total abolition of atomic weapons.
In 1948, Russell was invited by the BBC to deliver the inaugural Reith Lectures—what was to become an annual series of lectures, still broadcast by the BBC. His series of six broadcasts, titled Authority and the Individual, explored themes such as the role of individual initiative in the development of a community and the role of state control in a progressive society. Russell continued to write about philosophy. He wrote a foreword to Words and Things by Ernest Gellner, which was highly critical of the later thought of Ludwig Wittgenstein and of ordinary language philosophy. Gilbert Ryle refused to have the book reviewed in the philosophical journal Mind, which caused Russell to respond via The Times. The result was a month-long correspondence in The Times between the supporters and detractors of ordinary language philosophy, which was only ended when the paper published an editorial critical of both sides but agreeing with the opponents of ordinary language philosophy.
In the King's Birthday Honours of 9 June 1949, Russell was awarded the Order of Merit, and the following year he was awarded the Nobel Prize in Literature. When he was given the Order of Merit, George VI was affable but slightly embarrassed at decorating a former jailbird, saying, "You have sometimes behaved in a manner that would not do if generally adopted". Russell merely smiled, but afterwards claimed that the reply "That's right, just like your brother" immediately came to mind.
In 1950, Russell attended the inaugural conference for the Congress for Cultural Freedom, a CIA-funded anti-communist organisation committed to the deployment of culture as a weapon during the Cold War. Russell was one of the best-known patrons of the Congress, until he resigned in 1956.
In 1952, Russell was divorced by Spence, with whom he had been very unhappy. Conrad, Russell's son by Spence, did not see his father between the time of the divorce and 1968 (at which time his decision to meet his father caused a permanent breach with his mother). Russell married his fourth wife, Edith Finch, soon after the divorce, on 15 December 1952. They had known each other since 1925, and Edith had taught English at Bryn Mawr College near Philadelphia, sharing a house for 20 years with Russell's old friend Lucy Donnelly. Edith remained with him until his death, and, by all accounts, their marriage was a happy, close, and loving one. Russell's eldest son John suffered from serious mental illness, which was the source of ongoing disputes between Russell and his former wife Dora.
In 1962 Russell played a public role in the Cuban Missile Crisis: in an exchange of telegrams with Soviet leader Nikita Khrushchev, Khrushchev assured him that the Soviet government would not be reckless. Russell sent this telegram to President Kennedy:
YOUR ACTION DESPERATE. THREAT TO HUMAN SURVIVAL. NO CONCEIVABLE JUSTIFICATION. CIVILIZED MAN CONDEMNS IT. WE WILL NOT HAVE MASS MURDER. ULTIMATUM MEANS WAR... END THIS MADNESS.
According to historian Peter Knight, after JFK's assassination, Russell, "prompted by the emerging work of the lawyer Mark Lane in the US ... rallied support from other noteworthy and left-leaning compatriots to form a Who Killed Kennedy Committee in June 1964, members of which included Michael Foot MP, Caroline Benn, the publisher Victor Gollancz, the writers John Arden and J. B. Priestley, and the Oxford history professor Hugh Trevor-Roper." Russell published a highly critical article weeks before the Warren Commission Report was published, setting forth 16 Questions on the Assassination and equating the Oswald case with the Dreyfus affair of late 19th-century France, in which the state convicted an innocent man. Russell also criticised the American press for failing to heed any voices critical of the official version.
Bertrand Russell was opposed to war from a young age; his opposition to World War I was used as grounds for his dismissal from Trinity College at Cambridge. The incident fused two of his most controversial causes: he had been denied Fellow status, which would have protected him from being fired, because he was unwilling either to pretend to be a devout Christian or at least to avoid admitting he was agnostic.
He later described the resolution of these issues as essential to freedom of thought and expression, citing the incident in Free Thought and Official Propaganda, where he explained that the expression of any idea, even the most obviously "bad", must be protected not only from direct State intervention, but also economic leveraging and other means of being silenced:
The opinions which are still persecuted strike the majority as so monstrous and immoral that the general principle of toleration cannot be held to apply to them. But this is exactly the same view as that which made possible the tortures of the Inquisition.
Russell spent the 1950s and 1960s engaged in political causes primarily related to nuclear disarmament and opposing the Vietnam War. The 1955 Russell–Einstein Manifesto was a document calling for nuclear disarmament and was signed by eleven of the most prominent nuclear physicists and intellectuals of the time. In October 1960 "The Committee of 100" was formed with a declaration by Russell and Michael Scott, entitled "Act or Perish", which called for a "movement of nonviolent resistance to nuclear war and weapons of mass destruction". In September 1961, at the age of 89, Russell was jailed for seven days in Brixton Prison for a "breach of the peace" after taking part in an anti-nuclear demonstration in London. The magistrate offered to exempt him from jail if he pledged himself to "good behaviour", to which Russell replied: "No, I won't."
In 1966–1967, Russell worked with Jean-Paul Sartre and many other intellectual figures to form the Russell Vietnam War Crimes Tribunal to investigate the conduct of the United States in Vietnam. He wrote a great many letters to world leaders during this period.
Early in his life Russell supported eugenicist policies. He proposed in 1894 that the state issue certificates of health to prospective parents and withhold public benefits from those considered unfit. In 1929 he wrote that people deemed "mentally defective" and "feebleminded" should be sexually sterilised because they "are apt to have enormous numbers of illegitimate children, all, as a rule, wholly useless to the community." Russell was also an advocate of population control:
The nations which at present increase rapidly should be encouraged to adopt the methods by which, in the West, the increase of population has been checked. Educational propaganda, with government help, could achieve this result in a generation. There are, however, two powerful forces opposed to such a policy: one is religion, the other is nationalism. I think it is the duty of all to proclaim that opposition to the spread of birth control must condemn the world to an appalling depth of misery and degradation within another fifty years or so. I do not pretend that birth control is the only way in which population can be kept from increasing. There are others, which, one must suppose, opponents of birth control would prefer. War, as I remarked a moment ago, has hitherto been disappointing in this respect, but perhaps bacteriological war may prove more effective. If a Black Death could be spread throughout the whole world once in every generation, survivors could procreate freely without making the world too full.
On 20 November 1948, in a public speech at Westminster School, addressing a gathering arranged by the New Commonwealth, Russell shocked some observers by suggesting that a preemptive nuclear strike on the Soviet Union was justified. Russell argued that war between the United States and the Soviet Union seemed inevitable, so it would be a humanitarian gesture to get it over with quickly and have the United States in the dominant position. At the time, Russell argued, humanity could survive such a war, whereas a full nuclear war after both sides had manufactured large stockpiles of more destructive weapons was likely to result in the extinction of the human race. Russell later relented from this stance, instead arguing for mutual disarmament by the nuclear powers.
In 1956, immediately before and during the Suez Crisis, Russell expressed his opposition to European imperialism in the Middle East. He viewed the crisis as another reminder of the pressing need for a more effective mechanism for international governance, and for restrictions on national sovereignty in places such as the Suez Canal area "where general interest is involved". At the same time the Suez Crisis was taking place, the world was also captivated by the Hungarian Revolution and the subsequent crushing of the revolt by intervening Soviet forces. Russell attracted criticism for speaking out fervently against the Suez war while ignoring Soviet repression in Hungary, to which he responded that he did not criticise the Soviets "because there was no need. Most of the so-called Western World was fulminating". Although he later feigned a lack of concern, at the time he was disgusted by the brutal Soviet response, and on 16 November 1956, he expressed approval for a declaration of support for Hungarian scholars which Michael Polanyi had cabled to the Soviet embassy in London twelve days previously, shortly after Soviet troops had entered Budapest.
In November 1957 Russell wrote an article addressing US President Dwight D. Eisenhower and Soviet Premier Nikita Khrushchev, urging a summit to consider "the conditions of co-existence". Khrushchev responded that peace could be served by such a meeting. In January 1958 Russell elaborated his views in The Observer, proposing a cessation of all nuclear weapons production, with the UK taking the first step by unilaterally suspending its own nuclear-weapons program if necessary, and with Germany "freed from all alien armed forces and pledged to neutrality in any conflict between East and West". US Secretary of State John Foster Dulles replied for Eisenhower. The exchange of letters was published as The Vital Letters of Russell, Khrushchev, and Dulles.
Russell was asked by The New Republic, a liberal American magazine, to elaborate his views on world peace. He urged that all nuclear weapons testing and flights by planes armed with nuclear weapons be halted immediately, and negotiations be opened for the destruction of all hydrogen bombs, with the number of conventional nuclear devices limited to ensure a balance of power. He proposed that Germany be reunified and accept the Oder-Neisse line as its border, and that a neutral zone be established in Central Europe, consisting at the minimum of Germany, Poland, Hungary, and Czechoslovakia, with each of these countries being free of foreign troops and influence, and prohibited from forming alliances with countries outside the zone. In the Middle East, Russell suggested that the West avoid opposing Arab nationalism, and proposed the creation of a United Nations peacekeeping force to guard Israel's frontiers to ensure that Israel was prevented from committing aggression and protected from it. He also suggested Western recognition of the People's Republic of China, and that it be admitted to the UN with a permanent seat on the UN Security Council.
He was in contact with Lionel Rogosin while the latter was filming his anti-war film Good Times, Wonderful Times in the 1960s. He became a hero to many of the youthful members of the New Left. In early 1963, Russell became increasingly vocal in his disapproval of the Vietnam War, and felt that the US government's policies there were near-genocidal. In 1963 he became the inaugural recipient of the Jerusalem Prize, an award for writers concerned with the freedom of the individual in society. In 1964 he was one of eleven world figures who issued an appeal to Israel and the Arab countries to accept an arms embargo and international supervision of nuclear plants and rocket weaponry. In October 1965 he tore up his Labour Party card because he suspected Harold Wilson's Labour government was going to send troops to support the United States in Vietnam.
In June 1955, Russell had leased Plas Penrhyn in Penrhyndeudraeth, Merionethshire, Wales and on 5 July of the following year it became his and Edith's principal residence.
Russell published his three-volume autobiography in 1967, 1968, and 1969. He made a cameo appearance playing himself in the anti-war Hindi film Aman, by Mohan Kumar, which was released in India in 1967. This was Russell's only appearance in a feature film.
On 23 November 1969, he wrote to The Times newspaper saying that the preparation for show trials in Czechoslovakia was "highly alarming". The same month, he appealed to Secretary General U Thant of the United Nations to support an international war crimes commission to investigate alleged torture and genocide by the United States in South Vietnam during the Vietnam War. The following month, he protested to Alexei Kosygin over the expulsion of Aleksandr Solzhenitsyn from the Soviet Union of Writers.
On 31 January 1970, Russell issued a statement condemning "Israel's aggression in the Middle East", and in particular, Israeli bombing raids being carried out deep in Egyptian territory as part of the War of Attrition, which he compared to German bombing raids in the Battle of Britain and the US bombing of Vietnam. He called for an Israeli withdrawal to the pre-Six-Day War borders. This was Russell's final political statement or act. It was read out at the International Conference of Parliamentarians in Cairo on 3 February 1970, the day after his death.
Russell died of influenza, just after 8 pm on 2 February 1970 at his home in Penrhyndeudraeth, aged 97. His body was cremated in Colwyn Bay on 5 February 1970 with five people present. In accordance with his will, there was no religious ceremony but one minute's silence; his ashes were later scattered over the Welsh mountains. Although he was born in Monmouthshire, and died in Penrhyndeudraeth in Wales, Russell identified as English. Later in 1970, on 23 October, his will was published showing he had left an estate valued at £69,423 (equivalent to £1.1 million in 2021). In 1980, a memorial to Russell was commissioned by a committee including the philosopher A. J. Ayer. It consists of a bust of Russell in Red Lion Square in London sculpted by Marcelle Quinton.
Lady Katharine Jane Tait, Russell's daughter, founded the Bertrand Russell Society in 1974 to preserve and understand his work. It publishes the Bertrand Russell Society Bulletin, holds meetings, and awards prizes for scholarship, including the Bertrand Russell Society Award; all members receive Russell: The Journal of Bertrand Russell Studies. She also authored several essays about her father, as well as a book, My Father, Bertrand Russell, which was published in 1975.
For the sesquicentennial of his birth, in May 2022, McMaster University's Bertrand Russell Archive, the university's largest and most heavily used research collection, organised both a physical and virtual exhibition on Russell's anti-nuclear stance in the post-war era, Scientists for Peace: the Russell-Einstein Manifesto and the Pugwash Conference, which included the earliest version of the Russell–Einstein Manifesto. The Bertrand Russell Peace Foundation held a commemoration at Conway Hall in Red Lion Square, London, on 18 May, the anniversary of his birth. For its part, on the same day, La Estrella de Panamá published a biographical sketch by Francisco Díaz Montilla, who commented that "[if he] had to characterize Russell's work in one sentence [he] would say: criticism and rejection of dogmatism."
Bangladesh's first leader, Mujibur Rahman, named his youngest son Sheikh Russel in honour of Bertrand Russell.
Russell first married Alys Whitall Smith (died 1951) in 1894. The marriage was dissolved in 1921 with no issue. His second marriage was to Dora Winifred Black MBE (died 1986), daughter of Sir Frederick Black, in 1921. This was dissolved in 1935, having produced two children: John Conrad Russell, later 4th Earl Russell, and Katharine Jane Russell, later Lady Katharine Tait.
Russell's third marriage was to Patricia Helen Spence (died 2004) in 1936, the marriage producing one child: Conrad Sebastian Robert Russell, later 5th Earl Russell.
Russell's third marriage ended in divorce in 1952. He married Edith Finch in the same year. Finch survived Russell, dying in 1978.
Russell held throughout his life the following styles and honours:
Russell is generally credited with being one of the founders of analytic philosophy. He was deeply impressed by Gottfried Leibniz (1646–1716), and wrote on every major area of philosophy except aesthetics. He was particularly prolific in the fields of metaphysics, logic and the philosophy of mathematics, the philosophy of language, ethics and epistemology. When Brand Blanshard asked Russell why he did not write on aesthetics, Russell replied that he did not know anything about it, though he hastened to add "but that is not a very good excuse, for my friends tell me it has not deterred me from writing on other subjects".
On ethics, Russell wrote that he was a utilitarian in his youth, yet he later distanced himself from this view.
For the advancement of science and protection of liberty of expression, Russell advocated The Will to Doubt, the recognition that all human knowledge is at most a best guess, that one should always remember:
None of our beliefs are quite true; all have at least a penumbra of vagueness and error. The methods of increasing the degree of truth in our beliefs are well known; they consist in hearing all sides, trying to ascertain all the relevant facts, controlling our own bias by discussion with people who have the opposite bias, and cultivating a readiness to discard any hypothesis which has proved inadequate. These methods are practised in science, and have built up the body of scientific knowledge. Every man of science whose outlook is truly scientific is ready to admit that what passes for scientific knowledge at the moment is sure to require correction with the progress of discovery; nevertheless, it is near enough to the truth to serve for most practical purposes, though not for all. In science, where alone something approximating to genuine knowledge is to be found, men's attitude is tentative and full of doubt.
Russell described himself in 1947 as an agnostic or an atheist: he found it difficult to determine which term to adopt, saying:
Therefore, in regard to the Olympic gods, speaking to a purely philosophical audience, I would say that I am an Agnostic. But speaking popularly, I think that all of us would say in regard to those gods that we were Atheists. In regard to the Christian God, I should, I think, take exactly the same line.
For most of his adult life, Russell maintained religion to be little more than superstition and, despite any positive effects, largely harmful to people. He believed that religion and the religious outlook serve to impede knowledge and foster fear and dependency, and to be responsible for much of our world's wars, oppression, and misery. He was a member of the Advisory Council of the British Humanist Association and President of Cardiff Humanists until his death.
Political and social activism occupied much of Russell's time for most of his life. Russell remained politically active almost to the end of his life, writing to and exhorting world leaders and lending his name to various causes. He was a prominent campaigner against Western intervention in the Vietnam War in the 1960s, writing essays and books, attending demonstrations, and even organising the Russell Tribunal in 1966 alongside other prominent philosophers such as Jean-Paul Sartre and Simone de Beauvoir, which fed into his 1967 book War Crimes in Vietnam.
Russell argued for a "scientific society", where war would be abolished, population growth would be limited, and prosperity would be shared. He suggested the establishment of a "single supreme world government" able to enforce peace, claiming that "the only thing that will redeem mankind is co-operation". He was one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, for the first time in human history, a World Constituent Assembly convened to draft and adopt the Constitution for the Federation of Earth. Russell also expressed support for guild socialism, and commented positively on several socialist thinkers and activists. According to Jean Bricmont and Normand Baillargeon, "Russell was both a liberal and a socialist, a combination that was perfectly comprehensible in his time, but which has become almost unthinkable today. He was a liberal in that he opposed concentrations of power in all its manifestations, military, governmental, or religious, as well as the superstitious or nationalist ideas that usually serve as its justification. But he was also a socialist, even as an extension of his liberalism, because he was equally opposed to the concentrations of power stemming from the private ownership of the major means of production, which therefore needed to be put under social control (which does not mean state control)."
Russell was an active supporter of the Homosexual Law Reform Society, being one of the signatories of A. E. Dyson's 1958 letter to The Times calling for a change in the law regarding male homosexual practices, which were partly legalised in 1967, when Russell was still alive.
He expressed sympathy and support for the Palestinian people and was strongly critical of Israel's actions. He wrote in 1960 that, "I think it was a mistake to establish a Jewish State in Palestine, but it would be a still greater mistake to try to get rid of it now that it exists." In his final written document, dated 31 January 1970 and read aloud in Cairo the day after his death, he condemned Israel as an aggressive imperialist power, which "wishes to consolidate with the least difficulty what it has already taken by violence. Every new conquest becomes the new basis of the proposed negotiation from strength, which ignores the injustice of the previous aggression." With regard to the Palestinian people and refugees, he wrote that, "No people anywhere in the world would accept being expelled en masse from their own country; how can anyone require the people of Palestine to accept a punishment which nobody else would tolerate? A permanent just settlement of the refugees in their homeland is an essential ingredient of any genuine settlement in the Middle East."
Russell advocated – and was one of the first people in the UK to suggest – a universal basic income. In his 1918 book Roads to Freedom, Russell wrote that "Anarchism has the advantage as regards liberty, Socialism as regards the inducement to work. Can we not find a method of combining these two advantages? It seems to me that we can. [...] Stated in more familiar terms, the plan we are advocating amounts essentially to this: that a certain small income, sufficient for necessaries, should be secured to all, whether they work or not, and that a larger income – as much larger as might be warranted by the total amount of commodities produced – should be given to those who are willing to engage in some work which the community recognizes as useful...When education is finished, no one should be compelled to work, and those who choose not to work should receive a bare livelihood and be left completely free."
In "Reflections on My Eightieth Birthday" ("Postscript" in his Autobiography), Russell wrote: "I have lived in the pursuit of a vision, both personal and social. Personal: to care for what is noble, for what is beautiful, for what is gentle; to allow moments of insight to give wisdom at more mundane times. Social: to see in imagination the society that is to be created, where individuals grow freely, and where hate and greed and envy die because there is nothing to nourish them. These things I believe, and the world, for all its horrors, has left me unshaken".
Russell was a champion of freedom of opinion and an opponent of both censorship and indoctrination. In 1928, he wrote: "The fundamental argument for freedom of opinion is the doubtfulness of all our belief... when the State intervenes to ensure the indoctrination of some doctrine, it does so because there is no conclusive evidence in favour of that doctrine ... It is clear that thought is not free if the profession of certain opinions makes it impossible to earn a living". In 1957, he wrote: "'Free thought' means thinking freely ... to be worthy of the name freethinker he must be free of two things: the force of tradition and the tyranny of his own passions."
Russell also set out how education might be controlled under a scientific dictatorship, as in this excerpt from chapter II, "General Effects of Scientific Technique", of The Impact of Science on Society:
This subject will make great strides when it is taken up by scientists under a scientific dictatorship. Anaxagoras maintained that snow is black, but no one believed him. The social psychologists of the future will have a number of classes of school children on whom they will try different methods of producing an unshakable conviction that snow is black. Various results will soon be arrived at. First, that the influence of home is obstructive. Second, that not much can be done unless indoctrination begins before the age of ten. Third, that verses set to music and repeatedly intoned are very effective. Fourth, that the opinion that snow is white must be held to show a morbid taste for eccentricity. But I anticipate. It is for future scientists to make these maxims precise and discover exactly how much it costs per head to make children believe that snow is black, and how much less it would cost to make them believe it is dark grey. Although this science will be diligently studied, it will be rigidly confined to the governing class. The populace will not be allowed to know how its convictions were generated. When the technique has been perfected, every government that has been in charge of education for a generation will be able to control its subjects securely without the need of armies or policemen. As yet there is only one country which has succeeded in creating this politician's paradise. The social effects of scientific technique have already been many and important, and are likely to be even more noteworthy in the future. Some of these effects depend upon the political and economic character of the country concerned; others are inevitable, whatever this character may be.
He developed the scenario in more detail in chapter III, "Scientific Technique in an Oligarchy", of the same book, for example:
In future such failures are not likely to occur where there is dictatorship. Diet, injections, and injunctions will combine, from a very early age, to produce the sort of character and the sort of beliefs that the authorities consider desirable, and any serious criticism of the powers that be will become psychologically impossible. Even if all are miserable, all will believe themselves happy, because the government will tell them that they are so.
Below is a selection of Russell's works in English, sorted by year of first publication:
Russell was the author of more than sixty books and over two thousand articles. Additionally, he wrote many pamphlets, introductions, and letters to the editor. One pamphlet, titled 'I Appeal unto Caesar': The Case of the Conscientious Objectors and ghostwritten for Margaret Hobhouse, the mother of imprisoned peace activist Stephen Hobhouse, allegedly helped secure the release from prison of hundreds of conscientious objectors.
His works can be found in anthologies and collections, including The Collected Papers of Bertrand Russell, which McMaster University began publishing in 1983. By March 2017 this collection of his shorter and previously unpublished works included 18 volumes, with several more in progress. A bibliography in three additional volumes catalogues his publications. The Russell Archives, held by McMaster's William Ready Division of Archives and Research Collections, possess over 40,000 of his letters.
Primary sources
Secondary sources | [
{
"paragraph_id": 0,
"text": "Bertrand Arthur William Russell, 3rd Earl Russell, OM, FRS (18 May 1872 – 2 February 1970) was a British mathematician, philosopher, logician, and public intellectual. He had a considerable influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science, and various areas of analytic philosophy, especially philosophy of mathematics, philosophy of language, epistemology, and metaphysics.",
"title": ""
},
{
"paragraph_id": 1,
"text": "He was one of the early 20th century's most prominent logicians and a founder of analytic philosophy, along with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Ludwig Wittgenstein. Russell with Moore led the British \"revolt against idealism\". Together with his former teacher A. N. Whitehead, Russell wrote Principia Mathematica, a milestone in the development of classical logic and a major attempt to reduce the whole of mathematics to logic (see Logicism). Russell's article \"On Denoting\" has been considered a \"paradigm of philosophy\".",
"title": ""
},
{
"paragraph_id": 2,
"text": "Russell was a pacifist who championed anti-imperialism and chaired the India League. He went to prison for his pacifism during World War I, but also saw the war against Adolf Hitler's Nazi Germany as a necessary \"lesser of two evils\". In the wake of World War II, he welcomed American global hegemony in favour of either Soviet hegemony or no (or ineffective) world leadership, even if it were to come at the cost of using their nuclear weapons. He would later criticise Stalinist totalitarianism, condemn the United States' involvement in the Vietnam War, and become an outspoken proponent of nuclear disarmament.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In 1950, Russell was awarded the Nobel Prize in Literature \"in recognition of his varied and significant writings in which he champions humanitarian ideals and freedom of thought\". He was also the recipient of the De Morgan Medal (1932), Sylvester Medal (1934), Kalinga Prize (1957), and Jerusalem Prize (1963).",
"title": ""
},
{
"paragraph_id": 4,
"text": "Bertrand Arthur William Russell was born at Ravenscroft, a country house in Trellech, Monmouthshire, on 18 May 1872, into an influential and liberal family of the British aristocracy. His parents, Viscount and Viscountess Amberley, were radical for their times. Lord Amberley consented to his wife's affair with their children's tutor, the biologist Douglas Spalding. Both were early advocates of birth control at a time when this was considered scandalous. Lord Amberley was a deist, and even asked the philosopher John Stuart Mill to act as Russell's secular godfather. Mill died the year after Russell's birth, but his writings had a great effect on Russell's life.",
"title": "Biography"
},
{
"paragraph_id": 5,
"text": "His paternal grandfather, Lord John Russell, later 1st Earl Russell (1792–1878), had twice been prime minister in the 1840s and 1860s. A member of Parliament since the early 1810s, he met with Napoleon Bonaparte in Elba. The Russells had been prominent in England for several centuries before this, coming to power and the peerage with the rise of the Tudor dynasty (see: Duke of Bedford). They established themselves as one of the leading Whig families and participated in every great political event from the dissolution of the monasteries in 1536–1540 to the Glorious Revolution in 1688–1689 and the Great Reform Act in 1832.",
"title": "Biography"
},
{
"paragraph_id": 6,
"text": "Lady Amberley was the daughter of Lord and Lady Stanley of Alderley. Russell often feared the ridicule of his maternal grandmother, one of the campaigners for education of women.",
"title": "Biography"
},
{
"paragraph_id": 7,
"text": "Russell had two siblings: brother Frank (nearly seven years older), and sister Rachel (four years older). In June 1874, Russell's mother died of diphtheria, followed shortly by Rachel's death. In January 1876, his father died of bronchitis after a long period of depression. Frank and Bertrand were placed in the care of staunchly Victorian paternal grandparents, who lived at Pembroke Lodge in Richmond Park. His grandfather, former Prime Minister Earl Russell, died in 1878, and was remembered by Russell as a kindly old man in a wheelchair. His grandmother, the Countess Russell (née Lady Frances Elliot), was the dominant family figure for the rest of Russell's childhood and youth.",
"title": "Biography"
},
{
"paragraph_id": 8,
"text": "The Countess was from a Scottish Presbyterian family and successfully petitioned the Court of Chancery to set aside a provision in Amberley's will requiring the children to be raised as agnostics. Despite her religious conservatism, she held progressive views in other areas (accepting Darwinism and supporting Irish Home Rule), and her influence on Bertrand Russell's outlook on social justice and standing up for principle remained with him throughout his life. Her favourite Bible verse, \"Thou shalt not follow a multitude to do evil\", became his motto. The atmosphere at Pembroke Lodge was one of frequent prayer, emotional repression and formality; Frank reacted to this with open rebellion, but the young Bertrand learned to hide his feelings.",
"title": "Biography"
},
{
"paragraph_id": 9,
"text": "Russell's adolescence was lonely and he often contemplated suicide. He remarked in his autobiography that his keenest interests in \"nature and books and (later) mathematics saved me from complete despondency;\" only his wish to know more mathematics kept him from suicide. He was educated at home by a series of tutors. When Russell was eleven years old, his brother Frank introduced him to the work of Euclid, which he described in his autobiography as \"one of the great events of my life, as dazzling as first love\".",
"title": "Biography"
},
{
"paragraph_id": 10,
"text": "During these formative years he also discovered the works of Percy Bysshe Shelley. Russell wrote: \"I spent all my spare time reading him, and learning him by heart, knowing no one to whom I could speak of what I thought or felt, I used to reflect how wonderful it would have been to know Shelley, and to wonder whether I should meet any live human being with whom I should feel so much sympathy.\" Russell claimed that beginning at age 15, he spent considerable time thinking about the validity of Christian religious dogma, which he found unconvincing. At this age, he came to the conclusion that there is no free will and, two years later, that there is no life after death. Finally, at the age of 18, after reading Mill's Autobiography, he abandoned the \"First Cause\" argument and became an atheist.",
"title": "Biography"
},
{
"paragraph_id": 11,
"text": "He travelled to the continent in 1890 with an American friend, Edward FitzGerald, and with FitzGerald's family he visited the Paris Exhibition of 1889 and climbed the Eiffel Tower soon after it was completed.",
"title": "Biography"
},
{
"paragraph_id": 12,
"text": "Russell won a scholarship to read for the Mathematical Tripos at Trinity College, Cambridge, and began his studies there in 1890, taking as coach Robert Rumsey Webb. He became acquainted with the younger George Edward Moore and came under the influence of Alfred North Whitehead, who recommended him to the Cambridge Apostles. He quickly distinguished himself in mathematics and philosophy, graduating as seventh Wrangler in the former in 1893 and becoming a Fellow in the latter in 1895.",
"title": "Biography"
},
{
"paragraph_id": 13,
"text": "Russell was 17 years old in the summer of 1889 when he met the family of Alys Pearsall Smith, an American Quaker five years older, who was a graduate of Bryn Mawr College near Philadelphia. He became a friend of the Pearsall Smith family. They knew him primarily as \"Lord John's grandson\" and enjoyed showing him off.",
"title": "Biography"
},
{
"paragraph_id": 14,
"text": "He soon fell in love with the puritanical, high-minded Alys, and contrary to his grandmother's wishes, married her on 13 December 1894. Their marriage began to fall apart in 1901 when it occurred to Russell, while cycling, that he no longer loved her. She asked him if he loved her and he replied that he did not. Russell also disliked Alys's mother, finding her controlling and cruel. A lengthy period of separation began in 1911 with Russell's affair with Lady Ottoline Morrell, and he and Alys finally divorced in 1921 to enable Russell to remarry.",
"title": "Biography"
},
{
"paragraph_id": 15,
"text": "During his years of separation from Alys, Russell had passionate (and often simultaneous) affairs with a number of women, including Morrell and the actress Lady Constance Malleson. Some have suggested that at this point he had an affair with Vivienne Haigh-Wood, the English governess and writer, and first wife of T. S. Eliot.",
"title": "Biography"
},
{
"paragraph_id": 16,
"text": "Russell began his published work in 1896 with German Social Democracy, a study in politics that was an early indication of a lifelong interest in political and social theory. In 1896 he taught German social democracy at the London School of Economics. He was a member of the Coefficients dining club of social reformers set up in 1902 by the Fabian campaigners Sidney and Beatrice Webb.",
"title": "Biography"
},
{
"paragraph_id": 17,
"text": "He now started an intensive study of the foundations of mathematics at Trinity. In 1897, he wrote An Essay on the Foundations of Geometry (submitted at the Fellowship Examination of Trinity College) which discussed the Cayley–Klein metrics used for non-Euclidean geometry. He attended the First International Congress of Philosophy in Paris in 1900 where he met Giuseppe Peano and Alessandro Padoa. The Italians had responded to Georg Cantor, making a science of set theory; they gave Russell their literature including the Formulario mathematico. Russell was impressed by the precision of Peano's arguments at the Congress, read the literature upon returning to England, and came upon Russell's paradox. In 1903 he published The Principles of Mathematics, a work on foundations of mathematics. It advanced a thesis of logicism, that mathematics and logic are one and the same.",
"title": "Biography"
},
{
"paragraph_id": 18,
"text": "At the age of 29, in February 1901, Russell underwent what he called a \"sort of mystic illumination\", after witnessing Whitehead's wife's acute suffering in an angina attack. \"I found myself filled with semi-mystical feelings about beauty... and with a desire almost as profound as that of the Buddha to find some philosophy which should make human life endurable\", Russell would later recall. \"At the end of those five minutes, I had become a completely different person.\"",
"title": "Biography"
},
{
"paragraph_id": 19,
"text": "In 1905, he wrote the essay \"On Denoting\", which was published in the philosophical journal Mind. Russell was elected a Fellow of the Royal Society (FRS) in 1908. The three-volume Principia Mathematica, written with Whitehead, was published between 1910 and 1913. This, along with the earlier The Principles of Mathematics, soon made Russell world-famous in his field. Russell's first political activity was as the Independent Liberal candidate in the 1907 by-election for the Wimbledon constituency, where he was not elected.",
"title": "Biography"
},
{
"paragraph_id": 20,
"text": "In 1910, he became a University of Cambridge lecturer at Trinity College, where he had studied. He was considered for a Fellowship, which would give him a vote in the college government and protect him from being fired for his opinions, but was passed over because he was \"anti-clerical\", essentially because he was agnostic. He was approached by the Austrian engineering student Ludwig Wittgenstein, who became his PhD student. Russell viewed Wittgenstein as a genius and a successor who would continue his work on logic. He spent hours dealing with Wittgenstein's various phobias and his frequent bouts of despair. This was often a drain on Russell's energy, but Russell continued to be fascinated by him and encouraged his academic development, including the publication of Wittgenstein's Tractatus Logico-Philosophicus in 1922. Russell delivered his lectures on logical atomism, his version of these ideas, in 1918, before the end of World War I. Wittgenstein was, at that time, serving in the Austrian Army and subsequently spent nine months in an Italian prisoner of war camp at the end of the conflict.",
"title": "Biography"
},
{
"paragraph_id": 21,
"text": "During World War I, Russell was one of the few people to engage in active pacifist activities. In 1916, because of his lack of a Fellowship, he was dismissed from Trinity College following his conviction under the Defence of the Realm Act 1914. He later described this, in Free Thought and Official Propaganda, as an illegitimate means the state used to violate freedom of expression. Russell championed the case of Eric Chappelow, a poet jailed and abused as a conscientious objector. Russell played a significant part in the Leeds Convention in June 1917, a historic event which saw well over a thousand \"anti-war socialists\" gather; many being delegates from the Independent Labour Party and the Socialist Party, united in their pacifist beliefs and advocating a peace settlement. The international press reported that Russell appeared with a number of Labour Members of Parliament (MPs), including Ramsay MacDonald and Philip Snowden, as well as former Liberal MP and anti-conscription campaigner, Professor Arnold Lupton. After the event, Russell told Lady Ottoline Morrell that, \"to my surprise, when I got up to speak, I was given the greatest ovation that was possible to give anybody\".",
"title": "Biography"
},
{
"paragraph_id": 22,
"text": "His conviction in 1916 resulted in Russell being fined £100 (equivalent to £6,000 in 2021), which he refused to pay in hope that he would be sent to prison, but his books were sold at auction to raise the money. The books were bought by friends; he later treasured his copy of the King James Bible that was stamped \"Confiscated by Cambridge Police\".",
"title": "Biography"
},
{
"paragraph_id": 23,
"text": "A later conviction for publicly lecturing against inviting the United States to enter the war on the United Kingdom's side resulted in six months' imprisonment in Brixton Prison (see Bertrand Russell's political views) in 1918 (he was prosecuted under the Defence of the Realm Act) He later said of his imprisonment:",
"title": "Biography"
},
{
"paragraph_id": 24,
"text": "I found prison in many ways quite agreeable. I had no engagements, no difficult decisions to make, no fear of callers, no interruptions to my work. I read enormously; I wrote a book, \"Introduction to Mathematical Philosophy\"... and began the work for \"The Analysis of Mind\". I was rather interested in my fellow-prisoners, who seemed to me in no way morally inferior to the rest of the population, though they were on the whole slightly below the usual level of intelligence as was shown by their having been caught.",
"title": "Biography"
},
{
"paragraph_id": 25,
"text": "While he was reading Strachey's Eminent Victorians chapter about Gordon he laughed out loud in his cell prompting the warder to intervene and reminding him that \"prison was a place of punishment\".",
"title": "Biography"
},
{
"paragraph_id": 26,
"text": "Russell was reinstated to Trinity in 1919, resigned in 1920, was Tarner Lecturer in 1926 and became a Fellow again in 1944 until 1949.",
"title": "Biography"
},
{
"paragraph_id": 27,
"text": "In 1924, Russell again gained press attention when attending a \"banquet\" in the House of Commons with well-known campaigners, including Arnold Lupton, who had been an MP and had also endured imprisonment for \"passive resistance to military or naval service\".",
"title": "Biography"
},
{
"paragraph_id": 28,
"text": "In 1941, G. H. Hardy wrote a 61-page pamphlet titled Bertrand Russell and Trinity – published later as a book by Cambridge University Press with a foreword by C. D. Broad—in which he gave an authoritative account of Russell's 1916 dismissal from Trinity College, explaining that a reconciliation between the college and Russell had later taken place and gave details about Russell's personal life. Hardy writes that Russell's dismissal had created a scandal since the vast majority of the Fellows of the College opposed the decision. The ensuing pressure from the Fellows induced the Council to reinstate Russell. In January 1920, it was announced that Russell had accepted the reinstatement offer from Trinity and would begin lecturing from October. In July 1920, Russell applied for a one year leave of absence; this was approved. He spent the year giving lectures in China and Japan. In January 1921, it was announced by Trinity that Russell had resigned and his resignation had been accepted. This resignation, Hardy explains, was completely voluntary and was not the result of another altercation.",
"title": "Biography"
},
{
"paragraph_id": 29,
"text": "The reason for the resignation, according to Hardy, was that Russell was going through a tumultuous time in his personal life with a divorce and subsequent remarriage. Russell contemplated asking Trinity for another one-year leave of absence but decided against it, since this would have been an \"unusual application\" and the situation had the potential to snowball into another controversy. Although Russell did the right thing, in Hardy's opinion, the reputation of the College suffered with Russell's resignation, since the 'world of learning' knew about Russell's altercation with Trinity but not that the rift had healed. In 1925, Russell was asked by the Council of Trinity College to give the Tarner Lectures on the Philosophy of the Sciences; these would later be the basis for one of Russell's best-received books according to Hardy: The Analysis of Matter, published in 1927. In the preface to the Trinity pamphlet, Hardy wrote:",
"title": "Biography"
},
{
"paragraph_id": 30,
"text": "I wish to make it plain that Russell himself is not responsible, directly or indirectly, for the writing of the pamphlet.... I wrote it without his knowledge and, when I sent him the typescript and asked for his permission to print it, I suggested that, unless it contained misstatement of fact, he should make no comment on it. He agreed to this... no word has been changed as the result of any suggestion from him.",
"title": "Biography"
},
{
"paragraph_id": 31,
"text": "In August 1920, Russell travelled to Soviet Russia as part of an official delegation sent by the British government to investigate the effects of the Russian Revolution. He wrote a four-part series of articles, titled \"Soviet Russia—1920\", for the magazine The Nation. He met Vladimir Lenin and had an hour-long conversation with him. In his autobiography, he mentions that he found Lenin disappointing, sensing an \"impish cruelty\" in him and comparing him to \"an opinionated professor\". He cruised down the Volga on a steamship. His experiences destroyed his previous tentative support for the revolution. He subsequently wrote a book, The Practice and Theory of Bolshevism, about his experiences on this trip, taken with a group of 24 others from the UK, all of whom came home thinking well of the Soviet regime, despite Russell's attempts to change their minds. For example, he told them that he had heard shots fired in the middle of the night and was sure that these were clandestine executions, but the others maintained that it was only cars backfiring.",
"title": "Biography"
},
{
"paragraph_id": 32,
"text": "Russell's lover Dora Black, a British author, feminist and socialist campaigner, visited Soviet Russia independently at the same time; in contrast to his reaction, she was enthusiastic about the Bolshevik revolution.",
"title": "Biography"
},
{
"paragraph_id": 33,
"text": "The following year, Russell, accompanied by Dora, visited Peking (as Beijing was then known outside of China) to lecture on philosophy for a year. He went with optimism and hope, seeing China as then being on a new path. Other scholars present in China at the time included John Dewey and Rabindranath Tagore, the Indian Nobel-laureate poet. Before leaving China, Russell became gravely ill with pneumonia, and incorrect reports of his death were published in the Japanese press. When the couple visited Japan on their return journey, Dora took on the role of spurning the local press by handing out notices reading \"Mr. Bertrand Russell, having died according to the Japanese press, is unable to give interviews to Japanese journalists\". Apparently they found this harsh and reacted resentfully.",
"title": "Biography"
},
{
"paragraph_id": 34,
"text": "Dora was six months pregnant when the couple returned to England on 26 August 1921. Russell arranged a hasty divorce from Alys, marrying Dora six days after the divorce was finalised, on 27 September 1921. Russell's children with Dora were John Conrad Russell, 4th Earl Russell, born on 16 November 1921, and Katharine Jane Russell (later Lady Katharine Tait), born on 29 December 1923. Russell supported his family during this time by writing popular books explaining matters of physics, ethics, and education to the layman.",
"title": "Biography"
},
{
"paragraph_id": 35,
"text": "From 1922 to 1927 the Russells divided their time between London and Cornwall, spending summers in Porthcurno. In the 1922 and 1923 general elections Russell stood as a Labour Party candidate in the Chelsea constituency, but only on the basis that he knew he was extremely unlikely to be elected in such a safe Conservative seat, and he was unsuccessful on both occasions.",
"title": "Biography"
},
{
"paragraph_id": 36,
"text": "After the birth of his two children, he became interested in education, especially early childhood education. He was not satisfied with the old traditional education and thought that progressive education also had some flaws; as a result, together with Dora, Russell founded the experimental Beacon Hill School in 1927. The school was run from a succession of different locations, including its original premises at the Russells' residence, Telegraph House, near Harting, West Sussex. During this time, he published On Education, Especially in Early Childhood. On 8 July 1930, Dora gave birth to her third child Harriet Ruth. After he left the school in 1932, Dora continued it until 1943.",
"title": "Biography"
},
{
"paragraph_id": 37,
"text": "In 1927 Russell met Barry Fox (later Barry Stevens), who became a well-known Gestalt therapist and writer in later years. They developed an intense relationship, and in Fox's words: \"... for three years we were very close.\" Fox sent her daughter Judith to Beacon Hill School. From 1927 to 1932 Russell wrote 34 letters to Fox. Upon the death of his elder brother Frank, in 1931, Russell became the 3rd Earl Russell.",
"title": "Biography"
},
{
"paragraph_id": 38,
"text": "Russell's marriage to Dora grew increasingly tenuous, and it reached a breaking point over her having two children with an American journalist, Griffin Barry. They separated in 1932 and finally divorced. On 18 January 1936, Russell married his third wife, an Oxford undergraduate named Patricia (\"Peter\") Spence, who had been his children's governess since 1930. Russell and Peter had one son, Conrad Sebastian Robert Russell, 5th Earl Russell, who became a prominent historian and one of the leading figures in the Liberal Democrat party.",
"title": "Biography"
},
{
"paragraph_id": 39,
"text": "Russell returned in 1937 to the London School of Economics to lecture on the science of power. During the 1930s, Russell became a friend and collaborator of V. K. Krishna Menon, then President of the India League, the foremost lobby in the United Kingdom for Indian independence. Russell chaired the India League from 1932 to 1939.",
"title": "Biography"
},
{
"paragraph_id": 40,
"text": "Russell's political views changed over time, mostly about war. He opposed rearmament against Nazi Germany. In 1937, he wrote in a personal letter: \"If the Germans succeed in sending an invading army to England we should do best to treat them as visitors, give them quarters and invite the commander and chief to dine with the prime minister.\" In 1940, he changed his appeasement view that avoiding a full-scale world war was more important than defeating Hitler. He concluded that Adolf Hitler taking over all of Europe would be a permanent threat to democracy. In 1943, he adopted a stance toward large-scale warfare called \"relative political pacifism\": \"War was always a great evil, but in some particularly extreme circumstances, it may be the lesser of two evils.\"",
"title": "Biography"
},
{
"paragraph_id": 41,
"text": "Before World War II, Russell taught at the University of Chicago, later moving on to Los Angeles to lecture at the UCLA Department of Philosophy. He was appointed professor at the City College of New York (CCNY) in 1940, but after a public outcry the appointment was annulled by a court judgment that pronounced him \"morally unfit\" to teach at the college because of his opinions, especially those relating to sexual morality, detailed in Marriage and Morals (1929). The matter was however taken to the New York Supreme Court by Jean Kay who was afraid that her daughter would be harmed by the appointment, though her daughter was not a student at CCNY. Many intellectuals, led by John Dewey, protested at his treatment. Albert Einstein's oft-quoted aphorism that \"great spirits have always encountered violent opposition from mediocre minds\" originated in his open letter, dated 19 March 1940, to Morris Raphael Cohen, a professor emeritus at CCNY, supporting Russell's appointment. Dewey and Horace M. Kallen edited a collection of articles on the CCNY affair in The Bertrand Russell Case. Russell soon joined the Barnes Foundation, lecturing to a varied audience on the history of philosophy; these lectures formed the basis of A History of Western Philosophy. His relationship with the eccentric Albert C. Barnes soon soured, and he returned to the UK in 1944 to rejoin the faculty of Trinity College.",
"title": "Biography"
},
{
"paragraph_id": 42,
"text": "Russell participated in many broadcasts over the BBC, particularly The Brains Trust and for the Third Programme, on various topical and philosophical subjects. By this time Russell was world-famous outside academic circles, frequently the subject or author of magazine and newspaper articles, and was called upon to offer opinions on a wide variety of subjects, even mundane ones. En route to one of his lectures in Trondheim, Russell was one of 24 survivors (among a total of 43 passengers) of an aeroplane crash in Hommelvik in October 1948. He said he owed his life to smoking since the people who drowned were in the non-smoking part of the plane. A History of Western Philosophy (1945) became a best-seller and provided Russell with a steady income for the remainder of his life.",
"title": "Biography"
},
{
"paragraph_id": 43,
"text": "In 1942, Russell argued in favour of a moderate socialism, capable of overcoming its metaphysical principles. In an inquiry on dialectical materialism, launched by the Austrian artist and philosopher Wolfgang Paalen in his journal DYN, Russell said: \"I think the metaphysics of both Hegel and Marx plain nonsense—Marx's claim to be 'science' is no more justified than Mary Baker Eddy's. This does not mean that I am opposed to socialism.\"",
"title": "Biography"
},
{
"paragraph_id": 44,
"text": "In 1943, Russell expressed support for Zionism: \"I have come gradually to see that, in a dangerous and largely hostile world, it is essential to Jews to have some country which is theirs, some region where they are not suspected aliens, some state which embodies what is distinctive in their culture\".",
"title": "Biography"
},
{
"paragraph_id": 45,
"text": "In a speech in 1948, Russell said that if the USSR's aggression continued, it would be morally worse to go to war after the USSR possessed an atomic bomb than before it possessed one, because if the USSR had no bomb the West's victory would come more swiftly and with fewer casualties than if there were atomic bombs on both sides. At that time, only the United States possessed an atomic bomb, and the USSR was pursuing an extremely aggressive policy towards the countries in Eastern Europe which were being absorbed into the Soviet Union's sphere of influence. Many understood Russell's comments to mean that Russell approved of a first strike in a war with the USSR, including Nigel Lawson, who was present when Russell spoke of such matters. Others, including Griffin, who obtained a transcript of the speech, have argued that he was merely explaining the usefulness of America's atomic arsenal in deterring the USSR from continuing its domination of Eastern Europe.",
"title": "Biography"
},
{
"paragraph_id": 46,
"text": "Just after the atomic bombs exploded over Hiroshima and Nagasaki, Russell wrote letters, and published articles in newspapers from 1945 to 1948, stating clearly that it was morally justified and better to go to war against the USSR using atomic bombs while the United States possessed them and before the USSR did. In September 1949, one week after the USSR tested its first A-bomb, but before this became known, Russell wrote that the USSR would be unable to develop nuclear weapons because following Stalin's purges only science based on Marxist principles would be practised in the Soviet Union. After it became known that the USSR had carried out its nuclear bomb tests, Russell declared his position advocating the total abolition of atomic weapons.",
"title": "Biography"
},
{
"paragraph_id": 47,
"text": "In 1948, Russell was invited by the BBC to deliver the inaugural Reith Lectures—what was to become an annual series of lectures, still broadcast by the BBC. His series of six broadcasts, titled Authority and the Individual, explored themes such as the role of individual initiative in the development of a community and the role of state control in a progressive society. Russell continued to write about philosophy. He wrote a foreword to Words and Things by Ernest Gellner, which was highly critical of the later thought of Ludwig Wittgenstein and of ordinary language philosophy. Gilbert Ryle refused to have the book reviewed in the philosophical journal Mind, which caused Russell to respond via The Times. The result was a month-long correspondence in The Times between the supporters and detractors of ordinary language philosophy, which was only ended when the paper published an editorial critical of both sides but agreeing with the opponents of ordinary language philosophy.",
"title": "Biography"
},
{
"paragraph_id": 48,
"text": "In the King's Birthday Honours of 9 June 1949, Russell was awarded the Order of Merit, and the following year he was awarded the Nobel Prize in Literature. When he was given the Order of Merit, George VI was affable but slightly embarrassed at decorating a former jailbird, saying, \"You have sometimes behaved in a manner that would not do if generally adopted\". Russell merely smiled, but afterwards claimed that the reply \"That's right, just like your brother\" immediately came to mind.",
"title": "Biography"
},
{
"paragraph_id": 49,
"text": "In 1950, Russell attended the inaugural conference for the Congress for Cultural Freedom, a CIA-funded anti-communist organisation committed to the deployment of culture as a weapon during the Cold War. Russell was one of the best-known patrons of the Congress, until he resigned in 1956.",
"title": "Biography"
},
{
"paragraph_id": 50,
"text": "In 1952, Russell was divorced by Spence, with whom he had been very unhappy. Conrad, Russell's son by Spence, did not see his father between the time of the divorce and 1968 (at which time his decision to meet his father caused a permanent breach with his mother). Russell married his fourth wife, Edith Finch, soon after the divorce, on 15 December 1952. They had known each other since 1925, and Edith had taught English at Bryn Mawr College near Philadelphia, sharing a house for 20 years with Russell's old friend Lucy Donnelly. Edith remained with him until his death, and, by all accounts, their marriage was a happy, close, and loving one. Russell's eldest son John suffered from serious mental illness, which was the source of ongoing disputes between Russell and his former wife Dora.",
"title": "Biography"
},
{
"paragraph_id": 51,
"text": "In 1962 Russell played a public role in the Cuban Missile Crisis: in an exchange of telegrams with Soviet leader Nikita Khrushchev, Khrushchev assured him that the Soviet government would not be reckless. Russell sent this telegram to President Kennedy:",
"title": "Biography"
},
{
"paragraph_id": 52,
"text": "YOUR ACTION DESPERATE. THREAT TO HUMAN SURVIVAL. NO CONCEIVABLE JUSTIFICATION. CIVILIZED MAN CONDEMNS IT. WE WILL NOT HAVE MASS MURDER. ULTIMATUM MEANS WAR... END THIS MADNESS.",
"title": "Biography"
},
{
"paragraph_id": 53,
"text": "According to historian Peter Knight, after JFK's assassination, Russell, \"prompted by the emerging work of the lawyer Mark Lane in the US ... rallied support from other noteworthy and left-leaning compatriots to form a Who Killed Kennedy Committee in June 1964, members of which included Michael Foot MP, Caroline Benn, the publisher Victor Gollancz, the writers John Arden and J. B. Priestley, and the Oxford history professor Hugh Trevor-Roper.\" Russell published a highly critical article weeks before the Warren Commission Report was published, setting forth 16 Questions on the Assassination and equating the Oswald case with the Dreyfus affair of late 19th-century France, in which the state convicted an innocent man. Russell also criticised the American press for failing to heed any voices critical of the official version.",
"title": "Biography"
},
{
"paragraph_id": 54,
"text": "Bertrand Russell was opposed to war from a young age; his opposition to World War I being used as grounds for his dismissal from Trinity College at Cambridge. This incident fused two of his most controversial causes, as he had failed to be granted Fellow status which would have protected him from firing, because he was not willing to either pretend to be a devout Christian, or at least avoid admitting he was agnostic.",
"title": "Biography"
},
{
"paragraph_id": 55,
"text": "He later described the resolution of these issues as essential to freedom of thought and expression, citing the incident in Free Thought and Official Propaganda, where he explained that the expression of any idea, even the most obviously \"bad\", must be protected not only from direct State intervention, but also economic leveraging and other means of being silenced:",
"title": "Biography"
},
{
"paragraph_id": 56,
"text": "The opinions which are still persecuted strike the majority as so monstrous and immoral that the general principle of toleration cannot be held to apply to them. But this is exactly the same view as that which made possible the tortures of the Inquisition.",
"title": "Biography"
},
{
"paragraph_id": 57,
"text": "Russell spent the 1950s and 1960s engaged in political causes primarily related to nuclear disarmament and opposing the Vietnam War. The 1955 Russell–Einstein Manifesto was a document calling for nuclear disarmament and was signed by eleven of the most prominent nuclear physicists and intellectuals of the time. In October 1960 \"The Committee of 100\" was formed with a declaration by Russell and Michael Scott, entitled \"Act or Perish\", which called for a \"movement of nonviolent resistance to nuclear war and weapons of mass destruction\". In September 1961, at the age of 89, Russell was jailed for seven days in Brixton Prison for a \"breach of the peace\" after taking part in an anti-nuclear demonstration in London. The magistrate offered to exempt him from jail if he pledged himself to \"good behaviour\", to which Russell replied: \"No, I won't.\"",
"title": "Biography"
},
{
"paragraph_id": 58,
"text": "In 1966–1967, Russell worked with Jean-Paul Sartre and many other intellectual figures to form the Russell Vietnam War Crimes Tribunal to investigate the conduct of the United States in Vietnam. He wrote a great many letters to world leaders during this period.",
"title": "Biography"
},
{
"paragraph_id": 59,
"text": "Early in his life Russell supported eugenicist policies. He proposed in 1894 that the state issue certificates of health to prospective parents and withhold public benefits from those considered unfit. In 1929 he wrote that people deemed \"mentally defective\" and \"feebleminded\" should be sexually sterilised because they \"are apt to have enormous numbers of illegitimate children, all, as a rule, wholly useless to the community.\" Russell was also an advocate of population control:",
"title": "Biography"
},
{
"paragraph_id": 60,
"text": "The nations which at present increase rapidly should be encouraged to adopt the methods by which, in the West, the increase of population has been checked. Educational propaganda, with government help, could achieve this result in a generation. There are, however, two powerful forces opposed to such a policy: one is religion, the other is nationalism. I think it is the duty of all to proclaim that opposition to the spread of birth is appalling depth of misery and degradation, and that within another fifty years or so. I do not pretend that birth control is the only way in which population can be kept from increasing. There are others, which, one must suppose, opponents of birth control would prefer. War, as I remarked a moment ago, has hitherto been disappointing in this respect, but perhaps bacteriological war may prove more effective. If a Black Death could be spread throughout the whole world once in every generation survivors could procreate freely without making the world too full.",
"title": "Biography"
},
{
"paragraph_id": 61,
"text": "On 20 November 1948, in a public speech at Westminster School, addressing a gathering arranged by the New Commonwealth, Russell shocked some observers by suggesting that a preemptive nuclear strike on the Soviet Union was justified. Russell argued that war between the United States and the Soviet Union seemed inevitable, so it would be a humanitarian gesture to get it over with quickly and have the United States in the dominant position. Currently, Russell argued, humanity could survive such a war, whereas a full nuclear war after both sides had manufactured large stockpiles of more destructive weapons was likely to result in the extinction of the human race. Russell later relented from this stance, instead arguing for mutual disarmament by the nuclear powers.",
"title": "Biography"
},
{
"paragraph_id": 62,
"text": "In 1956, immediately before and during the Suez Crisis, Russell expressed his opposition to European imperialism in the Middle East. He viewed the crisis as another reminder of the pressing need for a more effective mechanism for international governance, and to restrict national sovereignty to places such as the Suez Canal area \"where general interest is involved\". At the same time the Suez Crisis was taking place, the world was also captivated by the Hungarian Revolution and the subsequent crushing of the revolt by intervening Soviet forces. Russell attracted criticism for speaking out fervently against the Suez war while ignoring Soviet repression in Hungary, to which he responded that he did not criticise the Soviets \"because there was no need. Most of the so-called Western World was fulminating\". Although he later feigned a lack of concern, at the time he was disgusted by the brutal Soviet response, and on 16 November 1956, he expressed approval for a declaration of support for Hungarian scholars which Michael Polanyi had cabled to the Soviet embassy in London twelve days previously, shortly after Soviet troops had entered Budapest.",
"title": "Biography"
},
{
"paragraph_id": 63,
"text": "In November 1957 Russell wrote an article addressing US President Dwight D. Eisenhower and Soviet Premier Nikita Khrushchev, urging a summit to consider \"the conditions of co-existence\". Khrushchev responded that peace could be served by such a meeting. In January 1958 Russell elaborated his views in The Observer, proposing a cessation of all nuclear weapons production, with the UK taking the first step by unilaterally suspending its own nuclear-weapons program if necessary, and with Germany \"freed from all alien armed forces and pledged to neutrality in any conflict between East and West\". US Secretary of State John Foster Dulles replied for Eisenhower. The exchange of letters was published as The Vital Letters of Russell, Khrushchev, and Dulles.",
"title": "Biography"
},
{
"paragraph_id": 64,
"text": "Russell was asked by The New Republic, a liberal American magazine, to elaborate his views on world peace. He urged that all nuclear weapons testing and flights by planes armed with nuclear weapons be halted immediately, and negotiations be opened for the destruction of all hydrogen bombs, with the number of conventional nuclear devices limited to ensure a balance of power. He proposed that Germany be reunified and accept the Oder-Neisse line as its border, and that a neutral zone be established in Central Europe, consisting at the minimum of Germany, Poland, Hungary, and Czechoslovakia, with each of these countries being free of foreign troops and influence, and prohibited from forming alliances with countries outside the zone. In the Middle East, Russell suggested that the West avoid opposing Arab nationalism, and proposed the creation of a United Nations peacekeeping force to guard Israel's frontiers to ensure that Israel was prevented from committing aggression and protected from it. He also suggested Western recognition of the People's Republic of China, and that it be admitted to the UN with a permanent seat on the UN Security Council.",
"title": "Biography"
},
{
"paragraph_id": 65,
"text": "He was in contact with Lionel Rogosin while the latter was filming his anti-war film Good Times, Wonderful Times in the 1960s. He became a hero to many of the youthful members of the New Left. In early 1963, Russell became increasingly vocal in his disapproval of the Vietnam War, and felt that the US government's policies there were near-genocidal. In 1963 he became the inaugural recipient of the Jerusalem Prize, an award for writers concerned with the freedom of the individual in society. In 1964 he was one of eleven world figures who issued an appeal to Israel and the Arab countries to accept an arms embargo and international supervision of nuclear plants and rocket weaponry. In October 1965 he tore up his Labour Party card because he suspected Harold Wilson's Labour government was going to send troops to support the United States in Vietnam.",
"title": "Biography"
},
{
"paragraph_id": 66,
"text": "In June 1955, Russell had leased Plas Penrhyn in Penrhyndeudraeth, Merionethshire, Wales and on 5 July of the following year it became his and Edith's principal residence.",
"title": "Biography"
},
{
"paragraph_id": 67,
"text": "Russell published his three-volume autobiography in 1967, 1968, and 1969. He made a cameo appearance playing himself in the anti-war Hindi film Aman, by Mohan Kumar, which was released in India in 1967. This was Russell's only appearance in a feature film.",
"title": "Biography"
},
{
"paragraph_id": 68,
"text": "On 23 November 1969, he wrote to The Times newspaper saying that the preparation for show trials in Czechoslovakia was \"highly alarming\". The same month, he appealed to Secretary General U Thant of the United Nations to support an international war crimes commission to investigate alleged torture and genocide by the United States in South Vietnam during the Vietnam War. The following month, he protested to Alexei Kosygin over the expulsion of Aleksandr Solzhenitsyn from the Soviet Union of Writers.",
"title": "Biography"
},
{
"paragraph_id": 69,
"text": "On 31 January 1970, Russell issued a statement condemning \"Israel's aggression in the Middle East\", and in particular, Israeli bombing raids being carried out deep in Egyptian territory as part of the War of Attrition, which he compared to German bombing raids in the Battle of Britain and the US bombing of Vietnam. He called for an Israeli withdrawal to the pre-Six-Day War borders. This was Russell's final political statement or act. It was read out at the International Conference of Parliamentarians in Cairo on 3 February 1970, the day after his death.",
"title": "Biography"
},
{
"paragraph_id": 70,
"text": "Russell died of influenza, just after 8 pm on 2 February 1970 at his home in Penrhyndeudraeth, aged 97. His body was cremated in Colwyn Bay on 5 February 1970 with five people present. In accordance with his will, there was no religious ceremony but one minute's silence; his ashes were later scattered over the Welsh mountains. Although he was born in Monmouthshire, and died in Penrhyndeudraeth in Wales, Russell identified as English. Later in 1970, on 23 October, his will was published showing he had left an estate valued at £69,423 (equivalent to £1.1 million in 2021). In 1980, a memorial to Russell was commissioned by a committee including the philosopher A. J. Ayer. It consists of a bust of Russell in Red Lion Square in London sculpted by Marcelle Quinton.",
"title": "Biography"
},
{
"paragraph_id": 71,
"text": "Lady Katharine Jane Tait, Russell's daughter, founded the Bertrand Russell Society in 1974 to preserve and understand his work. It publishes the Bertrand Russell Society Bulletin, holds meetings and awards prizes for scholarship, including the Bertrand Russell Society Award. She also authored several essays about her father; as well as a book, My Father, Bertrand Russell, which was published in 1975. All members receive Russell: The Journal of Bertrand Russell Studies.",
"title": "Biography"
},
{
"paragraph_id": 72,
"text": "For the sesquicentennial of his birth, in May 2022, McMaster University's Bertrand Russell Archive, the university's largest and most heavily used research collection, organised both a physical and virtual exhibition on Russell's anti-nuclear stance in the post-war era, Scientists for Peace: the Russell-Einstein Manifesto and the Pugwash Conference, which included the earliest version of the Russell–Einstein Manifesto. The Bertrand Russell Peace Foundation held a commemoration at Conway Hall in Red Lion Square, London, on 18 May, the anniversary of his birth. For its part, on the same day, La Estrella de Panamá published a biographical sketch by Francisco Díaz Montilla, who commented that \"[if he] had to characterize Russell's work in one sentence [he] would say: criticism and rejection of dogmatism.\"",
"title": "Biography"
},
{
"paragraph_id": 73,
"text": "Bangladesh's first leader, Mujibur Rahman, named his youngest son Sheikh Russel in honour of Bertrand Russell.",
"title": "Biography"
},
{
"paragraph_id": 74,
"text": "Russell first married Alys Whitall Smith (died 1951) in 1894. The marriage was dissolved in 1921 with no issue. His second marriage was to Dora Winifred Black MBE (died 1986), daughter of Sir Frederick Black, in 1921. This was dissolved in 1935, having produced two children:",
"title": "Biography"
},
{
"paragraph_id": 75,
"text": "Russell's third marriage was to Patricia Helen Spence (died 2004) in 1936, with the marriage producing one child:",
"title": "Biography"
},
{
"paragraph_id": 76,
"text": "Russell's third marriage ended in divorce in 1952. He married Edith Finch in the same year. Finch survived Russell, dying in 1978.",
"title": "Biography"
},
{
"paragraph_id": 77,
"text": "Russell held throughout his life the following styles and honours:",
"title": "Biography"
},
{
"paragraph_id": 78,
"text": "Russell is generally credited with being one of the founders of analytic philosophy. He was deeply impressed by Gottfried Leibniz (1646–1716), and wrote on every major area of philosophy except aesthetics. He was particularly prolific in the fields of metaphysics, logic and the philosophy of mathematics, the philosophy of language, ethics and epistemology. When Brand Blanshard asked Russell why he did not write on aesthetics, Russell replied that he did not know anything about it, though he hastened to add \"but that is not a very good excuse, for my friends tell me it has not deterred me from writing on other subjects\".",
"title": "Views"
},
{
"paragraph_id": 79,
"text": "On ethics, Russell wrote that he was a utilitarian in his youth, yet he later distanced himself from this view.",
"title": "Views"
},
{
"paragraph_id": 80,
"text": "For the advancement of science and protection of liberty of expression, Russell advocated The Will to Doubt, the recognition that all human knowledge is at most a best guess, that one should always remember:",
"title": "Views"
},
{
"paragraph_id": 81,
"text": "None of our beliefs are quite true; all have at least a penumbra of vagueness and error. The methods of increasing the degree of truth in our beliefs are well known; they consist in hearing all sides, trying to ascertain all the relevant facts, controlling our own bias by discussion with people who have the opposite bias, and cultivating a readiness to discard any hypothesis which has proved inadequate. These methods are practised in science, and have built up the body of scientific knowledge. Every man of science whose outlook is truly scientific is ready to admit that what passes for scientific knowledge at the moment is sure to require correction with the progress of discovery; nevertheless, it is near enough to the truth to serve for most practical purposes, though not for all. In science, where alone something approximating to genuine knowledge is to be found, men's attitude is tentative and full of doubt.",
"title": "Views"
},
{
"paragraph_id": 82,
"text": "Russell described himself in 1947 as an agnostic or an atheist: he found it difficult to determine which term to adopt, saying:",
"title": "Views"
},
{
"paragraph_id": 83,
"text": "Therefore, in regard to the Olympic gods, speaking to a purely philosophical audience, I would say that I am an Agnostic. But speaking popularly, I think that all of us would say in regard to those gods that we were Atheists. In regard to the Christian God, I should, I think, take exactly the same line.",
"title": "Views"
},
{
"paragraph_id": 84,
"text": "For most of his adult life, Russell maintained religion to be little more than superstition and, despite any positive effects, largely harmful to people. He believed that religion and the religious outlook serve to impede knowledge and foster fear and dependency, and to be responsible for much of our world's wars, oppression, and misery. He was a member of the Advisory Council of the British Humanist Association and President of Cardiff Humanists until his death.",
"title": "Views"
},
{
"paragraph_id": 85,
"text": "Political and social activism occupied much of Russell's time for most of his life. Russell remained politically active almost to the end of his life, writing to and exhorting world leaders and lending his name to various causes. He was a prominent campaigner against Western intervention into the Vietnam War in the 1960s, writing essays, books, attending demonstrations, and even organising the Russell Tribunal in 1966 alongside other prominent philosophers such as Jean-Paul Sartre and Simone de Beauvoir, which fed into his 1967 book War Crimes in Vietnam.",
"title": "Views"
},
{
"paragraph_id": 86,
"text": "Russell argued for a \"scientific society\", where war would be abolished, population growth would be limited, and prosperity would be shared. He suggested the establishment of a \"single supreme world government\" able to enforce peace, claiming that \"the only thing that will redeem mankind is co-operation\". He was one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, for the first time in human history, a World Constituent Assembly convened to draft and adopt the Constitution for the Federation of Earth. Russell also expressed support for guild socialism, and commented positively on several socialist thinkers and activists. According to Jean Bricmont and Normand Baillargeon, \"Russell was both a liberal and a socialist, a combination that was perfectly comprehensible in his time, but which has become almost unthinkable today. He was a liberal in that he opposed concentrations of power in all its manifestations, military, governmental, or religious, as well as the superstitious or nationalist ideas that usually serve as its justification. But he was also a socialist, even as an extension of his liberalism, because he was equally opposed to the concentrations of power stemming from the private ownership of the major means of production, which therefore needed to be put under social control (which does not mean state control).\"",
"title": "Views"
},
{
"paragraph_id": 87,
"text": "Russell was an active supporter of the Homosexual Law Reform Society, being one of the signatories of A. E. Dyson's 1958 letter to The Times calling for a change in the law regarding male homosexual practices, which were partly legalised in 1967, when Russell was still alive.",
"title": "Views"
},
{
"paragraph_id": 88,
"text": "He expressed sympathy and support for the Palestinian people and was strongly critical of Israel's actions. He wrote in 1960 that, \"I think it was a mistake to establish a Jewish State in Palestine, but it would be a still greater mistake to try to get rid of it now that it exists.\" In his final written document, read aloud in Cairo three days after his death on 31 January 1970, he condemned Israel as an aggressive imperialist power, which \"wishes to consolidate with the least difficulty what it has already taken by violence. Every new conquest becomes the new basis of the proposed negotiation from strength, which ignores the injustice of the previous aggression.\" In regards to the Palestinian people and refugees, he wrote that, \"No people anywhere in the world would accept being expelled en masse from their own country; how can anyone require the people of Palestine to accept a punishment which nobody else would tolerate? A permanent just settlement of the refugees in their homeland is an essential ingredient of any genuine settlement in the Middle East.\"",
"title": "Views"
},
{
"paragraph_id": 89,
"text": "Russell advocated – and was one of the first people in the UK to suggest – a universal basic income. In his 1918 book Roads to Freedom, Russell wrote that \"Anarchism has the advantage as regards liberty, Socialism as regards the inducement to work. Can we not find a method of combining these two advantages? It seems to me that we can. [...] Stated in more familiar terms, the plan we are advocating amounts essentially to this: that a certain small income, sufficient for necessaries, should be secured to all, whether they work or not, and that a larger income – as much larger as might be warranted by the total amount of commodities produced – should be given to those who are willing to engage in some work which the community recognizes as useful...When education is finished, no one should be compelled to work, and those who choose not to work should receive a bare livelihood and be left completely free.\"",
"title": "Views"
},
{
"paragraph_id": 90,
"text": "In \"Reflections on My Eightieth Birthday\" (\"Postscript\" in his Autobiography), Russell wrote: \"I have lived in the pursuit of a vision, both personal and social. Personal: to care for what is noble, for what is beautiful, for what is gentle; to allow moments of insight to give wisdom at more mundane times. Social: to see in imagination the society that is to be created, where individuals grow freely, and where hate and greed and envy die because there is nothing to nourish them. These things I believe, and the world, for all its horrors, has left me unshaken\".",
"title": "Views"
},
{
"paragraph_id": 91,
"text": "Russell was a champion of freedom of opinion and an opponent of both censorship and indoctrination. In 1928, he wrote: \"The fundamental argument for freedom of opinion is the doubtfulness of all our belief... when the State intervenes to ensure the indoctrination of some doctrine, it does so because there is no conclusive evidence in favour of that doctrine ... It is clear that thought is not free if the profession of certain opinions make it impossible to make a living\". In 1957, he wrote: \"'Free thought' means thinking freely ... to be worthy of the name freethinker he must be free of two things: the force of tradition and the tyranny of his own passions.\"",
"title": "Views"
},
{
"paragraph_id": 92,
"text": "Russell has presented ideas on the possible means of control of education in case of scientific dictatorship governments, of the kind of this excerpt taken from chapter II \"General Effects of Scientific Technique\" of \"The Impact of Science on society\":",
"title": "Views"
},
{
"paragraph_id": 93,
"text": "This subject will make great strides when it is taken up by scientists under a scientific dictatorship. Anaxagoras maintained that snow is black, but no one believed him. The social psychologists of the future will have a number of classes of school children on whom they will try different methods of producing an unshakable conviction that snow is black. Various results will soon be arrived at. First, that the influence of home is obstructive. Second, that not much can be done unless indoctrination begins before the age of ten. Third, that verses set to music and repeatedly intoned are very effective. Fourth, that the opinion that snow is white must be held to show a morbid taste for eccentricity. But I anticipate. It is for future scientists to make these maxims precise and discover exactly how much it costs per head to make children believe that snow is black, and how much less it would cost to make them believe it is dark grey. Although this science will be diligently studied, it will be rigidly confined to the governing class. The populace will not be allowed to know how its convictions were generated. When the technique has been perfected, every government that has been in charge of education for a generation will be able to control its subjects securely without the need of armies or policemen. As yet there is only one country which has succeeded in creating this politician's paradise. The social effects of scientific technique have already been many and important, and are likely to be even more noteworthy in the future. Some of these effects depend upon the political and economic character of the country concerned; others are inevitable, whatever this character may be.",
"title": "Views"
},
{
"paragraph_id": 94,
"text": "He pushed his visionary scenarios even further into details, in the chapter III \"Scientific Technique in an Oligarchy\" of the same book, stating as an example:",
"title": "Views"
},
{
"paragraph_id": 95,
"text": "In future such failures are not likely to occur where there is dictatorship. Diet, injections, and injunctions will combine, from a very early age, to produce the sort of character and the sort of beliefs that the authorities consider desirable, and any serious criticism of the powers that be will become psychologically impossible. Even if all are miserable, all will believe themselves happy, because the government will tell them that they are so.",
"title": "Views"
},
{
"paragraph_id": 96,
"text": "Below are selected Russell's works in English, sorted by year of first publication:",
"title": "Selected works"
},
{
"paragraph_id": 97,
"text": "Russell was the author of more than sixty books and over two thousand articles. Additionally, he wrote many pamphlets, introductions, and letters to the editor. One pamphlet titled, I Appeal unto Caesar': The Case of the Conscientious Objectors, ghostwritten for Margaret Hobhouse, the mother of imprisoned peace activist Stephen Hobhouse, allegedly helped secure the release from prison of hundreds of conscientious objectors.",
"title": "Selected works"
},
{
"paragraph_id": 98,
"text": "His works can be found in anthologies and collections, including The Collected Papers of Bertrand Russell, which McMaster University began publishing in 1983. By March 2017 this collection of his shorter and previously unpublished works included 18 volumes, and several more are in progress. A bibliography in three additional volumes catalogues his publications. The Russell Archives held by McMaster's William Ready Division of Archives and Research Collections possess over 40,000 of his letters.",
"title": "Selected works"
},
{
"paragraph_id": 99,
"text": "Primary sources",
"title": "References"
},
{
"paragraph_id": 100,
"text": "Secondary sources",
"title": "References"
}
] | Bertrand Arthur William Russell, 3rd Earl Russell, was a British mathematician, philosopher, logician, and public intellectual. He had a considerable influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science, and various areas of analytic philosophy, especially philosophy of mathematics, philosophy of language, epistemology, and metaphysics. He was one of the early 20th century's most prominent logicians and a founder of analytic philosophy, along with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Ludwig Wittgenstein. Russell, with Moore, led the British "revolt against idealism". Together with his former teacher A. N. Whitehead, Russell wrote Principia Mathematica, a milestone in the development of classical logic and a major attempt to reduce the whole of mathematics to logic. Russell's article "On Denoting" has been considered a "paradigm of philosophy". Russell was a pacifist who championed anti-imperialism and chaired the India League. He went to prison for his pacifism during World War I, but also saw the war against Adolf Hitler's Nazi Germany as a necessary "lesser of two evils". In the wake of World War II, he welcomed American global hegemony in preference to either Soviet hegemony or no world leadership, even if it were to come at the cost of using their nuclear weapons. He would later criticise Stalinist totalitarianism, condemn the United States' involvement in the Vietnam War, and become an outspoken proponent of nuclear disarmament. In 1950, Russell was awarded the Nobel Prize in Literature "in recognition of his varied and significant writings in which he champions humanitarian ideals and freedom of thought". He was also the recipient of the De Morgan Medal (1932), Sylvester Medal (1934), Kalinga Prize (1957), and Jerusalem Prize (1963). | 2001-11-04T12:53:11Z | 2023-12-29T04:55:39Z | [
"Template:Nobelprize",
"Template:Blockquote",
"Template:Ref-cleanup",
"Template:S-bef",
"Template:Navboxes",
"Template:Inflation",
"Template:Isbn",
"Template:Nonspecific",
"Template:S-start",
"Template:Nbsp",
"Template:Main",
"Template:Reflist",
"Template:ISSN",
"Template:Acad",
"Template:MacTutor Biography",
"Template:S-aft",
"Template:Citation needed",
"Template:Cite book",
"Template:Cite IEP",
"Template:OL author",
"Template:Webarchive",
"Template:Cite web",
"Template:Internet Archive author",
"Template:S-end",
"Template:Vanchor",
"Template:Librivox author",
"Template:Inflation/year",
"Template:Infobox Bertrand Russell",
"Template:Cite journal",
"Template:Cite news",
"Template:Postnominals",
"Template:Colend",
"Template:Citation",
"Template:S-reg",
"Template:Short description",
"Template:Efn",
"Template:Cite ODNB",
"Template:Sister project links",
"Template:ISBN",
"Template:Bibleverse",
"Template:London Gazette",
"Template:S-ttl",
"Template:Use British English",
"Template:Use dmy dates",
"Template:See also",
"Template:Gutenberg author",
"Template:Cols",
"Template:Notelist",
"Template:Cite magazine",
"Template:Oclc",
"Template:Infobox philosopher",
"Template:Multiple image",
"Template:Cbignore",
"Template:Authority control",
"Template:Subscription required",
"Template:Dead link"
] | https://en.wikipedia.org/wiki/Bertrand_Russell |
4,165 | Boeing 767 | The Boeing 767 is an American wide-body aircraft developed and manufactured by Boeing Commercial Airplanes. The aircraft was launched as the 7X7 program on July 14, 1978, the prototype first flew on September 26, 1981, and it was certified on July 30, 1982. The initial 767-200 variant entered service on September 8, 1982, with United Airlines, and the extended-range 767-200ER in 1984. It was stretched into the 767-300 in October 1986, followed by the 767-300ER in 1988, the most popular variant. The 767-300F, a production freighter version, debuted in October 1995. It was stretched again into the 767-400ER from September 2000.
To complement the larger 747, it has a seven-abreast cross-section, accommodating smaller LD2 ULD cargo containers. The 767 is Boeing's first wide-body twinjet, powered by General Electric CF6, Rolls-Royce RB211, or Pratt & Whitney JT9D turbofans. JT9D engines were eventually replaced by PW4000 engines. The aircraft has a conventional tail and a supercritical wing for reduced aerodynamic drag. Its two-crew glass cockpit, a first for a Boeing airliner, was developed jointly with that of the 757, a narrow-body aircraft, allowing a common pilot type rating. Studies for a higher-capacity 767 in 1986 led Boeing to develop the larger 777 twinjet, introduced in June 1995.
The 159-foot-long (48.5 m) 767-200 typically seats 216 passengers over 3,900 nautical miles [nmi] (7,200 km; 4,500 mi), while the 767-200ER seats 181 over a 6,590 nmi (12,200 km; 7,580 mi) range. The 180-foot-long (54.9 m) 767-300 typically seats 269 passengers over 3,900 nmi (7,200 km; 4,500 mi), while the 767-300ER seats 218 over 5,980 nmi (11,070 km; 6,880 mi). The 767-300F can haul 116,000 lb (52.7 t) over 3,225 nmi (6,025 km; 3,711 mi), and the 201.3-foot-long (61.37 m) 767-400ER typically seats 245 passengers over 5,625 nmi (10,415 km; 6,473 mi). Military derivatives include the E-767 for surveillance and the KC-767 and KC-46 aerial tankers.
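The range conversions above follow directly from the unit definitions (1 nmi is exactly 1.852 km and about 1.15 statute miles). The short Python sketch below is not part of the original article; the helper name is illustrative, and small differences from the published figures are due to rounding:

NMI_TO_KM = 1.852      # kilometres per nautical mile (exact, by definition)
NMI_TO_MI = 1.150779   # statute miles per nautical mile

def show_range(nmi: float) -> str:
    # Express a range quoted in nautical miles in all three units.
    return f"{nmi:,.0f} nmi = {nmi * NMI_TO_KM:,.0f} km = {nmi * NMI_TO_MI:,.0f} mi"

print(show_range(3900))  # 3,900 nmi = 7,223 km = 4,488 mi (rounded above to 7,200 km; 4,500 mi)
print(show_range(5625))  # 5,625 nmi = 10,418 km = 6,473 mi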
Initially marketed for transcontinental routes, a loosening of ETOPS rules starting in 1985 allowed the aircraft to operate transatlantic flights. A total of 742 of these aircraft were in service in July 2018, with Delta Air Lines being the largest operator with 77 aircraft in its fleet. As of November 2023, Boeing has received 1,407 orders from 74 customers, of which 1,296 airplanes have been delivered, while the remaining orders are for cargo or tanker variants. Competitors have included the Airbus A300, A310, and A330-200. Its successor, the 787 Dreamliner, entered service in 2011.
In 1970, the 747 entered service as the first wide-body jetliner with a fuselage wide enough to feature a twin-aisle cabin. Two years later, the manufacturer began a development study, code-named 7X7, for a new wide-body jetliner intended to replace the 707 and other early generation narrow-body airliners. The aircraft would also provide twin-aisle seating, but in a smaller fuselage than the existing 747, McDonnell Douglas DC-10, and Lockheed L-1011 TriStar wide-bodies. To defray the high cost of development, Boeing signed risk-sharing agreements with Italian corporation Aeritalia and the Civil Transport Development Corporation (CTDC), a consortium of Japanese aerospace companies. This marked the manufacturer's first major international joint venture, and both Aeritalia and the CTDC received supply contracts in return for their early participation. The initial 7X7 was conceived as a short take-off and landing airliner intended for short-distance flights, but customers were unenthusiastic about the concept, leading to its redefinition as a mid-size, transcontinental-range airliner. At this stage the proposed aircraft featured two or three engines, with possible configurations including over-wing engines and a T-tail.
By 1976, a twinjet layout, similar to the one which had debuted on the Airbus A300, became the baseline configuration. The decision to use two engines reflected increased industry confidence in the reliability and economics of new-generation jet powerplants. While airline requirements for new wide-body aircraft remained ambiguous, the 7X7 was generally focused on mid-size, high-density markets. As such, it was intended to transport large numbers of passengers between major cities. Advancements in civil aerospace technology, including high-bypass-ratio turbofan engines, new flight deck systems, aerodynamic improvements, and more efficient lightweight designs were to be applied to the 7X7. Many of these features were also included in a parallel development effort for a new mid-size narrow-body airliner, code-named 7N7, which would become the 757. Work on both proposals proceeded through the airline industry upturn in the late 1970s.
In January 1978, Boeing announced a major extension of its Everett factory—which was then dedicated to manufacturing the 747—to accommodate its new wide-body family. In February 1978, the new jetliner received the 767 model designation, and three variants were planned: a 767-100 with 190 seats, a 767-200 with 210 seats, and a trijet 767MR/LR version with 200 seats intended for intercontinental routes. The 767MR/LR was subsequently renamed 777 for differentiation purposes. The 767 was officially launched on July 14, 1978, when United Airlines ordered 30 of the 767-200 variant, followed by 50 more 767-200 orders from American Airlines and Delta Air Lines later that year. The 767-100 was ultimately not offered for sale, as its capacity was too close to the 757's seating, while the 777 trijet was eventually dropped in favor of standardizing the twinjet configuration.
In the late 1970s, operating cost replaced capacity as the primary factor in airliner purchases. As a result, the 767's design process emphasized fuel efficiency from the outset. Boeing targeted a 20 to 30 percent cost saving over earlier aircraft, mainly through new engine and wing technology. As development progressed, engineers used computer-aided design for over a third of the 767's design drawings, and performed 26,000 hours of wind tunnel tests. Design work occurred concurrently with the 757 twinjet, leading Boeing to treat both as almost one program to reduce risk and cost. Both aircraft would ultimately receive shared design features, including avionics, flight management systems, instruments, and handling characteristics. Combined development costs were estimated at $3.5 to $4 billion.
Early 767 customers were given the choice of Pratt & Whitney JT9D or General Electric CF6 turbofans, marking the first time that Boeing had offered more than one engine option at the launch of a new airliner. Both jet engine models had a maximum output of 48,000 pounds-force (210 kN) of thrust. The engines were mounted approximately one-third the length of the wing from the fuselage, similar to previous wide-body trijets. The larger wings were designed using an aft-loaded shape which reduced aerodynamic drag and distributed lift more evenly across their span than the wings of any of the manufacturer's previous aircraft. The wings provided higher-altitude cruise performance, added fuel capacity, and expansion room for future stretched variants. The initial 767-200 was designed for sufficient range to fly across North America or across the northern Atlantic, and would be capable of operating routes up to 3,850 nautical miles (7,130 km; 4,430 mi).
The 767's fuselage width was set midway between that of the 707 and the 747 at 16.5 feet (5.03 m). While it was narrower than previous wide-body designs, seven abreast seating with two aisles could be fitted, and the reduced width produced less aerodynamic drag. The fuselage was not wide enough to accommodate two standard LD3 wide-body unit load devices side-by-side, so a smaller container, the LD2, was created specifically for the 767. Using a conventional tail design also allowed the rear fuselage to be tapered over a shorter section, providing for parallel aisles along the full length of the passenger cabin, and eliminating irregular seat rows toward the rear of the aircraft.
The 767 was the first Boeing wide-body to be designed with a two-crew digital glass cockpit. Cathode ray tube (CRT) color displays and new electronics replaced the role of the flight engineer by enabling the pilot and co-pilot to monitor aircraft systems directly. Despite the promise of reduced crew costs, United Airlines initially demanded a conventional three-person cockpit, citing concerns about the risks associated with introducing a new aircraft. The carrier maintained this position until July 1981, when a US presidential task force determined that a crew of two was safe for operating wide-body jets. A three-crew cockpit remained as an option and was fitted to the first production models. Ansett Australia ordered 767s with three-crew cockpits due to union demands; it was the only airline to operate 767s so configured. The 767's two-crew cockpit was also applied to the 757, allowing pilots to operate both aircraft after a short conversion course, and adding incentive for airlines to purchase both types.
To produce the 767, Boeing formed a network of subcontractors which included domestic suppliers and international contributions from Italy's Aeritalia and Japan's CTDC. The wings and cabin floor were produced in-house, while Aeritalia provided control surfaces, Boeing Vertol made the leading edge for the wings, and Boeing Wichita produced the forward fuselage. The CTDC provided multiple assemblies through its constituent companies, namely Fuji Heavy Industries (wing fairings and gear doors), Kawasaki Heavy Industries (center fuselage), and Mitsubishi Heavy Industries (rear fuselage, doors, and tail). Components were integrated during final assembly at the Everett factory. For expedited production of wing spars, the main structural member of aircraft wings, the Everett factory received robotic machinery to automate the process of drilling holes and inserting fasteners. This method of wing construction expanded on techniques developed for the 747. Final assembly of the first aircraft began in July 1979.
The prototype aircraft, registered N767BA and equipped with JT9D turbofans, rolled out on August 4, 1981. By this time, the 767 program had accumulated 173 firm orders from 17 customers, including Air Canada, All Nippon Airways, Britannia Airways, Transbrasil, and Trans World Airlines (TWA). On September 26, 1981, the prototype took its maiden flight under the command of company test pilots Tommy Edmonds, Lew Wallick, and John Brit. The maiden flight was largely uneventful, save for the inability to retract the landing gear because of a hydraulic fluid leak. The prototype was used for subsequent flight tests.
The 10-month 767 flight test program utilized the first six aircraft built. The first four aircraft were equipped with JT9D engines, while the fifth and sixth were fitted with CF6 engines. The test fleet was largely used to evaluate avionics, flight systems, handling, and performance, while the sixth aircraft was used for route-proving flights. During testing, pilots described the 767 as generally easy to fly, with its maneuverability unencumbered by the bulkiness associated with larger wide-body jets. Following 1,600 hours of flight tests, the JT9D-powered 767-200 received certification from the US Federal Aviation Administration (FAA) and the UK Civil Aviation Authority (CAA) in July 1982. The first delivery occurred on August 19, 1982, to United Airlines. The CF6-powered 767-200 received certification in September 1982, followed by the first delivery to Delta Air Lines on October 25, 1982.
The 767 entered service with United Airlines on September 8, 1982. The aircraft's first commercial flight used a JT9D-powered 767-200 on the Chicago-to-Denver route. The CF6-powered 767-200 commenced service three months later with Delta Air Lines. Upon delivery, early 767s were mainly deployed on domestic routes, including US transcontinental services. American Airlines and TWA began flying the 767-200 in late 1982, while Air Canada, China Airlines, El Al, and Pacific Western began operating the aircraft in 1983. The aircraft's introduction was relatively smooth, with few operational glitches and greater dispatch reliability than prior jetliners.
Forecasting airline interest in larger-capacity models, Boeing announced the stretched 767-300 in 1983 and the extended-range 767-300ER in 1984. Both models offered a 20 percent passenger capacity increase, while the extended-range version was capable of operating flights up to 5,990 nautical miles (11,090 km; 6,890 mi). Japan Airlines placed the first order for the -300 in September 1983. Following its first flight on January 30, 1986, the type entered service with Japan Airlines on October 20, 1986. The 767-300ER completed its first flight on December 9, 1986, but it was not until March 1987 that the first firm order, from American Airlines, was placed. The type entered service with American Airlines on March 3, 1988. The 767-300 and 767-300ER gained popularity after entering service, and came to account for approximately two-thirds of all 767s sold. Until the 777's 1995 debut, the 767-300 and 767-300ER remained Boeing's second-largest wide-bodies behind the 747.
Buoyed by a recovering global economy and ETOPS approval, 767 sales accelerated in the mid-to-late 1980s; 1989 was the most prolific year with 132 firm orders. By the early 1990s, the wide-body twinjet had become its manufacturer's annual best-selling aircraft, despite a slight decrease due to economic recession. During this period, the 767 became the most common airliner for transatlantic flights between North America and Europe. By the end of the decade, 767s crossed the Atlantic more frequently than all other aircraft types combined. The 767 also propelled the growth of point-to-point flights which bypassed major airline hubs in favor of direct routes. Taking advantage of the aircraft's lower operating costs and smaller capacity, operators added non-stop flights to secondary population centers, thereby eliminating the need for connecting flights. The increased number of cities receiving non-stop services caused a paradigm shift in the airline industry as point-to-point travel gained prominence at the expense of the traditional hub-and-spoke model.
In February 1990, the first 767 equipped with Rolls-Royce RB211 turbofans, a 767-300, was delivered to British Airways. Six months later, the carrier temporarily grounded its entire 767 fleet after discovering cracks in the engine pylons of several aircraft. The cracks were related to the extra weight of the RB211 engines, which are 2,205 pounds (1,000 kg) heavier than other 767 engines. During the grounding, interim repairs were conducted to alleviate stress on engine pylon components, and a parts redesign in 1991 prevented further cracks. Boeing also performed a structural reassessment, resulting in production changes and modifications to the engine pylons of all 767s in service.
In January 1993, following an order from UPS Airlines, Boeing launched a freighter variant, the 767-300F, which entered service with UPS on October 16, 1995. The 767-300F featured a main deck cargo hold, upgraded landing gear, and strengthened wing structure. In November 1993, the Japanese government launched the first 767 military derivative when it placed orders for the E-767, an Airborne Early Warning and Control (AWACS) variant based on the 767-200ER. The first two E-767s, featuring extensive modifications to accommodate surveillance radar and other monitoring equipment, were delivered in 1998 to the Japan Self-Defense Forces.
In November 1995, after abandoning development of a smaller version of the 777, Boeing announced that it was revisiting studies for a larger 767. The proposed 767-400X, a second stretch of the aircraft, offered a 12 percent capacity increase versus the 767-300, and featured an upgraded flight deck, enhanced interior, and greater wingspan. The variant was specifically aimed at Delta Air Lines' pending replacement of its aging Lockheed L-1011 TriStars, and faced competition from the A330-200, a shortened derivative of the Airbus A330. In March 1997, Delta Air Lines launched the 767-400ER when it ordered the type to replace its L-1011 fleet. In October 1997, Continental Airlines also ordered the 767-400ER to replace its McDonnell Douglas DC-10 fleet. The type completed its first flight on October 9, 1999, and entered service with Continental Airlines on September 14, 2000.
In the early 2000s, cumulative 767 deliveries approached 900, but new sales declined during an airline industry downturn. In 2001, Boeing dropped plans for a longer-range model, the 767-400ERX, in favor of the proposed Sonic Cruiser, a new jetliner which aimed to fly 15 percent faster while having comparable fuel costs to the 767. The following year, Boeing announced the KC-767 Tanker Transport, a second military derivative of the 767-200ER. Launched with an order in October 2002 from the Italian Air Force, the KC-767 was intended for the dual role of refueling other aircraft and carrying cargo. The Japanese government became the second customer for the type in March 2003. In May 2003, the United States Air Force (USAF) announced its intent to lease KC-767s to replace its aging KC-135 tankers. The plan was suspended in March 2004 amid a conflict of interest scandal, resulting in multiple US government investigations and the departure of several Boeing officials, including Philip Condit, the company's chief executive officer, and chief financial officer Michael Sears. The first KC-767s were delivered in 2008 to the Japan Self-Defense Forces.
In late 2002, after airlines expressed reservations about its emphasis on speed over cost reduction, Boeing halted development of the Sonic Cruiser. The following year, the manufacturer announced the 7E7, a mid-size 767 successor made from composite materials which promised to be 20 percent more fuel efficient. The new jetliner was the first stage of a replacement aircraft initiative called the Boeing Yellowstone Project. Customers embraced the 7E7, later renamed 787 Dreamliner, and within two years it had become the fastest-selling airliner in the company's history. In 2005, Boeing opted to continue 767 production despite record Dreamliner sales, citing a need to provide customers waiting for the 787 with a more readily available option. Subsequently, the 767-300ER was offered to customers affected by 787 delays, including All Nippon Airways and Japan Airlines. Some aging 767s, exceeding 20 years in age, were also kept in service past planned retirement dates due to the delays. To extend the operational lives of older aircraft, airlines increased heavy maintenance procedures, including D-check teardowns and inspections for corrosion, a recurring issue on aging 767s. The first 787s entered service with All Nippon Airways in October 2011, 42 months behind schedule.
In 2007, the 767 received a production boost when UPS and DHL Aviation placed a combined 33 orders for the 767-300F. Renewed freighter interest led Boeing to consider enhanced versions of the 767-200 and 767-300F with increased gross weights, 767-400ER wing extensions, and 777 avionics. Net orders for the 767 declined from 24 in 2008 to just three in 2010. During the same period, operators upgraded aircraft already in service; in 2008, the first 767-300ER retrofitted with blended winglets from Aviation Partners Incorporated debuted with American Airlines. The manufacturer-sanctioned winglets, at 11 feet (3.35 m) in height, improved fuel efficiency by an estimated 6.5 percent. Other carriers including All Nippon Airways and Delta Air Lines also ordered winglet kits.
On February 2, 2011, the 1,000th 767 rolled out, destined for All Nippon Airways. The aircraft was the 91st 767-300ER ordered by the Japanese carrier, and with its completion the 767 became the second wide-body airliner to reach the thousand-unit milestone after the 747. The 1,000th aircraft also marked the last model produced on the original 767 assembly line. Beginning with the 1,001st aircraft, production moved to another area in the Everett factory which occupied about half of the previous floor space. The new assembly line made room for 787 production and aimed to boost manufacturing efficiency by over twenty percent.
At the inauguration of its new assembly line, the 767's order backlog numbered approximately 50, only enough for production to last until 2013. Despite the reduced backlog, Boeing officials expressed optimism that additional orders would be forthcoming. On February 24, 2011, the USAF announced its selection of the KC-767 Advanced Tanker, an upgraded variant of the KC-767, for its KC-X fleet renewal program. The selection followed two rounds of tanker competition between Boeing and Airbus parent EADS, and came eight years after the USAF's original 2003 announcement of its plan to lease KC-767s. The tanker order encompassed 179 aircraft and was expected to sustain 767 production past 2013.
In December 2011, FedEx Express announced a 767-300F order for 27 aircraft to replace its DC-10 freighters, citing the USAF tanker order and Boeing's decision to continue production as contributing factors. FedEx Express agreed to buy 19 more of the -300F variant in June 2012. In June 2015, FedEx said it was accelerating retirements of planes both to reflect demand and to modernize its fleet, recording charges of $276 million (~$335 million in 2022). On July 21, 2015, FedEx announced an order for 50 767-300Fs with options on another 50, the largest order for the type. With the announcement, FedEx confirmed that it had firm orders for 106 of the freighters for delivery between 2018 and 2023. In February 2018, UPS announced an order for four more 767-300Fs, increasing the total on order to 63.
With its successor, the Boeing New Midsize Airplane, not planned for introduction before 2025, and the 787 being much larger, Boeing considered restarting production of the passenger 767-300ER to bridge the gap, with a potential demand for 50 to 60 aircraft. Needing to replace its 40 767s, United Airlines requested a price quote for other widebodies. In November 2017, Boeing CEO Dennis Muilenburg cited interest beyond military and freighter uses. However, in early 2018 Boeing Commercial Airplanes VP of marketing Randy Tinseth stated that the company did not intend to resume production of the passenger variant.
In its first-quarter 2018 earnings report, Boeing announced plans to increase production from 2.5 to 3 aircraft per month beginning in January 2020, due to increased demand in the cargo market; at the time, FedEx had 56 on order, UPS had four, and an unidentified customer had three. This rate could rise to 3.5 per month in July 2020 and 4 per month in January 2021, before decreasing to 3 per month in January 2025 and then 2 per month in July 2025. In 2019, the unit cost was US$217.9 million for a -300ER and US$220.3 million for a -300F.
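As a rough illustration only, and not a figure from the source, the announced monthly rates annualize as follows:

# Planned 767 production rates quoted above: (effective date, aircraft per month).
RATE_PLAN = [
    ("January 2020", 3.0),
    ("July 2020", 3.5),
    ("January 2021", 4.0),
    ("January 2025", 3.0),
    ("July 2025", 2.0),
]
for start, per_month in RATE_PLAN:
    # Annualized output if the rate were held for a full year.
    print(f"from {start}: {per_month} aircraft/month (~{per_month * 12:.0f}/year at that rate)")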
After the debut of the first stretched 767s, Boeing sought to address airline requests for greater capacity by proposing larger models, including a partial double-deck version informally named the "Hunchback of Mukilteo" (from a town near Boeing's Everett factory) with a 757 body section mounted over the aft main fuselage. In 1986, Boeing proposed the 767-X, a revised model with extended wings and a wider cabin, but the concept attracted too little airline interest to launch. The 767-X was shelved in 1988, by which time it had evolved into an all-new twinjet that adopted the 777 designation.
In March 2000, Boeing was set to launch the 259-seat 767-400ERX, which had also been proposed to Lauda Air, with an initial order for three aircraft from Kenya Airways and deliveries planned for 2004. Increased gross weight and a tailplane fuel tank would have boosted its range by 600 nmi (1,110 km; 690 mi), to 6,490 nmi (12,025 km; 7,470 mi), and GE could offer its 65,000–68,000 lbf (290–300 kN) CF6-80C2/G2. Rolls-Royce offered its 68,000–72,000 lbf (300–320 kN) Trent 600 for the 767-400ERX and the Boeing 747X.
Offered in July 2000, the longer-range -400ERX would have had a strengthened wing, fuselage, and landing gear for a 15,000 lb (6.8 t) higher MTOW, up to 465,000 lb (210.92 t). Thrust would have risen to 72,000 lbf (320 kN) for better takeoff performance, with the Trent 600 or the General Electric/Pratt & Whitney Engine Alliance GP7172, also offered on the 747X. Range would have increased by 525 nmi (950 km; 604 mi) to 6,150 nmi (11,390 km; 7,080 mi), with an additional fuel tank of 2,145 U.S. gallons (8,120 L) in the horizontal tail. The 767-400ERX would have offered the capacity of the Airbus A330-200 with 3% lower fuel burn and costs. Boeing cancelled its development in 2001, and Kenya Airways switched its order to the 777-200ER.
In October 2019, Boeing was reportedly studying a re-engined 767-XF for entry into service around 2025, based on the 767-400ER with an extended landing gear to accommodate larger General Electric GEnx turbofan engines. The cargo market is the main target, but a passenger version could be a cheaper alternative to the proposed New Midsize Airplane.
The 767 is a low-wing cantilever monoplane with a conventional tail unit featuring a single fin and rudder. The wings are swept at 31.5 degrees and optimized for a cruising speed of Mach 0.8 (533 mph or 858 km/h). Each wing features a supercritical airfoil cross-section and is equipped with six-panel leading edge slats, single- and double-slotted flaps, inboard and outboard ailerons, and six spoilers. The airframe further incorporates carbon-fiber-reinforced polymer (CFRP) wing surfaces, Kevlar fairings and access panels, and improved aluminum alloys, which together reduce overall weight by 1,900 pounds (860 kg) compared with preceding aircraft.
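The quoted cruise speed can be recovered from the Mach number using the standard speed-of-sound relation a = sqrt(gamma * R * T). The sketch below is illustrative rather than Boeing data; it assumes International Standard Atmosphere conditions at a typical cruise altitude of about 35,000 feet:

import math

GAMMA = 1.4        # ratio of specific heats for air
R_AIR = 287.05     # specific gas constant for air, J/(kg*K)
T_CRUISE = 218.8   # assumed static air temperature at ~35,000 ft (ISA), in kelvin

a = math.sqrt(GAMMA * R_AIR * T_CRUISE)  # local speed of sound, ~297 m/s
tas = 0.80 * a                           # true airspeed at Mach 0.80
print(f"{tas * 3.6:.0f} km/h, {tas / 0.44704:.0f} mph")  # ~854 km/h, ~531 mph

The small difference from the quoted 858 km/h reflects the assumed temperature; the Mach-to-airspeed conversion varies with altitude and day conditions.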
To distribute the aircraft's weight on the ground, the 767 has a retractable tricycle landing gear with four wheels on each main gear and two for the nose gear. The original wing and gear design accommodated the stretched 767-300 without major changes. The 767-400ER features a larger, more widely spaced main gear with 777 wheels, tires, and brakes. To prevent damage if the tail section contacts the runway surface during takeoff, 767-300 and 767-400ER models are fitted with a retractable tailskid.
All passenger 767 models have exit doors near the front and rear of the aircraft. Most 767-200 and -200ER models have one overwing exit door for emergency use; an optional second overwing exit increases maximum allowable capacity from 255 to 290. The 767-300 and -300ER typically feature two overwing exit doors or, in a configuration with no overwing exits, three exit doors on each side and a smaller exit door aft of the wing. A further configuration featuring three exit doors on each side plus one overwing exit allows an increase in maximum capacity from 290 to 351. All 767-400ERs are configured with three exit doors on each side and a smaller exit door aft of the wing. The 767-300F has one exit door at the forward left-hand side of the aircraft.
In addition to shared avionics and computer technology, the 767 uses the same auxiliary power unit, electric power systems, and hydraulic parts as the 757. A raised cockpit floor and the same forward cockpit windows result in similar pilot viewing angles. Related design and functionality allows 767 pilots to obtain a common type rating to operate the 757 and share the same seniority roster with pilots of either aircraft.
The original 767 flight deck uses six Rockwell Collins CRT screens to display electronic flight instrument system (EFIS) and engine indication and crew alerting system (EICAS) information, allowing pilots to handle monitoring tasks previously performed by the flight engineer. The CRTs replace conventional electromechanical instruments found on earlier aircraft. An enhanced flight management system, improved over versions used on early 747s, automates navigation and other functions, while an automatic landing system facilitates CAT IIIb instrument landings in low-visibility situations. In 1984, the 767 became the first aircraft to receive CAT IIIb certification from the FAA, for landings with a minimum visibility of 980 feet (300 m). On the 767-400ER, the cockpit layout is simplified further with six Rockwell Collins liquid crystal display (LCD) screens, and adapted for similarities with the 777 and the Next Generation 737. To retain operational commonality, the LCD screens can be programmed to display information in the same manner as earlier 767s. In 2012, Boeing and Rockwell Collins launched a further 787-based cockpit upgrade for the 767, featuring three landscape-format LCD screens that can display two windows each.
The 767 is equipped with three redundant hydraulic systems for operation of control surfaces, landing gear, and utility actuation systems. Each engine powers a separate hydraulic system, and the third system uses electric pumps. A ram air turbine provides power for basic controls in the event of an emergency. An early form of fly-by-wire is employed for spoiler operation, utilizing electric signaling instead of traditional control cables. The fly-by-wire system reduces weight and allows independent operation of individual spoilers.
The 767 features a twin-aisle cabin with a typical configuration of six abreast in business class and seven across in economy. The standard seven abreast, 2–3–2 economy class layout places approximately 87 percent of all seats at a window or aisle. As a result, the aircraft can be largely occupied before center seats need to be filled, and each passenger is no more than one seat from the aisle. It is possible to configure the aircraft with extra seats for up to an eight abreast configuration, but this is less common.
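The window-or-aisle share follows from simple seat counting, as the sketch below makes explicit. It is illustrative only and considers a single all-economy 2-3-2 row; the quoted figure of roughly 87 percent also reflects premium cabins, where every seat is at a window or aisle:

ROW_2_3_2 = ["window", "aisle", "aisle", "middle", "aisle", "aisle", "window"]

window_or_aisle = sum(seat != "middle" for seat in ROW_2_3_2)
print(f"{window_or_aisle}/{len(ROW_2_3_2)} = {window_or_aisle / len(ROW_2_3_2):.1%}")  # 6/7 = 85.7%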
The 767 interior introduced larger overhead bins and more lavatories per passenger than previous aircraft. The bins are wider to accommodate garment bags without folding, and strengthened for heavier carry-on items. A single, large galley is installed near the aft doors, allowing for more efficient meal service and simpler ground resupply. Passenger and service doors are an overhead plug type, which retract upwards, and commonly used doors can be equipped with an electric-assist system.
In 2000, a 777-style interior, known as the Boeing Signature Interior, debuted on the 767-400ER. Subsequently, adopted for all new-build 767s, the Signature Interior features even larger overhead bins, indirect lighting, and sculpted, curved panels. The 767-400ER also received larger windows derived from the 777. Older 767s can be retrofitted with the Signature Interior. Some operators have adopted a simpler modification known as the Enhanced Interior, featuring curved ceiling panels and indirect lighting with minimal modification of cabin architecture, as well as aftermarket modifications such as the NuLook 767 package by Heath Tecna.
In its first year, the 767 logged a 96.1 percent dispatch rate, which exceeded the industry average for all-new aircraft. Operators reported generally favorable ratings for the twinjet's sound levels, interior comfort, and economic performance. Resolved issues were minor and included the recalibration of a leading edge sensor to prevent false readings, the replacement of an evacuation slide latch, and the repair of a tailplane pivot to match production specifications.
Seeking to capitalize on its new wide-body's potential for growth, Boeing offered an extended-range model, the 767-200ER, in its first year of service. Ethiopian Airlines placed the first order for the type in December 1982. Featuring increased gross weight and greater fuel capacity, the extended-range model could carry heavier payloads at distances up to 6,385 nautical miles (11,825 km; 7,348 mi), and was targeted at overseas customers. The 767-200ER entered service with El Al on March 27, 1984. The type was mainly ordered by international airlines operating medium-traffic, long-distance flights. In May 1984, an Ethiopian Airlines 767-200ER set a non-stop record for a commercial twinjet of 12,082 km (6,524 nmi; 7,507 mi) from Washington DC to Addis Ababa.
In the mid-1980s, the 767 and its European rivals, the Airbus A300 and A310, spearheaded the growth of twinjet flights across the northern Atlantic under extended-range twin-engine operational performance standards (ETOPS) regulations, the FAA's safety rules governing transoceanic flights by aircraft with two engines. In 1976, the A300 was the first twinjet to secure permission to fly 90 minutes away from diversion airports, up from 60 minutes. In May 1985, the FAA granted its first approval for 120-minute ETOPS flights to the 767, on an individual airline basis starting with TWA, provided that the operator met flight safety criteria. This allowed the aircraft to fly overseas routes at up to two hours' distance from land. The 767 burned 7,000 lb (3.2 t) less fuel per hour than a Lockheed L-1011 TriStar on the route between Boston and Paris, a substantial saving. The Airbus A310 secured approval for 120-minute ETOPS flights one month later in June. The larger safety margins were permitted because of the improved reliability demonstrated by twinjets and their turbofan engines. The FAA lengthened the ETOPS time to 180 minutes for CF6-powered 767s in 1989, making the type the first to be certified under the longer duration, and all available engines received approval by 1993. Regulatory approval spurred the expansion of transoceanic flights with twinjet aircraft and boosted the sales of both the 767 and its rivals.
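An ETOPS minute rating translates into a maximum allowed distance from a diversion airport via the approved one-engine-inoperative cruise speed. The sketch below is illustrative only; the 400-knot speed is a hypothetical round figure, as approved speeds vary by aircraft and operator:

OEI_SPEED_KN = 400  # assumed one-engine-inoperative cruise speed, in knots (hypothetical)

for minutes in (60, 90, 120, 180):
    radius_nmi = OEI_SPEED_KN * minutes / 60  # distance covered in the diversion time
    print(f"{minutes}-minute ETOPS: up to {radius_nmi:.0f} nmi from a diversion airport")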
The 767 has been produced in three fuselage lengths. These debuted in progressively larger form as the 767-200, 767-300, and 767-400ER. Longer-range variants include the 767-200ER and 767-300ER, while cargo models include the 767-300F, a production freighter, and conversions of passenger 767-200 and 767-300 models.
When referring to different variants, Boeing and airlines often collapse the model number (767) and the variant designator, e.g. –200 or –300, into a truncated form, e.g. "762" or "763". Subsequent to the capacity number, designations may append the range identifier, though -200ER and -300ER are company marketing designations and not certificated as such. The International Civil Aviation Organization (ICAO) aircraft type designator system uses a similar numbering scheme, but adds a preceding manufacturer letter; all variants based on the 767-200 and 767-300 are classified under the codes "B762" and "B763"; the 767-400ER receives the designation of "B764".
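Expressed as a lookup table, the two schemes described above map as follows; this is an illustrative sketch, with the codes taken from the text:

DESIGNATORS = {
    "767-200":   {"shorthand": "762", "icao": "B762"},
    "767-200ER": {"shorthand": "762", "icao": "B762"},  # ER is a marketing designation
    "767-300":   {"shorthand": "763", "icao": "B763"},
    "767-300ER": {"shorthand": "763", "icao": "B763"},
    "767-400ER": {"shorthand": "764", "icao": "B764"},
}
print(DESIGNATORS["767-400ER"]["icao"])  # B764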
The 767-200 was the original model and entered service with United Airlines in 1982. The type has been used primarily by mainline U.S. carriers for domestic routes between major hub centers such as Los Angeles to Washington. The 767-200 was the first aircraft to be used on transatlantic ETOPS flights, beginning with TWA on February 1, 1985, under 90-minute diversion rules. Deliveries for the variant totaled 128 aircraft. There were 52 examples of the model in commercial service as of July 2018, almost entirely as freighter conversions. The type's competitors included the Airbus A300 and A310.
The 767-200 was produced until 1987 when production switched to the extended-range 767-200ER. Some early 767-200s were subsequently upgraded to extended-range specification. In 1998, Boeing began offering 767-200 conversions to 767-200SF (Special Freighter) specification for cargo use, and Israel Aerospace Industries has been licensed to perform cargo conversions since 2005. The conversion process entails the installation of a side cargo door, strengthened main deck floor, and added freight monitoring and safety equipment. The 767-200SF was positioned as a replacement for Douglas DC-8 freighters.
The 767-2C, a commercial freighter version of the Boeing 767-200 with wings from the -300 series and an updated flight deck, first flew on December 29, 2014. A military tanker variant of the 767-2C is being developed for the USAF as the KC-46. Boeing is building two aircraft as commercial freighters, which will be used to obtain Federal Aviation Administration certification; a further two 767-2Cs will be modified as military tankers. As of 2014, Boeing had no customers for the freighter.
The 767-200ER was the first extended-range model and entered service with El Al in 1984. The type's increased range is due to extra fuel capacity and a higher maximum takeoff weight (MTOW) of up to 395,000 lb (179,000 kg). The additional fuel capacity is achieved by carrying fuel in the center tank's otherwise dry bay. On the non-ER variant, the center tank consists of so-called cheek tanks: two interconnected halves in each wing root with a dry bay in between. The same center tank is also used on the -300ER and -400ER variants.
This version was originally offered with the same engines as the 767-200, while more powerful Pratt & Whitney PW4000 and General Electric CF6 engines later became available. The 767-200ER was the first 767 to complete a non-stop transatlantic journey, and broke the flying distance record for a twinjet airliner on April 17, 1988, with an Air Mauritius flight from Halifax, Nova Scotia to Port Louis, Mauritius, covering 8,727 nmi (16,200 km; 10,000 mi). The 767-200ER has been acquired by international operators seeking smaller wide-body aircraft for long-haul routes such as New York to Beijing. Deliveries of the type totaled 121 with no unfilled orders. As of July 2018, 21 examples of passenger and freighter conversion versions were in airline service. The type's main competitors of the time included the Airbus A300-600R and the A310-300.
The 767-300, the first stretched version of the aircraft, entered service with Japan Airlines in 1986. The type features a 21.1-foot (6.43 m) fuselage extension over the 767-200, achieved by additional sections inserted before and after the wings, for an overall length of 180.25 ft (54.9 m). Reflecting the growth potential built into the original 767 design, the wings, engines, and most systems were largely unchanged on the 767-300. An optional mid-cabin exit door is positioned ahead of the wings on the left, while more powerful Pratt & Whitney PW4000 and Rolls-Royce RB211 engines later became available. The 767-300's increased capacity has been used on high-density routes within Asia and Europe. The 767-300 was produced from 1986 until 2000. Deliveries for the type totaled 104 aircraft with no unfilled orders remaining. The type's main competitor was the Airbus A300.
The 767-300ER, the extended-range version of the 767-300, entered service with American Airlines in 1988. The type's increased range was made possible by greater fuel tankage and a higher MTOW of 407,000 lb (185,000 kg). Design improvements allowed the available MTOW to increase to 412,000 lb (187,000 kg) by 1993. Power is provided by Pratt & Whitney PW4000, General Electric CF6, or Rolls-Royce RB211 engines. The 767-300ER comes in three exit configurations: the baseline configuration has four main cabin doors and four over-wing window exits, the second configuration has six main cabin doors and two over-wing window exits; and the third configuration has six main cabin doors, as well as two smaller doors that are located behind the wings. Typical routes for the type include New York to Frankfurt.
The combination of increased capacity and range for the -300ER has been particularly attractive to both new and existing 767 operators. It is the most successful 767 version, with more orders placed than all other variants combined. As of November 2017, 767-300ER deliveries stand at 583 with no unfilled orders. There were 376 examples in service as of July 2018. The type's main competitor is the Airbus A330-200. At its 1990s peak, a new 767-300ER was valued at $85 million, dipping to around $12 million in 2018 for a 1996 build.
The 767-300F, the production freighter version of the 767-300ER, entered service with UPS Airlines in 1995. The 767-300F can hold up to 24 standard 88-by-125-inch (220 by 320 cm) pallets on its main deck and up to 30 LD2 unit load devices on the lower deck, with a total cargo volume of 15,469 cubic feet (438 m³). The freighter has a main deck cargo door and crew exit, while the lower deck features two starboard-side cargo doors and one port-side cargo door. A general market version with onboard freight-handling systems, refrigeration capability, and crew facilities was delivered to Asiana Airlines on August 23, 1996. As of August 2019, 767-300F deliveries stand at 161 with 61 unfilled orders. Airlines operated 222 examples of the freighter variant and freighter conversions in July 2018.
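As a rough plausibility check, and not a Boeing figure, the quoted total volume can be apportioned between the two decks. The LD2 volume of about 120 cubic feet is a commonly cited approximation, and the per-pallet volume is derived from it:

TOTAL_VOLUME_FT3 = 15_469   # quoted total cargo volume, cubic feet
LD2_VOLUME_FT3 = 120        # approximate volume of one LD2 container (assumption)
LOWER_DECK_LD2S = 30
MAIN_DECK_PALLETS = 24

lower_deck = LOWER_DECK_LD2S * LD2_VOLUME_FT3                   # ~3,600 cubic feet
per_pallet = (TOTAL_VOLUME_FT3 - lower_deck) / MAIN_DECK_PALLETS
print(f"lower deck ~{lower_deck:,} cu ft; ~{per_pallet:.0f} cu ft per main-deck pallet position")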
In June 2008, All Nippon Airways took delivery of the first 767-300BCF (Boeing Converted Freighter), a modified passenger-to-freighter model. The conversion work was performed in Singapore by ST Aerospace Services, the first supplier to offer a 767-300BCF program, and involved the addition of a main deck cargo door, strengthened main deck floor, and additional freight monitoring and safety equipment.
Israel Aerospace Industries offers a passenger-to-freighter conversion program called the 767-300BDSF (BEDEK Special Freighter). Wagner Aeronautical also offers a passenger-to-freighter conversion program for 767-300 series aircraft.
The 767-400ER, the first Boeing wide-body jet resulting from two fuselage stretches, entered service with Continental Airlines in 2000. The type features a 21.1-foot (6.43-metre) stretch over the 767-300, for a total length of 201.25 feet (61.3 m). The wingspan is also increased by 14.3 feet (4.36 m) through the addition of raked wingtips. The exit configuration uses six main cabin doors and two smaller exit doors behind the wings, similar to certain 767-300ERs. Other differences include an updated cockpit, redesigned landing gear, and 777-style Signature Interior. Power is provided by uprated General Electric CF6 engines.
The FAA granted approval for the 767-400ER to operate 180-minute ETOPS flights before it entered service. Because its fuel capacity was not increased over preceding models, the 767-400ER has a range of 5,625 nautical miles (10,418 km; 6,473 mi), less than previous extended-range 767s. No 767-400 (non-extended range) version was developed.
The longer-range 767-400ERX was offered in July 2000 before being cancelled a year later, leaving the 767-400ER as the sole version of the largest 767. Boeing dropped the 767-400ER and the -200ER from its pricing list in 2014.
A total of 37 767-400ERs were delivered to the variant's two airline customers, Continental Airlines (now merged with United Airlines) and Delta Air Lines, with no unfilled orders. All 37 examples of the -400ER were in service in July 2018. One additional example was produced as a military testbed, and later sold as a VIP transport. The type's closest competitor is the Airbus A330-200.
Versions of the 767 serve in a number of military and government applications, with responsibilities ranging from airborne surveillance and refueling to cargo and VIP transport. Several military 767s have been derived from the 767-200ER, the longest-range version of the aircraft.
In July 2018, 742 aircraft were in airline service: 73 -200s, 632 -300s, and 37 -400ERs, with 65 -300Fs on order; the largest operators were Delta Air Lines (77), FedEx (60, the largest cargo operator), UPS Airlines (59), United Airlines (51), Japan Airlines (35), and All Nippon Airways (34).
The largest 767 customers by orders placed are FedEx Express (150), Delta Air Lines (117), All Nippon Airways (96), American Airlines (88), and United Airlines (82). Delta and United are the only customers of all -200, -300, and -400ER passenger variants. In July 2015, FedEx placed a firm order for 50 Boeing 767 freighters with deliveries from 2018 to 2023.
Boeing 767 orders and deliveries (cumulative, by year)
As of February 2019, the Boeing 767 has been involved in 60 aviation occurrences, including 19 hull-loss accidents. Seven fatal crashes, including three hijackings, have resulted in a total of 854 occupant fatalities.
The airliner's first fatal crash, Lauda Air Flight 004, occurred near Bangkok on May 26, 1991, following the in-flight deployment of the left engine thrust reverser on a 767-300ER. None of the 223 aboard survived. As a result of this accident, all 767 thrust reversers were deactivated until a redesign was implemented. Investigators determined that an electronically controlled valve, common to late-model Boeing aircraft, was to blame. A new locking device was installed on all affected jetliners, including 767s.
On October 31, 1999, EgyptAir Flight 990, a 767-300ER, crashed off Nantucket, Massachusetts, in international waters, killing all 217 people on board. The United States National Transportation Safety Board (NTSB) determined the probable cause to be a deliberate action by the first officer; Egypt disputed this conclusion.
On April 15, 2002, Air China Flight 129, a 767-200ER, crashed into a hill amid inclement weather while trying to land at Gimhae International Airport in Busan, South Korea. The crash resulted in the death of 129 of the 166 people on board, and the cause was attributed to pilot error.
On February 23, 2019, Atlas Air Flight 3591, a Boeing 767-300ERF air freighter operating for Amazon Air, crashed into Trinity Bay near Houston, Texas, while on descent into George Bush Intercontinental Airport; both pilots and the single passenger were killed. The cause was attributed to pilot error and spatial disorientation.
On November 1, 2011, LOT Polish Airlines Flight 16, a 767-300ER, safely landed at Warsaw Chopin Airport in Warsaw, Poland, after a mechanical failure of the landing gear forced an emergency landing with the landing gear retracted. There were no injuries, but the aircraft involved was damaged and subsequently written off. At the time of the incident, aviation analysts speculated that it may have been the first instance of a complete landing gear failure in the 767's service history. Subsequent investigation determined that while a damaged hose had disabled the aircraft's primary landing gear extension system, an otherwise functional backup system was inoperative due to an accidentally deactivated circuit breaker.
On October 28, 2016, American Airlines Flight 383, a 767-300ER with 161 passengers and 9 crew members, aborted takeoff at Chicago O'Hare Airport following an uncontained failure of the right GE CF6-80C2 engine. The engine failure, which hurled fragments over a considerable distance, caused a fuel leak, resulting in a fire under the right wing; fire and smoke entered the cabin. All passengers and crew evacuated via the emergency slides, with 20 passengers and one flight attendant sustaining minor injuries during the evacuation.
The 767 has been involved in six hijackings, three resulting in loss of life, for a combined total of 282 occupant fatalities. On November 23, 1996, Ethiopian Airlines Flight 961, a 767-200ER, was hijacked and crash-landed in the Indian Ocean near the Comoro Islands after running out of fuel, killing 125 of the 175 people on board; this was a rare example of occupants surviving a ditching on water by a land-based aircraft. Two 767s were involved in the September 11 attacks on the World Trade Center in 2001, resulting in the collapse of both main towers. American Airlines Flight 11, a 767-200ER, crashed into the North Tower, killing all 92 people on board, and United Airlines Flight 175, a 767-200, crashed into the South Tower, killing all 65 on board. In addition, more than 2,600 people were killed in the towers or on the ground. A failed shoe bomb attempt that December involved an American Airlines 767-300ER.
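The combined total follows from the three fatal hijackings described above:

$$125 + 92 + 65 = 282$$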
The 767's first incident was Air Canada Flight 143, a 767-200, on July 23, 1983. The airplane ran out of fuel in-flight and had to glide with both engines out for almost 43 nautical miles (80 km; 49 mi) to an emergency landing at Gimli, Manitoba, Canada. The pilots used the aircraft's ram air turbine to power the hydraulic systems for aerodynamic control. There were no fatalities and only minor injuries. This aircraft was nicknamed "Gimli Glider" after its landing site. The aircraft, registered C-GAUN, continued flying for Air Canada until its retirement in January 2008.
In January 2014, the U.S. Federal Aviation Administration issued a directive that ordered inspections of the elevators on more than 400 767s beginning in March 2014; the focus was on fasteners and other parts that can fail and cause the elevators to jam. The issue was first identified in 2000 and has been the subject of several Boeing service bulletins. The inspections and repairs are required to be completed within six years. The aircraft has also had multiple occurrences of "uncommanded escape slide inflation" during maintenance or operations, and during flight. In late 2015, the FAA issued a preliminary directive to address the issue.
As new 767 variants roll off the assembly line, older models have been retired and converted to cargo use, stored, or scrapped. One complete aircraft, N102DA, the first 767-200 operated by Delta Air Lines and the twelfth example built, was retired from airline service in February 2006 after being repainted in its original 1982 Delta widget livery and given a farewell tour. It was then put on display at the Delta Flight Museum on the Delta corporate campus at the edge of Hartsfield–Jackson Atlanta International Airport. "The Spirit of Delta" remains on public display as of 2022.
In 2013, a Brazilian entrepreneur purchased a 767-200 that had operated for the now-defunct carrier Transbrasil under the registration PT-TAC. The aircraft, which was sold at a bankruptcy auction, was placed on outdoor display in Taguatinga as part of a proposed commercial development; as of 2019, however, the development had not come to fruition. The aircraft, devoid of engines and landing gear, has deteriorated from weather exposure and vandalism, but remains publicly accessible.
"title": "Aircraft on display"
},
{
"paragraph_id": 78,
"text": "In 2013 a Brazilian entrepreneur purchased a 767-200 that had operated for the now-defunct carrier Transbrasil under the registration PT-TAC. The aircraft, which was sold at a bankruptcy auction, was placed on outdoor display in Taguatinga as part of a proposed commercial development. As of 2019, however, the development has not come to fruition. The aircraft is devoid of engines or landing gear, has deteriorated due to weather exposure and acts of vandalism, but remains publicly accessible to view.",
"title": "Aircraft on display"
},
{
"paragraph_id": 79,
"text": "Related development",
"title": "See also"
},
{
"paragraph_id": 80,
"text": "Aircraft of comparable role, configuration, and era",
"title": "See also"
},
{
"paragraph_id": 81,
"text": "Related lists",
"title": "See also"
}
] | The Boeing 767 is an American wide-body aircraft developed and manufactured by Boeing Commercial Airplanes. The aircraft was launched as the 7X7 program on July 14, 1978, the prototype first flew on September 26, 1981, and it was certified on July 30, 1982. The initial 767-200 variant entered service on September 8, 1982, with United Airlines, and the extended-range 767-200ER in 1984. It was stretched into the 767-300 in October 1986, followed by the 767-300ER in 1988, the most popular variant.
The 767-300F, a production freighter version, debuted in October 1995. It was stretched again into the 767-400ER from September 2000. Conceived to complement the larger 747, it has a seven-abreast cross-section, accommodating smaller LD2 ULD cargo containers.
The 767 is Boeing's first wide-body twinjet, powered by General Electric CF6, Rolls-Royce RB211, or Pratt & Whitney JT9D turbofans. JT9D engines were eventually replaced by PW4000 engines.
The aircraft has a conventional tail and a supercritical wing for reduced aerodynamic drag.
Its two-crew glass cockpit, a first for a Boeing airliner, was developed jointly with that of the narrow-body 757, allowing a common pilot type rating. Studies for a higher-capacity 767 in 1986 led Boeing to develop the larger 777 twinjet, introduced in June 1995. The 159-foot-long (48.5 m) 767-200 typically seats 216 passengers over 3,900 nautical miles (nmi), while the 767-200ER seats 181 over a 6,590 nmi range.
The 180-foot-long (54.9 m) 767-300 typically seats 269 passengers over 3,900 nmi, while the 767-300ER seats 218 over 5,980 nmi.
The 767-300F can haul 116,000 lb (52.7 t) over 3,225 nmi, and the 201.3-foot-long (61.37 m) 767-400ER typically seats 245 passengers over 5,625 nmi. Military derivatives include the E-767 for surveillance and the KC-767 and KC-46 aerial tankers. Initially marketed for transcontinental routes, a loosening of ETOPS rules starting in 1985 allowed the aircraft to operate transatlantic flights.
A total of 742 of these aircraft were in service in July 2018, with Delta Air Lines being the largest operator with 77 aircraft in its fleet. As of November 2023, Boeing has received 1,407 orders from 74 customers, of which 1,296 airplanes have been delivered, while the remaining orders are for cargo or tanker variants. Competitors have included the Airbus A300, A310, and A330-200. Its successor, the 787 Dreamliner, entered service in 2011. | 2001-09-11T14:58:16Z | 2023-12-26T05:42:12Z | [
"Template:Clear",
"Template:Reflist",
"Template:Infobox aircraft begin",
"Template:As of",
"Template:Efn",
"Template:Cite web",
"Template:Dead link",
"Template:Refbegin",
"Template:Boeing airliners",
"Template:Cvt",
"Template:Inflation/year",
"Template:Aircontent",
"Template:Boeing 767 related",
"Template:Boeing 7x7 timeline",
"Template:Boeing model numbers",
"Template:Format price",
"Template:Anchor",
"Template:Harvnb",
"Template:Cite magazine",
"Template:Abbr",
"Template:Portal",
"Template:Notelist",
"Template:Commons category",
"Template:Rp",
"Template:Main",
"Template:Nowrap",
"Template:See also",
"Template:Cite journal",
"Template:Cite news",
"Template:Cite book",
"Template:Use mdy dates",
"Template:Infobox aircraft type",
"Template:Authority control",
"Template:Short description",
"Template:Featured article",
"Template:Timeline Legend",
"Template:Cite press release",
"Template:Refend",
"Template:Official website",
"Template:Not a typo",
"Template:Convert"
] | https://en.wikipedia.org/wiki/Boeing_767 |
4,166 | Bill Walsh (American football coach) | William Ernest Walsh (November 30, 1931 – July 30, 2007) was an American professional and college football coach. He served as head coach of the San Francisco 49ers and the Stanford Cardinal, during which time he popularized the West Coast offense. After retiring from the 49ers, Walsh worked as a sports broadcaster for several years and then returned as head coach at Stanford for three seasons.
Walsh went 102–63–1 (wins-losses-ties) with the 49ers, winning 10 of his 14 postseason games along with six division titles, three NFC Championship titles, and three Super Bowls. He was named NFL Coach of the Year in 1981 and 1984. In 1993, he was elected to the Pro Football Hall of Fame. He is widely considered amongst the greatest coaches in NFL history.
Walsh was born in Los Angeles. He attended Hayward High School in Hayward in the San Francisco Bay Area, where he played running back. Walsh played quarterback at the College of San Mateo for two seasons. (Both John Madden and Walsh played and coached at the College of San Mateo early in their careers.) After playing at the College of San Mateo, Walsh transferred to San José State University, where he played tight end and defensive end. He also participated in intercollegiate boxing, winning the golden glove.
Walsh graduated from San Jose State with a bachelor's degree in physical education in 1955. After two years in the U.S. Army participating on their boxing team, Walsh built a championship team at Washington High School in Fremont before becoming an assistant coach at Cal, Stanford and then the Oakland Raiders in 1966.
He served under Bob Bronzan as a graduate assistant coach on the Spartans football coaching staff and graduated with a master's degree in physical education from San Jose State in 1959. His master's thesis was entitled Flank Formation Football -- Stress: Defense.
Following graduation, Walsh coached the football and swim teams at Washington High School in Fremont, California. While there he interviewed for an assistant coaching position with the new head coach of the University of California, Berkeley California Golden Bears football team, Marv Levy.
"I was very impressed, individually, by his knowledge, by his intelligence, by his personality, and hired him," Levy said. Levy and Walsh, two future NFL Hall of Famers, would never produce a winning season for the Golden Bears.
Leaving Berkeley, Walsh did a stint at Stanford University as an assistant coach of its Cardinal football team before beginning his pro coaching career.
Walsh began his pro coaching career in 1966 as an assistant with the AFL's Oakland Raiders. There he was versed in the downfield-oriented "vertical" passing offense favored by Al Davis, an acolyte of Sid Gillman.
Walsh left the Raiders the next year to become the head coach and general manager of the San Jose Apaches of the Continental Football League (CFL). He led the Apaches to second place in the Pacific Division, but the team ceased all football operations prior to the start of the 1968 CFL season.
In 1968, Walsh joined the staff of head coach Paul Brown of the AFL expansion Cincinnati Bengals, where he coached wide receivers from 1968 to 1970. It was there that Walsh developed the philosophy now known as the "West Coast offense". Cincinnati's new quarterback, Virgil Carter, was known for his great mobility and accuracy but lacked the strong arm necessary to throw deep passes. To suit his strengths, Walsh suggested replacing the downfield-based "vertical passing scheme" he had learned during his time with the Raiders with one featuring a "horizontal" approach that relied on quick, short throws, often spreading the ball across the entire width of the field. In 1971 Walsh was given the additional responsibility of coaching the quarterbacks, and Carter went on to lead the league in pass completion percentage.
Ken Anderson eventually replaced Carter as starting quarterback, and, together with star wide receiver Isaac Curtis, produced a consistent, effective offensive attack.
When Brown retired as head coach following the 1975 season and appointed Bill "Tiger" Johnson as his successor, Walsh resigned and served as an assistant coach in 1976 for the San Diego Chargers under head coach Tommy Prothro. In a 2006 interview, Walsh claimed that during his tenure with the Bengals, Brown "worked against my candidacy" to be a head coach anywhere in the league. "All the way through I had opportunities, and I never knew about them", Walsh said. "And then when I left him, he called whoever he thought was necessary to keep me out of the NFL." Walsh also claimed that Brown kept talking him down any time Brown was called by NFL teams considering hiring Walsh as a head coach.
In 1977, Walsh was hired by Stanford University as the head coach of its Cardinal football team, where he stayed for two seasons. He was quite successful, with his teams posting a 9–3 record in 1977 with a win in the Sun Bowl, and going 8–4 in 1978 with a win in the Bluebonnet Bowl. His notable players at Stanford included quarterbacks Guy Benjamin, Steve Dils, and John Elway, wide receivers James Lofton and Ken Margerum, linebacker Gordy Ceresino, and running back Darrin Nelson. Walsh was the Pac-8 Conference Coach of the Year in 1977.
On January 9, 1979, Walsh resigned as head coach at Stanford, and San Francisco 49ers team owner Edward J. DeBartolo, Jr. fired head coach Fred O'Connor and general manager Joe Thomas following a 2–14 season in 1978. Walsh was appointed head coach of the 49ers the next day.
The 49ers went 2–14 again in 1979. Hidden behind that record were organizational changes made by Walsh that set the team on a better course, including selecting Notre Dame quarterback Joe Montana in the third round of the 1979 NFL Draft.
In 1980 starting quarterback Steve DeBerg got the 49ers off to a 3–0 start, but after a week 6 blowout loss to the Dallas Cowboys by a score of 59–14, Walsh gave Montana a chance to start. On December 7 vs. the New Orleans Saints, the second-year player brought the 49ers back from a 35–7 halftime deficit to a 38–35 overtime win. In spite of this switch, the team struggled to a 6–10 finish – a record that belied a championship team in the making.
In 1981, Walsh's third season as head coach, the 49ers went 13–3 in the regular season. Key victories were two wins each over the Los Angeles Rams and the Dallas Cowboys. The Rams were only two seasons removed from a Super Bowl appearance, and had dominated the series with the 49ers since 1967, winning 23, losing 3 and tying 1. San Francisco's two wins over the Rams in 1981 marked a shift of dominance in favor of the 49ers that lasted until 1998, with 30 wins (including 17 consecutively) against only 6 defeats. The 49ers blew out the Cowboys in week 6 of the regular season. On Monday Night Football that week, the win was not included in the halftime highlights. Walsh felt that this was because the Cowboys were scheduled to play the Rams the next week in a Sunday night game and that showing the highlights of the 49ers' win would potentially hurt the game's ratings. However, Walsh used this as a motivating factor for his team, who felt they were disrespected.
The 49ers faced the Cowboys again in the NFC title game. The contest was very close, and in the fourth quarter Walsh called a series of running plays as the 49ers marched down the field against the Cowboys' prevent defense, which had been expecting the 49ers to mainly pass. The 49ers came from behind to win the game on Joe Montana's pass completion to Dwight Clark for a touchdown, a play that came to be known simply as The Catch, propelling Walsh to his first appearance in a Super Bowl. Walsh would later write that the 49ers' two wins over the Rams showed a shift of power in their division, while the wins over the Cowboys showed a shift of power in the conference.
Two weeks later, on January 24, 1982, San Francisco faced the Cincinnati Bengals in Super Bowl XVI, winning 26–21 for the team's first NFL championship. Only a year removed from back-to-back two-win seasons, the 49ers had risen from the cellar to the top of the NFL in just two seasons. What came to be known as the West Coast offense developed by Walsh had proven a winner.
In all, Walsh served as 49ers head coach for 10 years, winning three Super Bowl championships, in the 1981, 1984, and 1988 seasons, and establishing a new NFL record.
Walsh had a disciplined approach to game-planning, famously scripting the first 10–15 offensive plays before the start of each game. His innovative play calling and design earned him the nickname "The Genius". In the ten-year span under Walsh, San Francisco scored 3,714 points (24.4 per game), the most of any team in the league.
In addition to Joe Montana, Walsh drafted Ronnie Lott, Charles Haley, and Jerry Rice, each one going on to the Pro Football Hall of Fame. He also traded a 2nd and 4th round pick in the 1987 draft for Steve Young, who took over from Montana, led the team to Super Bowl success, and was enshrined in Canton after his playing career. Walsh's success at every level of football, especially with the 49ers, earned him his own ticket to Canton in 1993.
Walsh's upline coaching tree included working as assistant for American Football League great and Hall of Fame head coach Al Davis and NFL legend and Hall of Famer Paul Brown, and, through Davis, AFL great and Hall of Fame head coach Sid Gillman of the then AFL Los Angeles/San Diego Chargers.
Tree updated through December 9, 2015.
Many Walsh assistants went on to become head coaches, including George Seifert, Mike Holmgren, Ray Rhodes, and Dennis Green. Seifert succeeded Walsh as 49ers head coach, and guided San Francisco to victories in Super Bowl XXIV and Super Bowl XXIX. Holmgren won a Super Bowl with the Green Bay Packers, and made 3 Super Bowl appearances as a head coach: 2 with the Packers, and another with the Seattle Seahawks. These coaches in turn have their own disciples who have used Walsh's West Coast system, such as former Denver Broncos head coach Mike Shanahan and former Houston Texans head coach Gary Kubiak. Mike Shanahan was an offensive coordinator under George Seifert and went on to win Super Bowl XXXII and Super Bowl XXXIII during his time as head coach of the Denver Broncos. Kubiak was first a quarterback coach with the 49ers, and then offensive coordinator for Shanahan with the Broncos. In 2015, he became the Broncos' head coach and led Denver to victory in Super Bowl 50. Dennis Green trained Tony Dungy, who won a Super Bowl with the Indianapolis Colts, and Brian Billick, along with Billick's brother-in-law, linebackers coach Mike Smith. Billick won a Super Bowl as head coach of the Baltimore Ravens.
Mike Holmgren trained many of his assistants to become head coaches, including Jon Gruden and Andy Reid. Gruden won a Super Bowl with the Tampa Bay Buccaneers. Reid served as head coach of the Philadelphia Eagles from 1999 to 2012, and guided the Eagles to multiple winning seasons and numerous playoff appearances. Since 2013, Reid has served as head coach of the Kansas City Chiefs, finally winning a Super Bowl when his Chiefs defeated the San Francisco 49ers in Super Bowl LIV. In addition, Marc Trestman, former head coach of the Chicago Bears, served as offensive coordinator under Seifert in the 1990s. Gruden in turn trained Mike Tomlin, who led the Pittsburgh Steelers to their sixth Super Bowl championship, and Jim Harbaugh, whose 49ers faced the Baltimore Ravens, coached by his brother John Harbaugh (himself trained by Reid), in Super Bowl XLVII, which the Ravens won for their second championship.
Bill Walsh was viewed as a strong advocate for African-American head coaches in the NFL and NCAA, and his influence helped open coaching opportunities to African-American candidates. Along with Ray Rhodes and Dennis Green, Tyrone Willingham became the head coach at Stanford, then later Notre Dame and Washington. One of Mike Shanahan's assistants, Karl Dorrell, went on to be the head coach at UCLA. Walsh directly helped propel Dennis Green into the NFL head coaching ranks by offering to take over the head coaching job at Stanford himself.
After leaving the coaching ranks immediately following his team's victory in Super Bowl XXIII, Walsh went to work as a broadcaster for NBC, teaming with Dick Enberg to form the lead broadcasting team, replacing Merlin Olsen.
During his time with NBC, rumors began to surface that Walsh would coach again in the NFL. There were at least two known instances.
First, according to a February 2015 article by Mike Florio of NBC Sports, after a 5–11 season in 1989, the Patriots fired Raymond Berry and unsuccessfully attempted to lure Walsh to Foxborough to become head coach and general manager. When that failed, New England promoted defensive coordinator Rod Rust; the team split its first two games and then lost 14 straight in 1990.
Second, late in the 1990 season, Walsh was rumored to become Tampa Bay's next head coach and general manager after the team fired Ray Perkins and promoted Richard Williamson on an interim basis. Part of the speculation was fueled by the fact that Walsh's contract with NBC, which ran for 1989 and 1990, would soon be up for renewal, to say nothing of the pressure Hugh Culverhouse faced to increase fan support and to fill the seats at Tampa Stadium. However, less than a week after Super Bowl XXV, Walsh not only declined Tampa Bay's offer, but he and NBC agreed on a contract extension. Walsh would continue in his role with NBC for 1991. Meanwhile, after unsuccessfully courting recently fired Eagles coach Buddy Ryan and then-Giants defensive coordinator Bill Belichick to man the sidelines for Tampa Bay in 1991, the Bucs stuck with Williamson. Under Williamson's leadership, Tampa Bay won only three games in 1991.
Walsh did return to Stanford as head coach in 1992, leading the Cardinal to a 10–3 record and a Pacific-10 Conference co-championship. Stanford finished the season with a victory over Penn State in the Blockbuster Bowl on January 1, 1993, and a #9 ranking in the final AP Poll. In 1994, after consecutive losing seasons, Walsh left Stanford and retired from coaching.
In 1996, Walsh returned to the 49ers as an administrative aide. He served as the team's vice president and general manager from 1999 to 2001 and was a special consultant to the team for three years afterwards.
In 2004, Walsh was appointed as special assistant to the athletic director at Stanford. In 2005, after then-athletic director Ted Leland stepped down, Walsh was named interim athletic director. He also acted as a consultant for his alma mater San Jose State University in their search for an athletic director and head football coach in 2005.
Walsh was also the author of three books, a motivational speaker, and taught classes at the Stanford Graduate School of Business.
Walsh was a board member for the Lott IMPACT Trophy, which is named after Pro Football Hall of Fame defensive back Ronnie Lott, and is awarded annually to college football's Defensive IMPACT Player of the Year. Walsh served as a keynote speaker at the award's banquet.
Walsh married his college sweetheart, Geri, and had three children: Steve, Craig, and Elizabeth.
Bill Walsh died of leukemia on July 30, 2007, at his home in Woodside, California.
Following Walsh's death, the playing field at the former Candlestick Park was renamed "Bill Walsh Field". Additionally, the regular San Jose State versus Stanford football game was renamed the "Bill Walsh Legacy Game". | [
{
"paragraph_id": 0,
"text": "William Ernest Walsh (November 30, 1931 – July 30, 2007) was an American professional and college football coach. He served as head coach of the San Francisco 49ers and the Stanford Cardinal, during which time he popularized the West Coast offense. After retiring from the 49ers, Walsh worked as a sports broadcaster for several years and then returned as head coach at Stanford for three seasons.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Walsh went 102–63–1 (wins-losses-ties) with the 49ers, winning 10 of his 14 postseason games along with six division titles, three NFC Championship titles, and three Super Bowls. He was named NFL Coach of the Year in 1981 and 1984. In 1993, he was elected to the Pro Football Hall of Fame. He is widely considered amongst the greatest coaches in NFL history.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Walsh was born in Los Angeles. He attended Hayward High School in Hayward in the San Francisco Bay Area, where he played running back. Walsh played quarterback at the College of San Mateo for two seasons. (Both John Madden and Walsh played and coached at the College of San Mateo early in their careers.) After playing at the College of San Mateo, Walsh transferred to San José State University, where he played tight end and defensive end. He also participated in intercollegiate boxing, winning the golden glove.",
"title": "Early life"
},
{
"paragraph_id": 3,
"text": "Walsh graduated from San Jose State with a bachelor's degree in physical education in 1955. After two years in the U.S. Army participating on their boxing team, Walsh built a championship team at Washington High School in Fremont before becoming an assistant coach at Cal, Stanford and then the Oakland Raiders in 1966.",
"title": "Early life"
},
{
"paragraph_id": 4,
"text": "He served under Bob Bronzan as a graduate assistant coach on the Spartans football coaching staff and graduated with a master's degree in physical education from San Jose State in 1959. His master's thesis was entitled Flank Formation Football -- Stress: Defense. Thesis 796.W228f.",
"title": "College coaching career"
},
{
"paragraph_id": 5,
"text": "Following graduation, Walsh coached the football and swim teams at Washington High School in Fremont, California. While there he interviewed for an assistant coaching position with the new head coach of the University of California, Berkeley California Golden Bears football team, Marv Levy.",
"title": "College coaching career"
},
{
"paragraph_id": 6,
"text": "\"I was very impressed, individually, by his knowledge, by his intelligence, by his personality, and hired him,\" Levy said. Levy and Walsh, two future NFL Hall of Famers, would never produce a winning season for the Golden Bears.",
"title": "College coaching career"
},
{
"paragraph_id": 7,
"text": "Leaving Berkeley, Walsh did a stint at Stanford University as an assistant coach of its Cardinal football team before beginning his pro coaching career.",
"title": "College coaching career"
},
{
"paragraph_id": 8,
"text": "Walsh began his pro coaching career in 1966 as an assistant with the AFL's Oakland Raiders. There he was versed in the downfield-oriented \"vertical\" passing offense favored by Al Davis, an acolyte of Sid Gillman.",
"title": "Professional coaching career"
},
{
"paragraph_id": 9,
"text": "Walsh left the Raiders the next year to become the head coach and general manager of the San Jose Apaches of the Continental Football League (CFL). He led the Apaches to second place in the Pacific Division, but the team ceased all football operations prior to the start of the 1968 CFL season.",
"title": "Professional coaching career"
},
{
"paragraph_id": 10,
"text": "In 1968, Walsh joined the staff of head coach Paul Brown of the AFL expansion Cincinnati Bengals, where he coached wide receivers from 1968 to 1970. It was there that Walsh developed the philosophy now known as the \"West Coast offense\". Cincinnati's new quarterback, Virgil Carter, was known for his great mobility and accuracy but lacked a strong arm necessary to throw deep passes. To suit his strengths, Walsh suggested a modification of the downfield based \"vertical passing scheme\" he had learned during his time with the Raiders with one featuring a \"horizontal\" approach that relied on quick, short throws, often spreading the ball across the entire width of the field. In 1971 Walsh was given the additional responsibility of coaching the quarterbacks, and Carter went on to lead the league in pass completion percentage.",
"title": "Professional coaching career"
},
{
"paragraph_id": 11,
"text": "Ken Anderson eventually replaced Carter as starting quarterback, and, together with star wide receiver Isaac Curtis, produced a consistent, effective offensive attack.",
"title": "Professional coaching career"
},
{
"paragraph_id": 12,
"text": "When Brown retired as head coach following the 1975 season and appointed Bill \"Tiger\" Johnson as his successor, Walsh resigned and served as an assistant coach in 1976 for the San Diego Chargers under head coach Tommy Prothro. In a 2006 interview, Walsh claimed that during his tenure with the Bengals, Brown \"worked against my candidacy\" to be a head coach anywhere in the league. \"All the way through I had opportunities, and I never knew about them\", Walsh said. \"And then when I left him, he called whoever he thought was necessary to keep me out of the NFL.\" Walsh also claimed that Brown kept talking him down any time Brown was called by NFL teams considering hiring Walsh as a head coach.",
"title": "Professional coaching career"
},
{
"paragraph_id": 13,
"text": "In 1977, Walsh was hired by Stanford University as the head coach of its Cardinal football team, where he stayed for two seasons. He was quite successful, with his teams posting a 9–3 record in 1977 with a win in the Sun Bowl, and going 8–4 in 1978 with a win in the Bluebonnet Bowl. His notable players at Stanford included quarterbacks Guy Benjamin, Steve Dils, and John Elway, wide receivers James Lofton and Ken Margerum, linebacker Gordy Ceresino, and running back Darrin Nelson. Walsh was the Pac-8 Conference Coach of the Year in 1977.",
"title": "Professional coaching career"
},
{
"paragraph_id": 14,
"text": "On January 9, 1979, Walsh resigned as head coach at Stanford, and San Francisco 49ers team owner Edward J. DeBartolo, Jr. fired head coach Fred O'Connor and general manager Joe Thomas following a 2–14 in 1978 season. Walsh was appointed head coach of the 49ers the next day.",
"title": "Professional coaching career"
},
{
"paragraph_id": 15,
"text": "The 49ers went 2-14 again in 1979. Hidden behind that record were organizational changes made by Walsh that set the team on a better course, including selecting Notre Dame quarterback Joe Montana in the third round of the 1979 NFL Draft.",
"title": "Professional coaching career"
},
{
"paragraph_id": 16,
"text": "In 1980 starting quarterback Steve DeBerg got the 49ers off to a 3–0 start, but after a week 6 blowout loss to the Dallas Cowboys by a score of 59–14, Walsh gave Montana a chance to start. On December 7 vs. the New Orleans Saints, the second-year player brought the 49ers back from a 35–7 halftime deficit to a 38–35 overtime win. In spite of this switch, the team struggled to a 6–10 finish – a record that belied a championship team in the making.",
"title": "Professional coaching career"
},
{
"paragraph_id": 17,
"text": "In 1981, Walsh's efforts as head coach resulted in wins during a 13–3 regular season. Key victories were two wins each over the Los Angeles Rams and the Dallas Cowboys. The Rams were only two seasons removed from a Super Bowl appearance, and had dominated the series with the 49ers since 1967, winning 23, losing 3 and tying 1. San Francisco's two wins over the Rams in 1981 marked the shift of dominance in favor of the 49ers that lasted until 1998 with 30 wins (including 17 consecutively) against only 6 defeats. The 49ers blew out the Cowboys in week 6 of the regular season. On Monday Night Football that week, the win was not included in the halftime highlights. Walsh felt that this was because the Cowboys were scheduled to play the Rams the next week in a Sunday night game and that showing the highlights of the 49ers' win would potentially hurt the game's ratings. However, Walsh used this as a motivating factor for his team, who felt they were disrespected.",
"title": "Professional coaching career"
},
{
"paragraph_id": 18,
"text": "The 49ers faced the Cowboys again in the NFC title game. The contest was very close, and in the fourth quarter Walsh called a series of running plays as the 49ers marched down the field against the Cowboys' prevent defense, which had been expecting the 49ers to mainly pass. The 49ers came from behind to win the game on Joe Montana's pass completion to Dwight Clark for a touchdown, a play that came to be known simply as The Catch, propelling Walsh to his first appearance in a Super Bowl. Walsh would later write that the 49ers' two wins over the Rams showed a shift of power in their division, while the wins over the Cowboys showed a shift of power in the conference.",
"title": "Professional coaching career"
},
{
"paragraph_id": 19,
"text": "Two weeks later, on January 24, 1982, San Francisco faced the Cincinnati Bengals in Super Bowl XVI, winning 26–21 for the team's first NFL championship. Only a year removed from back-to-back two-win seasons, the 49ers had risen from the cellar to the top of the NFL in just two seasons. What came to be known as the West Coast offense developed by Walsh had proven a winner.",
"title": "Professional coaching career"
},
{
"paragraph_id": 20,
"text": "In all, Walsh served as 49ers head coach for 10 years, winning three Super Bowl championships, in the 1981, 1984, and 1988 seasons, and establishing a new NFL record.",
"title": "Professional coaching career"
},
{
"paragraph_id": 21,
"text": "Walsh had a disciplined approach to game-planning, famously scripting the first 10–15 offensive plays before the start of each game. His innovative play calling and design earned him the nickname \"The Genius\". In the ten-year span under Walsh, San Francisco scored 3,714 points (24.4 per game), the most of any team in the league.",
"title": "Professional coaching career"
},
{
"paragraph_id": 22,
"text": "In addition to Joe Montana, Walsh drafted Ronnie Lott, Charles Haley, and Jerry Rice, each one going on to the Pro Football Hall of Fame. He also traded a 2nd and 4th round pick in the 1987 draft for Steve Young, who took over from Montana, led the team to Super Bowl success, and was enshrined in Canton after his playing career. Walsh's success at every level of football, especially with the 49ers, earned him his own ticket to Canton in 1993.",
"title": "Professional coaching career"
},
{
"paragraph_id": 23,
"text": "Walsh's upline coaching tree included working as assistant for American Football League great and Hall of Fame head coach Al Davis and NFL legend and Hall of Famer Paul Brown, and, through Davis, AFL great and Hall of Fame head coach Sid Gillman of the then AFL Los Angeles/San Diego Chargers.",
"title": "Professional coaching career"
},
{
"paragraph_id": 24,
"text": "Tree updated through December 9, 2015.",
"title": "Professional coaching career"
},
{
"paragraph_id": 25,
"text": "Many Walsh assistants went on to become head coaches,. including George Seifert, Mike Holmgren, Ray Rhodes, and Dennis Green. Seifert succeeded Walsh as 49ers head coach, and guided San Francisco to victories in Super Bowl XXIV and Super Bowl XXIX. Holmgren won a Super Bowl with the Green Bay Packers, and made 3 Super Bowl appearances as a head coach: 2 with the Packers, and another with the Seattle Seahawks. These coaches in turn have their own disciples who have used Walsh's West Coast system, such as former Denver Broncos head coach Mike Shanahan and former Houston Texans head coach Gary Kubiak. Mike Shanahan was an offensive coordinator under George Seifert and went on to win Super Bowl XXXII and Super Bowl XXXIII during his time as head coach of the Denver Broncos. Kubiak was first a quarterback coach with the 49ers, and then offensive coordinator for Shanahan with the Broncos. In 2015, he became the Broncos' head coach and led Denver to victory in Super Bowl 50. Dennis Green trained Tony Dungy, who won a Super Bowl with the Indianapolis Colts, and Brian Billick with his brother-in law and linebackers coach Mike Smith. Billick won a Super Bowl as head coach of the Baltimore Ravens.",
"title": "Professional coaching career"
},
{
"paragraph_id": 26,
"text": "Mike Holmgren trained many of his assistants to become head coaches, including Jon Gruden and Andy Reid. Gruden won a Super Bowl with the Tampa Bay Buccaneers. Reid served as head coach of the Philadelphia Eagles from 1999 to 2012, and guided the Eagles to multiple winning seasons and numerous playoff appearances. Ever since 2013, Reid has served as head coach of the Kansas City Chiefs. He was finally able to win a Super Bowl, when his Chiefs defeated the San Francisco 49ers in Super Bowl LIV. In addition to this, Marc Trestman, former head coach of the Chicago Bears, served as offensive coordinator under Seifert in the 90's. Gruden himself would train Mike Tomlin, who led the Pittsburgh Steelers to their sixth Super Bowl championship, and Jim Harbaugh, whose 49ers would face his brother, John Harbaugh, whom Reid himself trained, and the Baltimore Ravens at Super Bowl XLVII, which marked the Ravens' second World Championship.",
"title": "Professional coaching career"
},
{
"paragraph_id": 27,
"text": "Bill Walsh was viewed as a strong advocate for African-American head coaches in the NFL and NCAA. Thus, the impact of Walsh also changed the NFL into an equal opportunity for African-American coaches. Along with Ray Rhodes and Dennis Green, Tyrone Willingham became the head coach at Stanford, then later Notre Dame and Washington. One of Mike Shanahan's assistants, Karl Dorrell, went on to be the head coach at UCLA. Walsh directly helped propel Dennis Green into the NFL head coaching ranks by offering to take on the head coaching job at Stanford.",
"title": "Professional coaching career"
},
{
"paragraph_id": 28,
"text": "After leaving the coaching ranks immediately following his team's victory in Super Bowl XXIII, Walsh went to work as a broadcaster for NBC, teaming with Dick Enberg to form the lead broadcasting team, replacing Merlin Olsen.",
"title": "Professional coaching career"
},
{
"paragraph_id": 29,
"text": "During his time with NBC, rumors began to surface that Walsh would coach again in the NFL. There were at least two known instances.",
"title": "Professional coaching career"
},
{
"paragraph_id": 30,
"text": "First, according to a February 2015 article by Mike Florio of NBC Sports, after a 5–11 season in 1989, the Patriots fired Raymond Berry and unsuccessfully attempted to lure Walsh to Foxborough to become head coach and general manager. When that failed, New England promoted defensive coordinator Rod Rust; the team split its first two games and then lost 14 straight in 1990.",
"title": "Professional coaching career"
},
{
"paragraph_id": 31,
"text": "Second, late in the 1990 season, Walsh was rumored to become Tampa Bay's next head coach and general manager after the team fired Ray Perkins and promoted Richard Williamson on an interim basis. Part of the speculation was fueled by the fact that Walsh's contract with NBC, which ran for 1989 and 1990, would soon be up for renewal, to say nothing of the pressure Hugh Culverhouse faced to increase fan support and to fill the seats at Tampa Stadium. However, less than a week after Super Bowl XXV, Walsh not only declined Tampa Bay's offer, but he and NBC agreed on a contract extension. Walsh would continue in his role with NBC for 1991. Meanwhile, after unsuccessfully courting then-recently fired Eagles coach Buddy Ryan or Giants then-defensive coordinator Bill Belichick to man the sidelines for Tampa Bay in 1991, the Bucs stuck with Williamson. Under Williamson's leadership, Tampa Bay won only three games in 1991.",
"title": "Professional coaching career"
},
{
"paragraph_id": 32,
"text": "Walsh did return to Stanford as head coach in 1992, leading the Cardinal to a 10–3 record and a Pacific-10 Conference co-championship. Stanford finished the season with a victory over Penn State in the Blockbuster Bowl on January 1, 1993, and a #9 ranking in the final AP Poll. In 1994, after consecutive losing seasons, Walsh left Stanford and retired from coaching.",
"title": "Professional coaching career"
},
{
"paragraph_id": 33,
"text": "In 1996 Walsh returned to the 49ers as an administrative aide Walsh was the vice president and general manager for the 49ers from 1999 to 2001 and was a special consultant to the team for three years afterwards.",
"title": "Professional coaching career"
},
{
"paragraph_id": 34,
"text": "In 2004, Walsh was appointed as special assistant to the athletic director at Stanford. In 2005, after then-athletic director Ted Leland stepped down, Walsh was named interim athletic director. He also acted as a consultant for his alma mater San Jose State University in their search for an athletic director and Head Football Coach in 2005.",
"title": "Professional coaching career"
},
{
"paragraph_id": 35,
"text": "Walsh was also the author of three books, a motivational speaker, and taught classes at the Stanford Graduate School of Business.",
"title": "Professional coaching career"
},
{
"paragraph_id": 36,
"text": "Walsh was a board member for the Lott IMPACT Trophy, which is named after Pro Football Hall of Fame defensive back Ronnie Lott, and is awarded annually to college football's Defensive IMPACT Player of the Year. Walsh served as a keynote speaker at the award's banquet.",
"title": "Professional coaching career"
},
{
"paragraph_id": 37,
"text": "Bill married his college sweetheart Geri, and had 3 children; Steve, Craig and Elizabeth.",
"title": "Personal life"
},
{
"paragraph_id": 38,
"text": "Bill Walsh died of leukemia on July 30, 2007, at his home in Woodside, California.",
"title": "Death"
},
{
"paragraph_id": 39,
"text": "Following Walsh's death, the playing field at the former Candlestick Park was renamed \"Bill Walsh Field\". Additionally, the regular San Jose State versus Stanford football game was renamed the \"Bill Walsh Legacy Game\".",
"title": "Death"
}
] | William Ernest Walsh was an American professional and college football coach. He served as head coach of the San Francisco 49ers and the Stanford Cardinal, during which time he popularized the West Coast offense. After retiring from the 49ers, Walsh worked as a sports broadcaster for several years and then returned as head coach at Stanford for three seasons. Walsh went 102–63–1 (wins-losses-ties) with the 49ers, winning 10 of his 14 postseason games along with six division titles, three NFC Championship titles, and three Super Bowls. He was named NFL Coach of the Year in 1981 and 1984. In 1993, he was elected to the Pro Football Hall of Fame. He is widely considered amongst the greatest coaches in NFL history. | 2001-09-11T19:12:33Z | 2023-12-27T14:45:21Z | [
"Template:Use American English",
"Template:CFB Yearly Record Start",
"Template:CFB Yearly Record End",
"Template:S-aft",
"Template:Short description",
"Template:Infobox NFL biography",
"Template:Reflist",
"Template:Cite news",
"Template:Use mdy dates",
"Template:Authority control",
"Template:ISBN",
"Template:Cite episode",
"Template:Cite book",
"Template:Profootballhof",
"Template:S-bef",
"Template:S-ttl",
"Template:CFB Yearly Record Subhead",
"Template:Small",
"Template:Expand section",
"Template:CFB Yearly Record Entry",
"Template:Nowrap",
"Template:Navboxes",
"Template:Cite press release",
"Template:S-start",
"Template:S-end",
"Template:CFB Yearly Record Subtotal",
"Template:Cite web"
] | https://en.wikipedia.org/wiki/Bill_Walsh_(American_football_coach) |
4,168 | Utility knife | A utility knife is any type of knife used for general manual work purposes. Such knives were originally fixed-blade knives with durable cutting edges suitable for rough work such as cutting cordage, cutting/scraping hides, butchering animals, cleaning fish scales, reshaping timber, and other tasks. Craft knives are small utility knives used as precision-oriented tools for finer, more delicate tasks such as carving and papercutting.
Today, the term "utility knife" also includes small folding-, retractable- and/or replaceable-razor blade knives suited for use in the general workplace or in the construction industry. The latter type is sometimes generically called a Stanley knife, after a prominent brand.
There is also a utility knife for kitchen use, which is sized between a chef's knife and paring knife.
The fixed-blade utility knife was developed some 500,000 years ago, when human ancestors began to make stone knives. These knives were general-purpose tools, designed for cutting and shaping wooden implements, scraping hides, preparing food, and for other utilitarian purposes.
By the 19th century the fixed-blade utility knife had evolved into a steel-bladed outdoors field knife capable of butchering game, cutting wood, and preparing campfires and meals. With the invention of the backspring, pocket-size utility knives were introduced with folding blades and other folding tools designed to increase the utility of the overall design. The folding pocketknife and utility tool is typified by the Camper or Boy Scout pocketknife, the Swiss Army Knife, and by multi-tools fitted with knife blades. The development of stronger locking blade mechanisms for folding knives—as with the Spanish navaja, the Opinel, and the Buck 110 Folding Hunter—significantly increased the utility of such knives when employed for heavy-duty tasks such as preparing game or cutting through dense or tough materials.
The fixed or folding blade utility knife is popular for both indoor and outdoor use. One of the most popular types of workplace utility knife is the retractable or folding utility knife (also known as a Stanley knife, box cutter, or by various other names). These types of utility knives are designed as multi-purpose cutting tools for use in a variety of trades and crafts. Designed to be lightweight and easy to carry and use, utility knives are commonly used in factories, warehouses, construction projects, and other situations where a tool is routinely needed to mark cut lines, trim plastic or wood materials, or to cut tape, cord, strapping, cardboard, or other packaging material.
In British, Australian and New Zealand English, along with Dutch, Danish and Austrian German, a utility knife frequently used in the construction industry is known as a Stanley knife. This name is a generic trademark named after Stanley Works, a manufacturer of such knives. In Israel and Switzerland, these knives are known as Japanese knives. In Brazil they are known as estiletes or cortadores Olfa (the latter being another genericised trademark). In Portugal, Panama and Canada they are also known as X-Acto (yet another genericised trademark). In India, Russia, the Philippines, France, Iraq, Italy, Egypt, and Germany, they are simply called cutter. In the Flemish region of Belgium it is called cuttermes(je) (cutter knife). In general Spanish, they are known as cortaplumas (penknife, when it comes to folding blades); in Spain, Mexico, and Costa Rica, they are colloquially known as cutters; in Argentina and Uruguay the segmented fixed-blade knives are known as "Trinchetas". In Turkey, they are known as maket bıçağı (which literally translates as model knife).
Other names for the tool are box cutter or boxcutter, razor blade knife, razor knife, carpet knife, pen knife, stationery knife, sheetrock knife, or drywall knife.
Utility knives may use fixed, folding, or retractable or replaceable blades, and come in a wide variety of lengths and styles suited to the particular set of tasks they are designed to perform. Thus, an outdoors utility knife suited for camping or hunting might use a broad 75 to 130 millimetres (3–5 in) fixed blade, while a utility knife designed for the construction industry might feature a replaceable utility or razor blade for cutting packaging, cutting shingles, marking cut lines, or scraping paint.
Large fixed-blade utility knives are most often employed in an outdoors context, such as fishing, camping, or hunting. Outdoor utility knives typically feature sturdy blades from 100 to 150 millimetres (4–6 in) in length, with edge geometry designed to resist chipping and breakage.
The term "utility knife" may also refer to small fixed-blade knives used for crafts, model-making and other artisanal projects. These small knives feature light-duty blades best suited for cutting thin, lightweight materials. The small, thin blade and specialized handle permit cuts requiring a high degree of precision and control.
The largest construction or workplace utility knives typically feature retractable and replaceable blades, and are made of either die-cast metal or molded plastic. Some use standard razor blades, others specialized double-ended utility blades. The user can adjust how far the blade extends from the handle, so that, for example, the knife can be used to cut the tape sealing a package without damaging the contents of the package. When the blade becomes dull, it can be quickly reversed or switched for a new one. Spare or used blades are stored in the hollow handle of some models, and can be accessed by removing a screw and opening the handle. Other models feature a quick-change mechanism that allows replacing the blade without tools, as well as a flip-out blade storage tray. The blades for this type of utility knife come in both double- and single-ended versions, and are interchangeable with many, but not all, of the later copies. Specialized blades also exist for cutting string, linoleum, and other materials.
Another style is a snap-off utility knife that contains a long, segmented blade that slides out from it. As the endmost edge becomes dull, it can be broken off the remaining blade, exposing the next section, which is sharp and ready for use. The snapping is best accomplished with a blade snapper that is often built-in, or a pair of pliers, and the break occurs at the score lines, where the metal is thinnest. When all of the individual segments are used, the knife may be thrown away, or, more often, refilled with a replacement blade. This design was introduced by Japanese manufacturer Olfa Corporation in 1956 as the world's first snap-off blade and was inspired by analyzing the sharp cutting edge produced when glass is broken and how pieces of a chocolate bar break into segments. The sharp cutting edge on these knives is not on the edge where the blade is snapped off; rather one long edge of the whole blade is sharpened, and there are scored diagonal breakoff lines at intervals down the blade. Thus each snapped-off piece is roughly a parallelogram, with each long edge being a breaking edge, and one or both of the short ends being a sharpened edge.
Another utility knife often used for cutting open boxes consists of a simple sleeve around a rectangular handle into which single-edge utility blades can be inserted. The sleeve slides up and down on the handle, holding the blade in place during use and covering the blade when not in use. The blade holder may either retract or fold into the handle, much like a folding-blade pocketknife. The blade holder is designed to expose just enough edge to cut through one layer of corrugated fibreboard, to minimize chances of damaging contents of cardboard boxes.
Most utility knives are not well suited to use as offensive weapons, with the exception of some outdoor-type utility knives employing longer blades. However, even small razor-blade type utility knives may sometimes find use as slashing weapons. The 9/11 Commission report stated that passengers in cell phone calls reported knives or "box-cutters" (as well as Mace or a bomb) were used as weapons in hijacking airplanes in the September 11, 2001 terrorist attacks against the United States, though the exact design of the knives used is unknown. Two of the hijackers were known to have purchased Leatherman knives, which feature a 4 in (100 mm) slip-joint blade that was not prohibited on U.S. flights at the time. Those knives were not found in the possessions the two hijackers left behind. Similar cutters, including paper cutters, have also been known to be used as lethal weapons.
Small work-type utility knives have also been used to commit robbery and other crimes. In June 2004, a Japanese student was slashed to death with a segmented-type utility knife.
In the United Kingdom, the law was changed (effective 1 October 2007) to raise the age limit for purchasing knives, including utility knives, from 16 to 18, and to make it illegal to carry a utility knife in public without a good reason. | [
{
"paragraph_id": 0,
"text": "A utility knife is any type of knife used for general manual work purposes. Such knives were originally fixed-blade knives with durable cutting edges suitable for rough work such as cutting cordage, cutting/scraping hides, butchering animals, cleaning fish scales, reshaping timber, and other tasks. Craft knives are small utility knives used as precision-oriented tools for finer, more delicate tasks such as carving and papercutting.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Today, the term \"utility knife\" also includes small folding-, retractable- and/or replaceable-razor blade knives suited for use in the general workplace or in the construction industry. The latter type is sometimes generically called a Stanley knife, after a prominent brand.",
"title": ""
},
{
"paragraph_id": 2,
"text": "There is also a utility knife for kitchen use, which is sized between a chef's knife and paring knife.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The fixed-blade utility knife was developed some 500,000 years ago, when human ancestors began to make stone knives. These knives were general-purpose tools, designed for cutting and shaping wooden implements, scraping hides, preparing food, and for other utilitarian purposes.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "By the 19th century the fixed-blade utility knife had evolved into a steel-bladed outdoors field knife capable of butchering game, cutting wood, and preparing campfires and meals. With the invention of the backspring, pocket-size utility knives were introduced with folding blades and other folding tools designed to increase the utility of the overall design. The folding pocketknife and utility tool is typified by the Camper or Boy Scout pocketknife, the Swiss Army Knife, and by multi-tools fitted with knife blades. The development of stronger locking blade mechanisms for folding knives—as with the Spanish navaja, the Opinel, and the Buck 110 Folding Hunter—significantly increased the utility of such knives when employed for heavy-duty tasks such as preparing game or cutting through dense or tough materials.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The fixed or folding blade utility knife is popular for both indoor and outdoor use. One of the most popular types of workplace utility knife is the retractable or folding utility knife (also known as a Stanley knife, box cutter, or by various other names). These types of utility knives are designed as multi-purpose cutting tools for use in a variety of trades and crafts. Designed to be lightweight and easy to carry and use, utility knives are commonly used in factories, warehouses, construction projects, and other situations where a tool is routinely needed to mark cut lines, trim plastic or wood materials, or to cut tape, cord, strapping, cardboard, or other packaging material.",
"title": "Contemporary utility knives"
},
{
"paragraph_id": 6,
"text": "In British, Australian and New Zealand English, along with Dutch, Danish and Austrian German, a utility knife frequently used in the construction industry is known as a Stanley knife. This name is a generic trademark named after Stanley Works, a manufacturer of such knives. In Israel and Switzerland, these knives are known as Japanese knives. In Brazil they are known as estiletes or cortadores Olfa (the latter, being another genericised trademark). In Portugal, Panama and Canada they are also known as X-Acto (yet another genericised trademark ). In India, Russia, the Philippines, France, Iraq, Italy, Egypt, and Germany, they are simply called cutter. In the Flemish region of Belgium it is called cuttermes(je) (cutter knife). In general Spanish, they are known as cortaplumas (penknife, when it comes to folding blades); in Spain, Mexico, and Costa Rica, they are colloquially known as cutters; in Argentina and Uruguay the segmented fixed-blade knives are known as \"Trinchetas\". In Turkey, they are known as maket bıçağı (which literally translates as model knife).",
"title": "Names"
},
{
"paragraph_id": 7,
"text": "Other names for the tool are box cutter or boxcutter, razor blade knife, razor knife, carpet knife, pen knife, stationery knife, sheetrock knife, or drywall knife.",
"title": "Names"
},
{
"paragraph_id": 8,
"text": "Utility knives may use fixed, folding, or retractable or replaceable blades, and come in a wide variety of lengths and styles suited to the particular set of tasks they are designed to perform. Thus, an outdoors utility knife suited for camping or hunting might use a broad 75 to 130 millimetres (3–5 in) fixed blade, while a utility knife designed for the construction industry might feature a replaceable utility or razor blade for cutting packaging, cutting shingles, marking cut lines, or scraping paint.",
"title": "Design"
},
{
"paragraph_id": 9,
"text": "Large fixed-blade utility knives are most often employed in an outdoors context, such as fishing, camping, or hunting. Outdoor utility knives typically feature sturdy blades from 100 to 150 millimetres (4–6 in) in length, with edge geometry designed to resist chipping and breakage.",
"title": "Design"
},
{
"paragraph_id": 10,
"text": "The term \"utility knife\" may also refer to small fixed-blade knives used for crafts, model-making and other artisanal projects. These small knives feature light-duty blades best suited for cutting thin, lightweight materials. The small, thin blade and specialized handle permit cuts requiring a high degree of precision and control.",
"title": "Design"
},
{
"paragraph_id": 11,
"text": "The largest construction or workplace utility knives typically feature retractable and replaceable blades, and are made of either die-cast metal or molded plastic. Some use standard razor blades, others specialized double-ended utility blades. The user can adjust how far the blade extends from the handle, so that, for example, the knife can be used to cut the tape sealing a package without damaging the contents of the package. When the blade becomes dull, it can be quickly reversed or switched for a new one. Spare or used blades are stored in the hollow handle of some models, and can be accessed by removing a screw and opening the handle. Other models feature a quick-change mechanism that allows replacing the blade without tools, as well as a flip-out blade storage tray. The blades for this type of utility knife come in both double- and single-ended versions, and are interchangeable with many, but not all, of the later copies. Specialized blades also exist for cutting string, linoleum, and other materials.",
"title": "Design"
},
{
"paragraph_id": 12,
"text": "Another style is a snap-off utility knife that contains a long, segmented blade that slides out from it. As the endmost edge becomes dull, it can be broken off the remaining blade, exposing the next section, which is sharp and ready for use. The snapping is best accomplished with a blade snapper that is often built-in, or a pair of pliers, and the break occurs at the score lines, where the metal is thinnest. When all of the individual segments are used, the knife may be thrown away, or, more often, refilled with a replacement blade. This design was introduced by Japanese manufacturer Olfa Corporation in 1956 as the world's first snap-off blade and was inspired from analyzing the sharp cutting edge produced when glass is broken and how pieces of a chocolate bar break into segments. The sharp cutting edge on these knives is not on the edge where the blade is snapped off; rather one long edge of the whole blade is sharpened, and there are scored diagonal breakoff lines at intervals down the blade. Thus each snapped-off piece is roughly a parallelogram, with each long edge being a breaking edge, and one or both of the short ends being a sharpened edge.",
"title": "Design"
},
{
"paragraph_id": 13,
"text": "Another utility knife often used for cutting open boxes consists of a simple sleeve around a rectangular handle into which single-edge utility blades can be inserted. The sleeve slides up and down on the handle, holding the blade in place during use and covering the blade when not in use. The blade holder may either retract or fold into the handle, much like a folding-blade pocketknife. The blade holder is designed to expose just enough edge to cut through one layer of corrugated fibreboard, to minimize chances of damaging contents of cardboard boxes.",
"title": "Design"
},
{
"paragraph_id": 14,
"text": "Most utility knives are not well suited to use as offensive weapons, with the exception of some outdoor-type utility knives employing longer blades. However, even small razor-blade type utility knives may sometimes find use as slashing weapons. The 9-11 commission report stated passengers in cell phone calls reported knives or \"box-cutters\" were used as weapons (also Mace or a bomb) in hijacking airplanes in the September 11, 2001 terrorist attacks against the United States, though the exact design of the knives used is unknown. Two of the hijackers were known to have purchased Leatherman knives, which feature a (4 in (100 mm) slip-joint blade which were not prohibited on U.S. flights at the time. Those knives were not found in the possessions the two hijackers left behind. Similar cutters, including paper cutters, have also been known to be used as a lethal weapon.",
"title": "Use as weapon"
},
{
"paragraph_id": 15,
"text": "Small work-type utility knives have also been used to commit robbery and other crimes. In June 2004, a Japanese student was slashed to death with a segmented-type utility knife.",
"title": "Use as weapon"
},
{
"paragraph_id": 16,
"text": "In the United Kingdom, the law was changed (effective 1 October 2007) to raise the age limit for purchasing knives, including utility knives, from 16 to 18, and to make it illegal to carry a utility knife in public without a good reason.",
"title": "Use as weapon"
}
] | A utility knife is any type of knife used for general manual work purposes. Such knives were originally fixed-blade knives with durable cutting edges suitable for rough work such as cutting cordage, cutting/scraping hides, butchering animals, cleaning fish scales, reshaping timber, and other tasks. Craft knives are small utility knives used as precision-oriented tools for finer, more delicate tasks such as carving and papercutting. Today, the term "utility knife" also includes small folding-, retractable- and/or replaceable-razor blade knives suited for use in the general workplace or in the construction industry. The latter type is sometimes generically called a Stanley knife, after a prominent brand. There is also a utility knife for kitchen use, which is sized between a chef's knife and paring knife. | 2001-09-27T20:57:40Z | 2023-11-29T01:22:05Z | [
"Template:Citation needed",
"Template:Unreferenced section",
"Template:Convert",
"Template:Cite news",
"Template:Commons category",
"Template:Knives",
"Template:Short description",
"Template:Reflist",
"Template:ISBN",
"Template:See also",
"Template:Use dmy dates",
"Template:Cite web",
"Template:Redirect2",
"Template:For",
"Template:Cutting and abrasive tools",
"Template:When"
] | https://en.wikipedia.org/wiki/Utility_knife |
4,169 | Bronze | Bronze is an alloy consisting primarily of copper, commonly with about 12–12.5% tin and often with the addition of other metals (including aluminium, manganese, nickel, or zinc) and sometimes non-metals, such as phosphorus, or metalloids such as arsenic or silicon. These additions produce a range of alloys that may be harder than copper alone, or have other useful properties, such as strength, ductility, or machinability.
The archaeological period in which bronze was the hardest metal in widespread use is known as the Bronze Age. The beginning of the Bronze Age in western Eurasia and India is conventionally dated to the mid-4th millennium BC (~3500 BC), and to the early 2nd millennium BC in China; elsewhere it gradually spread across regions. The Bronze Age was followed by the Iron Age starting about 1300 BC and reaching most of Eurasia by about 500 BC, although bronze continued to be much more widely used than it is in modern times.
Because historical artworks were often made of brasses (copper and zinc) and bronzes with different compositions, modern museum and scholarly descriptions of older artworks increasingly use the generalized term "copper alloy" instead.
The word bronze (1730–1740) is borrowed from Middle French bronze (1511), itself borrowed from Italian bronzo 'bell metal, brass' (13th century, transcribed in Medieval Latin as bronzium) from either:
The discovery of bronze enabled people to create metal objects that were harder and more durable than previously possible. Bronze tools, weapons, armor, and building materials such as decorative tiles were harder and more durable than their stone and copper ("Chalcolithic") predecessors. Initially, bronze was made out of copper and arsenic, forming arsenic bronze, or from naturally or artificially mixed ores of copper and arsenic.
The earliest artifacts so far known come from the Iranian plateau, in the 5th millennium BC, and were smelted from native arsenical copper and copper-arsenides, such as algodonite and domeykite. The earliest tin-copper-alloy artifact has been dated to c. 4650 BC, in a Vinča culture site in Pločnik (Serbia), and is believed to have been smelted from a natural tin-copper ore, stannite. Other early examples date to the late 4th millennium BC in Egypt, Susa (Iran) and some ancient sites in China, Luristan (Iran), Tepe Sialk (Iran), Mundigak (Afghanistan), and Mesopotamia (Iraq).
Tin bronze was superior to arsenic bronze in that the alloying process could be more easily controlled, and the resulting alloy was stronger and easier to cast. Also, unlike arsenic, metallic tin and fumes from tin refining are not toxic.
Tin became the major non-copper ingredient of bronze in the late 3rd millennium BC.
Ores of copper and the far rarer tin are not often found together (exceptions include Cornwall in the United Kingdom, one ancient site in Thailand and one in Iran), so serious bronze work has always involved trade. Tin sources and trade in ancient times had a major influence on the development of cultures. In Europe, a major source of tin was the British deposits of ore in Cornwall, which were traded as far as Phoenicia in the eastern Mediterranean.
In many parts of the world, large hoards of bronze artifacts are found, suggesting that bronze also represented a store of value and an indicator of social status. In Europe, large hoards of bronze tools, typically socketed axes (illustrated above), are found, which mostly show no signs of wear. With Chinese ritual bronzes, which are documented in the inscriptions they carry and from other sources, the case is clear. These were made in enormous quantities for elite burials, and also used by the living for ritual offerings.
Though bronze is generally harder than wrought iron, with Vickers hardness of 60–258 vs. 30–80, the Bronze Age gave way to the Iron Age after a serious disruption of the tin trade: the population migrations of around 1200–1100 BC reduced the shipping of tin around the Mediterranean and from Britain, limiting supplies and raising prices. As the art of working in iron improved, iron became cheaper and improved in quality. As cultures advanced from hand-wrought iron to machine-forged iron (typically made with trip hammers powered by water), blacksmiths learned how to make steel. Steel is stronger and harder than bronze and holds a sharper edge longer.
Bronze was still used during the Iron Age, and has continued in use for many purposes to the modern day.
There are many different bronze alloys, but typically modern bronze is 88% copper and 12% tin. Alpha bronze consists of the alpha solid solution of tin in copper. Alpha bronze alloys of 4–5% tin are used to make coins, springs, turbines and blades. Historical "bronzes" are highly variable in composition, as most metalworkers probably used whatever scrap was on hand; the metal of the 12th-century English Gloucester Candlestick is bronze containing a mixture of copper, zinc, tin, lead, nickel, iron, antimony, arsenic and an unusually large amount of silver – between 22.5% in the base and 5.76% in the pan below the candle. The proportions of this mixture suggest that the candlestick was made from a hoard of old coins. The 13th-century Benin Bronzes are in fact brass, and the 12th-century Romanesque Baptismal font at St Bartholomew's Church, Liège is described as both bronze and brass.
In the Bronze Age, two forms of bronze were commonly used: "classic bronze", about 10% tin, was used in casting; and "mild bronze", about 6% tin, was hammered from ingots to make sheets. Bladed weapons were mostly cast from classic bronze, while helmets and armor were hammered from mild bronze.
Commercial bronze (90% copper and 10% zinc) and architectural bronze (57% copper, 3% lead, 40% zinc) are more properly regarded as brass alloys because they contain zinc as the main alloying ingredient. They are commonly used in architectural applications.
Plastic bronze contains a significant quantity of lead, which makes for improved plasticity; it was possibly used by the ancient Greeks in their ship construction.
Silicon bronze has a composition of Si: 2.80–3.80%, Mn: 0.50–1.30%, Fe: 0.80% max., Zn: 1.50% max., Pb: 0.05% max., Cu: balance.
Other bronze alloys include aluminium bronze, phosphor bronze, manganese bronze, bell metal, arsenical bronze, speculum metal, bismuth bronze, and cymbal alloys.
Copper-based alloys have lower melting points than steel or iron and are more readily produced from their constituent metals. They are generally about 10 percent denser than steel, although alloys using aluminum or silicon may be slightly less dense. Bronze is a better conductor of heat and electricity than most steels. The cost of copper-base alloys is generally higher than that of steels but lower than that of nickel-base alloys.
Bronzes are typically ductile alloys, considerably less brittle than cast iron. Copper and its alloys have a huge variety of uses that reflect their versatile physical, mechanical, and chemical properties. Some common examples are the high electrical conductivity of pure copper, low-friction properties of bearing bronze (bronze that has a high lead content— 6–8%), resonant qualities of bell bronze (20% tin, 80% copper), and resistance to corrosion by seawater of several bronze alloys.
The melting point of bronze varies depending on the ratio of the alloy components and is about 950 °C (1,742 °F). Bronze is usually nonmagnetic, but certain alloys containing iron or nickel may have magnetic properties.
Typically bronze oxidizes only superficially; once a copper oxide (eventually becoming copper carbonate) layer is formed, the underlying metal is protected from further corrosion. This can be seen on statues from the Hellenistic period. If copper chlorides are formed, however, a corrosion mode called "bronze disease" will eventually destroy the metal completely.
Bronze, or bronze-like alloys and mixtures, have been used for coins over a long period. Bronze was especially suitable for use in boat and ship fittings prior to the wide employment of stainless steel owing to its combination of toughness and resistance to salt water corrosion. Bronze is still commonly used in ship propellers and submerged bearings.
In the 20th century, silicon was introduced as the primary alloying element, creating an alloy with wide application in industry and the major form used in contemporary statuary. Sculptors may prefer silicon bronze because of the ready availability of silicon bronze brazing rod, which allows color-matched repair of defects in castings. Aluminum is likewise used as the main alloying element in the structural metal aluminum bronze.
Bronze parts are tough and typically used for bearings, clips, electrical connectors and springs.
Bronze also has low friction against dissimilar metals, making it important for cannons prior to modern tolerancing, where iron cannonballs would otherwise stick in the barrel. It is still widely used today for springs, bearings, bushings, automobile transmission pilot bearings, and similar fittings, and is particularly common in the bearings of small electric motors. Phosphor bronze is particularly suited to precision-grade bearings and springs. It is also used in guitar and piano strings.
Unlike steel, bronze struck against a hard surface will not generate sparks, so it (along with beryllium copper) is used to make hammers, mallets, wrenches and other durable tools to be used in explosive atmospheres or in the presence of flammable vapors. Bronze is used to make bronze wool for woodworking applications where steel wool would discolor oak.
Phosphor bronze is used for ships' propellers, musical instruments, and electrical contacts. Bearings are often made of bronze for its friction properties. It can be impregnated with oil to make the proprietary Oilite and similar material for bearings. Aluminum bronze is hard and wear-resistant, and is used for bearings and machine tool ways.
Bronze is widely used for casting bronze sculptures. Common bronze alloys have the unusual and desirable property of expanding slightly just before they set, thus filling the finest details of a mould. Then, as the bronze cools, it shrinks a little, making it easier to separate from the mould.
The Assyrian king Sennacherib (704–681 BC) claims to have been the first to cast monumental bronze statues (of up to 30 tonnes) using two-part moulds instead of the lost-wax method.
Bronze statues were regarded as the highest form of sculpture in Ancient Greek art, though survivals are few, as bronze was a valuable material in short supply in the Late Antique and medieval periods. Many of the most famous Greek bronze sculptures are known through Roman copies in marble, which were more likely to survive.
In India, bronze sculptures from the Kushana (Chausa hoard) and Gupta periods (Brahma from Mirpur-Khas, Akota Hoard, Sultanganj Buddha) and later periods (Hansi Hoard) have been found. Indian Hindu artisans from the period of the Chola empire in Tamil Nadu used bronze to create intricate statues via the lost-wax casting method with ornate detailing depicting the deities of Hinduism. The art form survives to this day, with many silpis, craftsmen, working in the areas of Swamimalai and Chennai.
In antiquity other cultures also produced works of high art using bronze. For example: in Africa, the bronze heads of the Kingdom of Benin; in Europe, Grecian bronzes typically of figures from Greek mythology; in east Asia, Chinese ritual bronzes of the Shang and Zhou dynasty—more often ceremonial vessels but including some figurine examples.
Bronze continues into modern times as one of the materials of choice for monumental statuary.
Before it became possible to produce glass with acceptably flat surfaces, bronze was a standard material for mirrors. Bronze was used for this purpose in many parts of the world, probably based on independent discoveries.
Bronze mirrors survive from the Egyptian Middle Kingdom (2040–1750 BC), and from China from at least c. 550 BC. In Europe, the Etruscans were making bronze mirrors in the sixth century BC, and Greek and Roman mirrors followed the same pattern. Although other materials such as speculum metal had come into use, and Western glass mirrors had largely taken over, bronze mirrors were still being made in Japan and elsewhere in the eighteenth century, and are still made on a small scale in Kerala, India.
Bronze is the preferred metal for bells in the form of a high tin bronze alloy known as bell metal, which is typically about 23% tin.
Nearly all professional cymbals are made from bronze, which gives a desirable balance of durability and timbre. Several types of bronze are used, commonly B20 bronze, which is roughly 20% tin, 80% copper, with traces of silver, or the tougher B8 bronze made from 8% tin and 92% copper. As the tin content in a bell or cymbal rises, the timbre drops.
Bronze is also used for the windings of steel and nylon strings of various stringed instruments such as the double bass, piano, harpsichord, and guitar. Bronze strings are commonly reserved on pianoforte for the lower pitch tones, as they possess a superior sustain quality to that of high-tensile steel.
Bronzes of various metallurgical properties are widely used in struck idiophones around the world, notably bells, singing bowls, gongs, cymbals, and other idiophones from Asia. Examples include Tibetan singing bowls, temple bells of many sizes and shapes, Javanese gamelan, and other bronze musical instruments. The earliest bronze archeological finds in Indonesia date from 1–2 BC, including flat plates probably suspended and struck by a wooden or bone mallet. Ancient bronze drums from Thailand and Vietnam date back 2,000 years. Bronze bells from Thailand and Cambodia date back to 3600 BC.
Some companies are now making saxophones from phosphor bronze (3.5 to 10% tin and up to 1% phosphorus content). Bell bronze/B20 is used to make the tone rings of many professional model banjos. The tone ring is a heavy (usually 3 lb; 1.4 kg) folded or arched metal ring attached to a thick wood rim, over which a skin, or most often, a plastic membrane (or head) is stretched – it is the bell bronze that gives the banjo a crisp powerful lower register and clear bell-like treble register.
There are over 125 references in the Hebrew Bible to bronze ('nehoshet'), which appears to be the Hebrew word used for copper and any of its alloys. However, the Old Testament era Hebrews are not thought to have had the capability to manufacture zinc (needed to make brass) and so it is likely that 'nehoshet' refers to copper and its alloys with tin, now called bronze. In the King James Version, there is no use of the word 'bronze' and 'nehoshet' was translated as 'brass'. Modern translations use 'bronze'. Bronze (nehoshet) was used widely in the Tabernacle for items such as the bronze altar (Exodus Ch.27), bronze laver (Exodus Ch.30), utensils, and mirror (Exodus Ch.38). It was mentioned in the account of Moses holding up a bronze snake on a pole in Numbers Ch.21. In First Kings, it is mentioned that Hiram was very skilled in working with bronze, and he made many furnishings for Solomon's Temple including pillars, capitals, stands, wheels, bowls, and plates, some of which were highly decorative (see I Kings 7:13-47). Bronze was also widely used for battle armor and helmets, as in the battle of David and Goliath in I Samuel 17:5-6;38 (also see II Chron. 12:10).
Bronze has also been used in coins; most "copper" coins are actually bronze, with about 4 percent tin and 1 percent zinc.
As with coins, bronze has been used in the manufacture of various types of medals for centuries, and "bronze medals" are known in contemporary times for being awarded for third place in sporting competitions and other events. The term is now often used for third place even when no actual bronze medal is awarded. The usage in part arose from the trio of gold, silver and bronze to represent the first three Ages of Man in Greek mythology: the Golden Age, when men lived among the gods; the Silver Age, when youth lasted a hundred years; and the Bronze Age, the era of heroes. It was first adopted for a sports event at the 1904 Summer Olympics. At the 1896 event, silver was awarded to winners and bronze to runners-up, while in 1900 other prizes were given rather than medals.
Bronze is the normal material for the related form of the plaquette, normally a rectangular work of art with a scene in relief, for a collectors' market. | [
{
"paragraph_id": 0,
"text": "Bronze is an alloy consisting primarily of copper, commonly with about 12–12.5% tin and often with the addition of other metals (including aluminium, manganese, nickel, or zinc) and sometimes non-metals, such as phosphorus, or metalloids such as arsenic or silicon. These additions produce a range of alloys that may be harder than copper alone, or have other useful properties, such as strength, ductility, or machinability.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The archaeological period in which bronze was the hardest metal in widespread use is known as the Bronze Age. The beginning of the Bronze Age in western Eurasia and India is conventionally dated to the mid-4th millennium BC (~3500 BC), and to the early 2nd millennium BC in China; elsewhere it gradually spread across regions. The Bronze Age was followed by the Iron Age starting about 1300 BC and reaching most of Eurasia by about 500 BC, although bronze continued to be much more widely used than it is in modern times.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Because historical artworks were often made of brasses (copper and zinc) and bronzes with different compositions, modern museum and scholarly descriptions of older artworks increasingly use the generalized term \"copper alloy\" instead.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The word bronze (1730–1740) is borrowed from Middle French bronze (1511), itself borrowed from Italian bronzo 'bell metal, brass' (13th century, transcribed in Medieval Latin as bronzium) from either:",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "The discovery of bronze enabled people to create metal objects that were harder and more durable than previously possible. Bronze tools, weapons, armor, and building materials such as decorative tiles were harder and more durable than their stone and copper (\"Chalcolithic\") predecessors. Initially, bronze was made out of copper and arsenic, forming arsenic bronze, or from naturally or artificially mixed ores of copper and arsenic.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The earliest artifacts so far known come from the Iranian plateau, in the 5th millennium BC, and are smelted from native arsenical copper and copper-arsenides, such as algodonite and domeykite. The earliest tin-copper-alloy artifact has been dated to c. 4650 BC, in a Vinča culture site in Pločnik (Serbia), and believed to have been smelted from a natural tin-copper ore, stannite. Other early examples date to the late 4th millennium BC in Egypt, Susa (Iran) and some ancient sites in China, Luristan (Iran), Tepe Sialk (Iran), Mundigak (Afghanistan), and Mesopotamia (Iraq).",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Tin bronze was superior to arsenic bronze in that the alloying process could be more easily controlled, and the resulting alloy was stronger and easier to cast. Also, unlike those of arsenic, metallic tin and fumes from tin refining are not toxic.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Tin became the major non-copper ingredient of bronze in the late 3rd millennium BC.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Ores of copper and the far rarer tin are not often found together (exceptions include Cornwall in the United Kingdom, one ancient site in Thailand and one in Iran), so serious bronze work has always involved trade. Tin sources and trade in ancient times had a major influence on the development of cultures. In Europe, a major source of tin was the British deposits of ore in Cornwall, which were traded as far as Phoenicia in the eastern Mediterranean.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In many parts of the world, large hoards of bronze artifacts are found, suggesting that bronze also represented a store of value and an indicator of social status. In Europe, large hoards of bronze tools, typically socketed axes (illustrated above), are found, which mostly show no signs of wear. With Chinese ritual bronzes, which are documented in the inscriptions they carry and from other sources, the case is clear. These were made in enormous quantities for elite burials, and also used by the living for ritual offerings.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Though bronze is generally harder than wrought iron, with Vickers hardness of 60–258 vs. 30–80, the Bronze Age gave way to the Iron Age after a serious disruption of the tin trade: the population migrations of around 1200–1100 BC reduced the shipping of tin around the Mediterranean and from Britain, limiting supplies and raising prices. As the art of working in iron improved, iron became cheaper and improved in quality. As cultures advanced from hand-wrought iron to machine-forged iron (typically made with trip hammers powered by water), blacksmiths learned how to make steel. Steel is stronger and harder than bronze and holds a sharper edge longer.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Bronze was still used during the Iron Age, and has continued in use for many purposes to the modern day.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "There are many different bronze alloys, but typically modern bronze is 88% copper and 12% tin. Alpha bronze consists of the alpha solid solution of tin in copper. Alpha bronze alloys of 4–5% tin are used to make coins, springs, turbines and blades. Historical \"bronzes\" are highly variable in composition, as most metalworkers probably used whatever scrap was on hand; the metal of the 12th-century English Gloucester Candlestick is bronze containing a mixture of copper, zinc, tin, lead, nickel, iron, antimony, arsenic and an unusually large amount of silver – between 22.5% in the base and 5.76% in the pan below the candle. The proportions of this mixture suggest that the candlestick was made from a hoard of old coins. The 13th-century Benin Bronzes are in fact brass, and the 12th-century Romanesque Baptismal font at St Bartholomew's Church, Liège is described as both bronze and brass.",
"title": "Composition"
},
{
"paragraph_id": 13,
"text": "In the Bronze Age, two forms of bronze were commonly used: \"classic bronze\", about 10% tin, was used in casting; and \"mild bronze\", about 6% tin, was hammered from ingots to make sheets. Bladed weapons were mostly cast from classic bronze, while helmets and armor were hammered from mild bronze.",
"title": "Composition"
},
{
"paragraph_id": 14,
"text": "Commercial bronze (90% copper and 10% zinc) and architectural bronze (57% copper, 3% lead, 40% zinc) are more properly regarded as brass alloys because they contain zinc as the main alloying ingredient. They are commonly used in architectural applications.",
"title": "Composition"
},
{
"paragraph_id": 15,
"text": "Plastic bronze contains a significant quantity of lead, which makes for improved plasticity possibly used by the ancient Greeks in their ship construction.",
"title": "Composition"
},
{
"paragraph_id": 16,
"text": "Silicon bronze has a composition of Si: 2.80–3.80%, Mn: 0.50–1.30%, Fe: 0.80% max., Zn: 1.50% max., Pb: 0.05% max., Cu: balance.",
"title": "Composition"
},
{
"paragraph_id": 17,
"text": "Other bronze alloys include aluminium bronze, phosphor bronze, manganese bronze, bell metal, arsenical bronze, speculum metal, bismuth bronze, and cymbal alloys.",
"title": "Composition"
},
{
"paragraph_id": 18,
"text": "Copper-based alloys have lower melting points than steel or iron and are more readily produced from their constituent metals. They are generally about 10 percent denser than steel, although alloys using aluminum or silicon may be slightly less dense. Bronze is a better conductor of heat and electricity than most steels. The cost of copper-base alloys is generally higher than that of steels but lower than that of nickel-base alloys.",
"title": "Properties"
},
{
"paragraph_id": 19,
"text": "Bronzes are typically ductile alloys, considerably less brittle than cast iron. Copper and its alloys have a huge variety of uses that reflect their versatile physical, mechanical, and chemical properties. Some common examples are the high electrical conductivity of pure copper, low-friction properties of bearing bronze (bronze that has a high lead content— 6–8%), resonant qualities of bell bronze (20% tin, 80% copper), and resistance to corrosion by seawater of several bronze alloys.",
"title": "Properties"
},
{
"paragraph_id": 20,
"text": "The melting point of bronze varies depending on the ratio of the alloy components and is about 950 °C (1,742 °F). Bronze is usually nonmagnetic, but certain alloys containing iron or nickel may have magnetic properties.",
"title": "Properties"
},
{
"paragraph_id": 21,
"text": "Typically bronze oxidizes only superficially; once a copper oxide (eventually becoming copper carbonate) layer is formed, the underlying metal is protected from further corrosion. This can be seen on statues from the Hellenistic period. If copper chlorides are formed, a corrosion-mode called \"bronze disease\" will eventually completely destroy it.",
"title": "Properties"
},
{
"paragraph_id": 22,
"text": "Bronze, or bronze-like alloys and mixtures, were used for coins over a longer period. Bronze was especially suitable for use in boat and ship fittings prior to the wide employment of stainless steel owing to its combination of toughness and resistance to salt water corrosion. Bronze is still commonly used in ship propellers and submerged bearings.",
"title": "Uses"
},
{
"paragraph_id": 23,
"text": "In the 20th century, silicon was introduced as the primary alloying element, creating an alloy with wide application in industry and the major form used in contemporary statuary. Sculptors may prefer silicon bronze because of the ready availability of silicon bronze brazing rod, which allows color-matched repair of defects in castings. Aluminum is also used for the structural metal aluminum bronze.",
"title": "Uses"
},
{
"paragraph_id": 24,
"text": "Bronze parts are tough and typically used for bearings, clips, electrical connectors and springs.",
"title": "Uses"
},
{
"paragraph_id": 25,
"text": "Bronze also has low friction against dissimilar metals, making it important for cannons prior to modern tolerancing, where iron cannonballs would otherwise stick in the barrel. It is still widely used today for springs, bearings, bushings, automobile transmission pilot bearings, and similar fittings, and is particularly common in the bearings of small electric motors. Phosphor bronze is particularly suited to precision-grade bearings and springs. It is also used in guitar and piano strings.",
"title": "Uses"
},
{
"paragraph_id": 26,
"text": "Unlike steel, bronze struck against a hard surface will not generate sparks, so it (along with beryllium copper) is used to make hammers, mallets, wrenches and other durable tools to be used in explosive atmospheres or in the presence of flammable vapors. Bronze is used to make bronze wool for woodworking applications where steel wool would discolor oak.",
"title": "Uses"
},
{
"paragraph_id": 27,
"text": "Phosphor bronze is used for ships' propellers, musical instruments, and electrical contacts. Bearings are often made of bronze for its friction properties. It can be impregnated with oil to make the proprietary Oilite and similar material for bearings. Aluminum bronze is hard and wear-resistant, and is used for bearings and machine tool ways.",
"title": "Uses"
},
{
"paragraph_id": 28,
"text": "Bronze is widely used for casting bronze sculptures. Common bronze alloys have the unusual and desirable property of expanding slightly just before they set, thus filling the finest details of a mould. Then, as the bronze cools, it shrinks a little, making it easier to separate from the mould.",
"title": "Uses"
},
{
"paragraph_id": 29,
"text": "The Assyrian king Sennacherib (704–681 BC) claims to have been the first to cast monumental bronze statues (of up to 30 tonnes) using two-part moulds instead of the lost-wax method.",
"title": "Uses"
},
{
"paragraph_id": 30,
"text": "Bronze statues were regarded as the highest form of sculpture in Ancient Greek art, though survivals are few, as bronze was a valuable material in short supply in the Late Antique and medieval periods. Many of the most famous Greek bronze sculptures are known through Roman copies in marble, which were more likely to survive.",
"title": "Uses"
},
{
"paragraph_id": 31,
"text": "In India, bronze sculptures from the Kushana (Chausa hoard) and Gupta periods (Brahma from Mirpur-Khas, Akota Hoard, Sultanganj Buddha) and later periods (Hansi Hoard) have been found. Indian Hindu artisans from the period of the Chola empire in Tamil Nadu used bronze to create intricate statues via the lost-wax casting method with ornate detailing depicting the deities of Hinduism. The art form survives to this day, with many silpis, craftsmen, working in the areas of Swamimalai and Chennai.",
"title": "Uses"
},
{
"paragraph_id": 32,
"text": "In antiquity other cultures also produced works of high art using bronze. For example: in Africa, the bronze heads of the Kingdom of Benin; in Europe, Grecian bronzes typically of figures from Greek mythology; in east Asia, Chinese ritual bronzes of the Shang and Zhou dynasty—more often ceremonial vessels but including some figurine examples.",
"title": "Uses"
},
{
"paragraph_id": 33,
"text": "Bronze continues into modern times as one of the materials of choice for monumental statuary.",
"title": "Uses"
},
{
"paragraph_id": 34,
"text": "Before it became possible to produce glass with acceptably flat surfaces, bronze was a standard material for mirrors. Bronze was used for this purpose in many parts of the world, probably based on independent discoveries.",
"title": "Uses"
},
{
"paragraph_id": 35,
"text": "Bronze mirrors survive from the Egyptian Middle Kingdom (2040–1750 BC), and China from at least c. 550 BC. In Europe, the Etruscans were making bronze mirrors in the sixth century BC, and Greek and Roman mirrors followed the same pattern. Although other materials such as speculum metal had come into use, and Western glass mirrors had largely taken over, bronze mirrors were still being made in Japan and elsewhere in the eighteenth century, and are still made on a small scale in Kerala, India.",
"title": "Uses"
},
{
"paragraph_id": 36,
"text": "Bronze is the preferred metal for bells in the form of a high tin bronze alloy known as bell metal, which is typically about 23% tin.",
"title": "Uses"
},
{
"paragraph_id": 37,
"text": "Nearly all professional cymbals are made from bronze, which gives a desirable balance of durability and timbre. Several types of bronze are used, commonly B20 bronze, which is roughly 20% tin, 80% copper, with traces of silver, or the tougher B8 bronze made from 8% tin and 92% copper. As the tin content in a bell or cymbal rises, the timbre drops.",
"title": "Uses"
},
{
"paragraph_id": 38,
"text": "Bronze is also used for the windings of steel and nylon strings of various stringed instruments such as the double bass, piano, harpsichord, and guitar. Bronze strings are commonly reserved on pianoforte for the lower pitch tones, as they possess a superior sustain quality to that of high-tensile steel.",
"title": "Uses"
},
{
"paragraph_id": 39,
"text": "Bronzes of various metallurgical properties are widely used in struck idiophones around the world, notably bells, singing bowls, gongs, cymbals, and other idiophones from Asia. Examples include Tibetan singing bowls, temple bells of many sizes and shapes, Javanese gamelan, and other bronze musical instruments. The earliest bronze archeological finds in Indonesia date from 1–2 BC, including flat plates probably suspended and struck by a wooden or bone mallet. Ancient bronze drums from Thailand and Vietnam date back 2,000 years. Bronze bells from Thailand and Cambodia date back to 3600 BC.",
"title": "Uses"
},
{
"paragraph_id": 40,
"text": "Some companies are now making saxophones from phosphor bronze (3.5 to 10% tin and up to 1% phosphorus content). Bell bronze/B20 is used to make the tone rings of many professional model banjos. The tone ring is a heavy (usually 3 lb; 1.4 kg) folded or arched metal ring attached to a thick wood rim, over which a skin, or most often, a plastic membrane (or head) is stretched – it is the bell bronze that gives the banjo a crisp powerful lower register and clear bell-like treble register.",
"title": "Uses"
},
{
"paragraph_id": 41,
"text": "There are over 125 references to bronze ('nehoshet'), which appears to be the Hebrew word used for copper and any of its alloys. However, the Old Testament era Hebrews are not thought to have had the capability to manufacture zinc (needed to make brass) and so it is likely that 'nehoshet' refers to copper and its alloys with tin, now called bronze. In the King James Version, there is no use of the word 'bronze' and 'nehoshet' was translated as 'brass'. Modern translations use 'bronze'. Bronze (nehoshet) was used widely in the Tabernacle for items such as the bronze altar (Exodus Ch.27), bronze laver (Exodus Ch.30), utensils, and mirror (Exodus Ch.38). It was mentioned in the account of Moses holding up a bronze snake on a pole in Numbers Ch.21. In First Kings, it is mentioned that Hiram was very skilled in working with bronze, and he made many furnishings for Solomon's Temple including pillars, capitals, stands, wheels, bowls, and plates, some of which were highly decorative (see I Kings 7:13-47). Bronze was also widely used as battle armor and helmet, as in the battle of David and Goliath in I Samuel 17:5-6;38 (also see II Chron. 12:10).",
"title": "Uses"
},
{
"paragraph_id": 42,
"text": "Bronze has also been used in coins; most \"copper\" coins are actually bronze, with about 4 percent tin and 1 percent zinc.",
"title": "Uses"
},
{
"paragraph_id": 43,
"text": "As with coins, bronze has been used in the manufacture of various types of medals for centuries, and \"bronze medals\" are known in contemporary times for being awarded for third place in sporting competitions and other events. The term is now often used for third place even when no actual bronze medal is awarded. The usage in part arose from the trio of gold, silver and bronze to represent the first three Ages of Man in Greek mythology: the Golden Age, when men lived among the gods; the Silver age, where youth lasted a hundred years; and the Bronze Age, the era of heroes. It was first adopted for a sports event at the 1904 Summer Olympics. At the 1896 event, silver was awarded to winners and bronze to runners-up, while at 1900 other prizes were given rather than medals.",
"title": "Uses"
},
{
"paragraph_id": 44,
"text": "Bronze is the normal material for the related form of the plaquette, normally a rectangular work of art with a scene in relief, for a collectors' market.",
"title": "Uses"
}
] | Bronze is an alloy consisting primarily of copper, commonly with about 12–12.5% tin and often with the addition of other metals and sometimes non-metals, such as phosphorus, or metalloids such as arsenic or silicon. These additions produce a range of alloys that may be harder than copper alone, or have other useful properties, such as strength, ductility, or machinability. The archaeological period in which bronze was the hardest metal in widespread use is known as the Bronze Age. The beginning of the Bronze Age in western Eurasia and India is conventionally dated to the mid-4th millennium BC, and to the early 2nd millennium BC in China; elsewhere it gradually spread across regions. The Bronze Age was followed by the Iron Age starting about 1300 BC and reaching most of Eurasia by about 500 BC, although bronze continued to be much more widely used than it is in modern times. Because historical artworks were often made of brasses and bronzes with different compositions, modern museum and scholarly descriptions of older artworks increasingly use the generalized term "copper alloy" instead. | 2001-09-12T11:17:19Z | 2023-12-26T18:01:28Z | [
"Template:Cvt",
"Template:-",
"Template:Cite web",
"Template:Curlie",
"Template:About",
"Template:Zh",
"Template:Vanchor",
"Template:See also",
"Template:Reflist",
"Template:Page needed",
"Template:Cite journal",
"Template:Cite encyclopedia",
"Template:Multiple image",
"Template:Circa",
"Template:Convert",
"Template:Sister project links",
"Template:Short description",
"Template:Jewellery",
"Template:Main",
"Template:Columns-list",
"Template:Cite book",
"Template:Authority control",
"Template:Lang",
"Template:Gloss",
"Template:Citation needed"
] | https://en.wikipedia.org/wiki/Bronze |
4,170 | Benelux | The Benelux Union (Dutch: Benelux Unie; French: Union Benelux; Luxembourgish: Benelux-Unioun) or Benelux is a politico-economic union and formal international intergovernmental cooperation of three neighbouring states in western Europe: Belgium, the Netherlands, and Luxembourg. The name is a portmanteau formed from joining the first few letters of each country's name and was first used to name the customs agreement that initiated the union (signed in 1944). It is now used more generally to refer to the geographic, economic, and cultural grouping of the three countries.
The Benelux is an economically dynamic and densely populated region, with 5.6% of the European population (29.55 million residents) and 7.9% of the joint EU GDP (€36,000/resident) on 1.7% of the whole surface of the EU. Currently 37% of the total number of EU frontier workers work in the Benelux and surrounding areas. 35,000 Belgian citizens work in Luxembourg, while 37,000 Belgian citizens cross the border to work in the Netherlands each day. In addition, 12,000 Dutch and close to a thousand Luxembourg residents work in Belgium.
The main institutions of the Union are the Committee of Ministers, the Council of the Union, the General Secretariat, the Interparliamentary Consultative Council and the Benelux Court of Justice, while the Benelux Office for Intellectual Property covers the same territory but is not part of the Benelux Union.
The Benelux General Secretariat is located in Brussels. It is the central platform of the Benelux Union cooperation. It handles the secretariat of the Committee of Ministers, the Council of Benelux Union and the sundry committees and working parties. The General Secretariat provides day-to-day support for the Benelux cooperation on the substantive, procedural, diplomatic and logistical levels. The Secretary-General is Frans Weekers from the Netherlands and there are two deputies: Deputy Secretary-General Michel-Etienne Tilemans from Belgium and Deputy Secretary-General Jean-Claude Meyer from Luxembourg.
The presidency of the Benelux is held in turn by the three countries for a period of one year. The Netherlands holds the presidency for 2023.
In 1944, exiled representatives of the three countries signed the London Customs Convention, the treaty that established the Benelux Customs Union. Ratified in 1947, the treaty was in force from 1948 until it was superseded by the Benelux Economic Union. The initial form of economic cooperation expanded steadily over time, leading to the signing of the treaty establishing the Benelux Economic Union (Benelux Economische Unie, Union Économique Benelux) on 3 February 1958 in The Hague, which came into force on 1 November 1960. Initially, the purpose of cooperation among the three partners was to put an end to customs barriers at their borders and ensure free movement of persons, capital, services, and goods between the three countries. This treaty was the first example of international economic integration in Europe since the Second World War.
The three countries therefore foreshadowed and provided the model for future European integration, such as the European Coal and Steel Community, the European Economic Community (EEC), and the European Community–European Union (EC–EU). The three partners also launched the Schengen process, which came into operation in 1985. Benelux cooperation has been constantly adapted and now goes much further than mere economic cooperation, extending to new and topical policy areas connected with security, sustainable development, and the economy.
In 1965, the treaty establishing a Benelux Court of Justice was signed. It entered into force in 1974. The court, composed of judges from the highest courts of the three states, has to guarantee the uniform interpretation of common legal rules. This international judicial institution is located in Luxembourg.
The 1958 Treaty between the Benelux countries establishing the Benelux Economic Union was limited to a period of 50 years. During the following years, and even more so after the creation of the European Union, the Benelux cooperation focused on developing other fields of activity within a constantly changing international context.
At the end of the 50 years, the governments of the three Benelux countries decided to renew the agreement, taking into account the new aspects of the Benelux-cooperation – such as security – and the new federal government structure of Belgium. The original establishing treaty, set to expire in 2010, was replaced by a new legal framework (called the Treaty revising the Treaty establishing the Benelux Economic Union), which was signed on 17 June 2008.
The new treaty has no set time limit and the name of the Benelux Economic Union changed to Benelux Union to reflect the broad scope of the union. The main objectives of the treaty are the continuation and enlargement of the cooperation between the three member states within a larger European context. The renewed treaty explicitly foresees the possibility that the Benelux countries will cooperate with other European member states or with regional cooperation structures. The new Benelux cooperation focuses on three main topics: internal market and economic union; sustainability; and justice and internal affairs. The number of structures in the renewed Treaty has been reduced, simplifying the organisation.
Benelux seeks region-to-region cooperation, be it with France and Germany (North Rhine-Westphalia) or beyond with the Baltic States, the Nordic Council, the Visegrad countries, or even further. In 2018 a renewed political declaration was adopted between Benelux and North Rhine-Westphalia to give cooperation a further impetus.
The Benelux is particularly active in the field of intellectual property. The three countries established a Benelux Trademarks Office and a Benelux Designs Office, both situated in The Hague. In 2005, they concluded a treaty establishing the Benelux Office for Intellectual Property, which replaced both offices upon its entry into force on 1 September 2006. This organisation is the official body for the registration of trademarks and designs in the Benelux. In addition, it offers the possibility to formally record the existence of ideas, concepts, designs, prototypes and the like.
Some examples of recent Benelux initiatives include: automatic level recognition of diplomas and degrees within the Benelux for bachelor's and master's programs in 2015, and for all other degrees in 2018; common road inspections in 2014; a Benelux pilot with digital consignment notes (e-CMR) in 2017; and a new Benelux Treaty on Police Cooperation in 2018, providing for direct access to each other's police databases and population registers within the limits of national legislation, and allowing some police forces to cross borders in certain situations. The Benelux is also committed to working together on adaptation to climate change. A joint political declaration in July 2020 called on the European Commission to prioritise cycling in European climate policy and sustainable transport strategies, to co-finance the construction of cycling infrastructure, and to provide funds to stimulate cycling policy.
On 5 June 2018 the Benelux Treaty marked its 60th anniversary. In 2018, a Benelux Youth Parliament was created.
In addition to cooperation based on a Treaty, there is also political cooperation in the Benelux context, including summits of the Benelux government leaders. In 2019 a Benelux summit was held in Luxembourg. In 2020, under the Dutch presidency, a Benelux summit of the prime ministers was held online on 7 October, owing to the COVID-19 pandemic.
As of 1 January 2017, a new arrangement for NATO Air Policing started for the airspace of Belgium, the Netherlands and Luxembourg (Benelux). The Belgian Air Component and the Royal Netherlands Air Force take four-month turns to ensure that Quick Reaction Alert (QRA) fighter jets are available at all times to be launched under NATO control.
The Benelux countries also work together in the so-called Pentalateral Energy Forum, a regional cooperation group formed of five members—the Benelux states, France, Germany, Austria, and Switzerland. The forum was formed on 6 June 2007, and the ministers for energy from the member countries represent a total of 200 million residents and 40% of the European electricity network.
In 2017 the members of the Benelux, the Baltic Assembly, three members of the Nordic Council (Sweden, Denmark and Finland), and other EU member states sought to increase cooperation in the Digital Single Market, as well as discussing social matters, the Economic and Monetary Union of the European Union, immigration and defence cooperation. Foreign relations in the wake of Russia's annexation of Crimea and the 2017 Turkish constitutional referendum were also on the agenda.
Since 2008 the Benelux Union has worked together with the German Land (state) of North Rhine-Westphalia.
In 2018 the Benelux Union signed a declaration with France to strengthen cross-border cooperation.
Under the 2008 treaty there are five Benelux institutions: the Benelux Committee of Ministers, the Benelux Council, the Benelux Parliament, the Benelux Court of Justice, and the Benelux Secretariat General. Besides these five institutions, the Benelux Organisation for Intellectual Property is also an independent organisation.
Benelux Committee of Ministers:
The Committee of Ministers is the supreme decision-making body of the Benelux. It includes at least one representative at ministerial level from each of the three countries. Its composition varies according to its agenda. The ministers determine the orientations and priorities of Benelux cooperation. The presidency of the Committee rotates between the three countries on an annual basis.
Benelux Council:
The council is composed of senior officials from the relevant ministries. Its composition varies according to its agenda. The council's main task is to prepare the dossiers for the ministers.
Benelux InterParliamentary Consultative Council: The Benelux Parliament (officially referred to as an "Interparliamentary Consultative Council") was created in 1955. This parliamentary assembly is composed of 49 members from the respective national parliaments (21 members of the Dutch parliament, 21 members of the Belgian national and regional parliaments, and 7 members of the Luxembourg parliament). Its members inform and advise their respective governments on all Benelux matters. On 20 January 2015, the governments of the three countries, including, as far as Belgium is concerned, the community and regional governments, signed in Brussels the Treaty of the Benelux Interparliamentary Assembly. This treaty entered into force on 1 August 2019. This superseded the 1955 Convention on the Consultative Interparliamentary Council for the Benelux. The official name has been largely obsolete in daily practice: both internally in the Benelux and in external references, the name Benelux Parliament has been used de facto for a number of years now.
Benelux Court of Justice:
The Benelux Court of Justice is an international court. Its mission is to promote uniformity in the application of Benelux legislation. When faced with difficulty interpreting a common Benelux legal rule, national courts must seek an interpretive ruling from the Benelux Court, which subsequently renders a binding decision. The members of the Court are appointed from among the judges of the 'Cour de cassation' of Belgium, the 'Hoge Raad' of the Netherlands and the 'Cour de cassation' of Luxembourg.
Benelux General Secretariat:
The General Secretariat, which is based in Brussels, forms the cooperation platform of the Benelux Union. It acts as the secretariat of the Committee of Ministers, the council and various commissions and working groups. The General Secretariat has years of expertise in the area of Benelux cooperation and is familiar with the policy agreements and differences between the three countries. Building on what has already been achieved, the General Secretariat puts its knowledge, network and experience at the service of partners and stakeholders who endorse its mission. It initiates, supports and monitors cooperation results in the areas of economy, sustainability and security.
Benelux works together on the basis of an annual plan embedded in a four-year joint work programme.
The Benelux Union involves intergovernmental cooperation.
The Treaty establishing the Benelux Union explicitly provides that the Benelux Committee of Ministers can resort to four legal instruments (art. 6, paragraph 2, under a), f), g) and h)):
1. Decisions
Decisions are legally binding regulations for implementing the Treaty establishing the Benelux Union or other Benelux treaties.
Their legally binding force applies to the Benelux states (and their sub-state entities), which have to implement them. However, they have no direct effect on individual citizens or companies (notwithstanding any indirect protection of their rights based on such decisions as a source of international law). Only national provisions implementing a decision can directly create rights and obligations for citizens or companies.
2. Agreements
The Committee of Ministers can draw up agreements, which are then submitted to the Benelux states (and/or their sub-state entities) for signature and subsequent parliamentary ratification. These agreements can deal with any subject matter, including policy areas not yet covered by cooperation in the framework of the Benelux Union.
These are in fact traditional treaties, with the same direct legally binding force on both authorities and citizens or companies. The negotiations do, however, take place in the established context of the Benelux working groups and institutions, rather than on an ad hoc basis.
3. Recommendations
Recommendations are non-binding orientations, adopted at ministerial level, which underpin the functioning of the Benelux Union. These (policy) orientations are not legally binding, but given their adoption at the highest political level and their legal basis vested directly in the Treaty, they entail a strong moral obligation for any authority concerned in the Benelux countries.
4. Directives
Directives of the Committee of Ministers are merely inter-institutional instructions to the Benelux Council and/or the Secretariat-General, on which they are binding. This instrument has so far been used only occasionally, mainly in order to organize certain activities within a Benelux working group or to give them impetus.
All four instruments require the unanimous approval of the members of the Committee of Ministers (and, in the case of agreements, subsequent signature and ratification at national level).
https://en.wikipedia.org/wiki/Benelux
4,171 | Boston Herald | The Boston Herald is an American daily newspaper whose primary market is Boston, Massachusetts, and its surrounding area. It was founded in 1846 and is one of the oldest daily newspapers in the United States. It has been awarded eight Pulitzer Prizes in its history, including four for editorial writing and three for photography before it was converted to tabloid format in 1981. The Herald was named one of the "10 Newspapers That 'Do It Right'" in 2012 by Editor & Publisher.
In December 2017, the Herald filed for bankruptcy. On February 14, 2018, Digital First Media successfully bid $11.9 million to purchase the company in a bankruptcy auction; the acquisition was completed on March 19, 2018. As of August 2018, the paper had approximately 110 total employees, compared to about 225 before the sale.
The Herald's history traces back through two lineages, the Daily Advertiser and the old Boston Herald, and two media moguls, William Randolph Hearst and Rupert Murdoch.
The original Boston Herald was founded in 1846 by a group of Boston printers jointly under the name of John A. French & Company. The paper was published as a single two-sided sheet, selling for one cent. Its first editor, William O. Eaton, just 22 years old, said "The Herald will be independent in politics and religion; liberal, industrious, enterprising, critically concerned with literary and dramatic matters, and diligent in its mission to report and analyze the news, local and global."
In 1847, the Boston Herald absorbed the Boston American Eagle and the Boston Daily Times.
In October 1917, John H. Higgins, the publisher and treasurer of the Boston Herald, bought out its next-door neighbor, The Boston Journal, and created The Boston Herald and Boston Journal.
Even earlier than the Herald, the weekly American Traveler was founded in 1825 as a bulletin for stagecoach listings.
The Boston Evening Traveler was founded in 1845. The Boston Evening Traveler was the successor to the weekly American Traveler and the semi-weekly Boston Traveler. In 1912, the Herald acquired the Traveler, continuing to publish both under their own names. For many years, the newspaper was controlled by many of the investors in United Shoe Machinery Corporation. After a newspaper strike in 1967, Herald-Traveler Corp. suspended the afternoon Traveler and absorbed the evening edition into the Herald to create the Boston Herald Traveler.
The Boston Daily Advertiser was established in 1813 in Boston by Nathan Hale. The paper grew to prominence throughout the 19th century, taking over other Boston-area papers. In 1832 The Advertiser took over control of The Boston Patriot, and in 1840 it absorbed The Boston Gazette. The paper was purchased by William Randolph Hearst in 1917. In 1920 the Advertiser was merged with The Boston Record; the combined newspaper was initially called the Boston Advertiser, but when it became an illustrated tabloid in 1921 it was renamed The Boston American. Hearst Corp. continued using the name Advertiser for its Sunday paper until the early 1970s.
On September 3, 1884, The Boston Evening Record was started by the Boston Advertiser as a campaign newspaper. The Record was so popular that it was made a permanent publication.
In 1904, William Randolph Hearst began publishing his own newspaper in Boston called The American. Hearst ultimately purchased the Daily Advertiser in 1917. By 1938, the Daily Advertiser had changed to the Daily Record, and The American had become the Sunday Advertiser. A third paper owned by Hearst, called the Afternoon Record and later renamed the Evening American, merged in 1961 with the Daily Record to form the Record American. The Sunday Advertiser and Record American would ultimately be merged in 1972 into The Boston Herald Traveler, a line of newspapers that stretched back to the old Boston Herald.
In 1946, Herald-Traveler Corporation acquired Boston radio station WHDH. Two years later, WHDH-FM was licensed, and on November 26, 1957, WHDH-TV made its debut as an ABC affiliate on channel 5. In 1961, WHDH-TV's affiliation switched to CBS. For years afterward, the television station operated under temporary authority from the Federal Communications Commission. Controversy arose over luncheon meetings the newspaper's chief executive purportedly had with John C. Doerfer, chairman of the FCC between 1957 and 1960, who served as a commissioner during the original licensing process. (Some Boston broadcast historians accuse The Boston Globe of being covertly behind the proceeding as a sort of vendetta for not getting a license—The Herald Traveler was Republican in sympathies, and the Globe then had a firm policy of not endorsing political candidates, although Doerfer's history at the FCC also lent suspicions.) The FCC ordered comparative hearings, and in 1969 a competing applicant, Boston Broadcasters, Inc., was granted a construction permit to replace WHDH-TV on channel 5. Herald-Traveler Corporation fought the decision in court—by this time, revenues from channel 5 were all but keeping the newspaper afloat—but lost its final appeal. On March 19, 1972, WHDH-TV was forced to surrender channel 5 to the new WCVB-TV.
Without a television station to subsidize the newspaper, the Herald Traveler was no longer able to remain in business, and the newspaper was sold to Hearst Corporation, which published the rival all-day newspaper, the Record American. The two papers were merged to become an all-day paper called the Boston Herald Traveler and Record American in the morning and Record-American and Boston Herald Traveler in the afternoon. The first editions published under the new combined name were those of June 19, 1972. The afternoon edition was soon dropped and the unwieldy name shortened to Boston Herald American, with the Sunday edition called the Sunday Herald Advertiser. The Herald American was printed in broadsheet format, and failed to target a particular readership; where the Record American had been a typical city tabloid, the Herald Traveler was a Republican paper.
The Herald American converted to tabloid format in September 1981, but Hearst faced steep declines in circulation and advertising. The company announced it would close the Herald American—making Boston a one-newspaper town—on December 3, 1982. When the deadline came, Australian-born media baron Rupert Murdoch was negotiating to buy the paper and save it. He closed on the deal after 30 hours of talks with Hearst and newspaper unions—and five hours after Hearst had sent out notices to newsroom employees telling them they were terminated. The newspaper announced its own survival the next day with a full-page headline: "You Bet We're Alive!"
Murdoch changed the paper's name back to the Boston Herald. The Herald continued to grow, expanding its coverage and increasing its circulation until 2001, when nearly all newspapers fell victim to declining circulations and revenue.
In February 1994, Murdoch's News Corporation was forced to sell the paper so that its subsidiary Fox Television Stations could legally consummate its purchase of Fox affiliate WFXT (Channel 25), because Massachusetts Senator Ted Kennedy had included language in an appropriations bill barring one company from owning a newspaper and a television station in the same market. Patrick J. Purcell, who was the publisher of the Boston Herald and a former News Corporation executive, purchased the Herald and established it as an independent newspaper. Several years later, Purcell gave the Herald a suburban presence it had never had by purchasing the money-losing Community Newspaper Company from Fidelity Investments. Although the companies merged under the banner of Herald Media, Inc., the suburban papers maintained their distinct editorial and marketing identity.
After years of operating profits at Community Newspaper and losses at the Herald, Purcell in 2006 sold the suburban chain to newspaper conglomerate Liberty Group Publishing of Illinois, which soon after changed its name to GateHouse Media. The deal, which also saw GateHouse acquire The Patriot Ledger and The Enterprise of south suburban Quincy and Brockton, respectively, netted $225 million for Purcell, who vowed to use the funds to clear the Herald's debt and reinvest in the paper.
On August 5, 2013, the Herald launched an internet radio station named Boston Herald Radio, which features radio shows by much of the Herald staff. The station's morning lineup is simulcast on 830 AM WCRN from 10 am to noon Eastern time.
In December 2017, the Herald announced plans to sell itself to GateHouse Media after filing for chapter 11 bankruptcy protection. The deal was scheduled to be completed by February 2018, with the new company streamlining and having layoffs in coming months. However, in early January 2018, another potential buyer, Revolution Capital Group of Los Angeles, filed a bid with the federal bankruptcy court; the Herald reported in a press release that "the court requires BHI [Boston Herald, Inc.] to hold an auction to allow all potential buyers an opportunity to submit competing offers."
In February 2018, acquisition of the Herald by Digital First Media for almost $12 million was approved by the bankruptcy court judge in Delaware. The new owner, DFM, said they would be keeping 175 of the approximately 240 employees the Herald had when it sought bankruptcy protection in December 2017. The acquisition was completed on March 19, 2018.
The Herald and parent DFM were criticized for ending the ten-year printing contract with competitor The Boston Globe, for moving printing from Taunton, Massachusetts, to Rhode Island, and for its "dehumanizing cost-cutting efforts" in personnel. In June 2018, some design and advertising layoffs were expected, with work moving to a sister paper, The Denver Post. The "consolidation" took effect in August, with nine jobs eliminated.
In late August 2018, it was announced that the Herald would move its offices from Boston's Seaport District to Braintree, Massachusetts, in late November or early December.
On October 27, 2020, the Boston Herald endorsed Donald Trump in the 2020 U.S. presidential election.
https://en.wikipedia.org/wiki/Boston_Herald
4,173 | Babe Ruth | George Herman "Babe" Ruth (February 6, 1895 – August 16, 1948) was an American professional baseball player whose career in Major League Baseball (MLB) spanned 22 seasons, from 1914 through 1935. Nicknamed "the Bambino" and "the Sultan of Swat", he began his MLB career as a star left-handed pitcher for the Boston Red Sox, but achieved his greatest fame as a slugging outfielder for the New York Yankees. Ruth is regarded as one of the greatest sports heroes in American culture and is considered by many to be the greatest baseball player of all time. In 1936, Ruth was elected into the Baseball Hall of Fame as one of its "first five" inaugural members.
At age seven, Ruth was sent to St. Mary's Industrial School for Boys, a reformatory where he was mentored by Brother Matthias Boutlier of the Xaverian Brothers, the school's disciplinarian and a capable baseball player. In 1914, Ruth was signed to play minor league baseball for the Baltimore Orioles but was soon sold to the Red Sox. By 1916, he had built a reputation as an outstanding pitcher who sometimes hit long home runs, a feat unusual for any player in the dead-ball era. Although Ruth twice won 23 games in a season as a pitcher and was a member of three World Series championship teams with the Red Sox, he wanted to play every day and was allowed to convert to an outfielder. With regular playing time, he broke the MLB single-season home run record in 1919 with 29.
After that season, Red Sox owner Harry Frazee sold Ruth to the Yankees amid controversy. The sale fueled Boston's subsequent 86-year championship drought and popularized the "Curse of the Bambino" superstition. In his 15 years with the Yankees, Ruth helped the team win seven American League (AL) pennants and four World Series championships. His big swing led to escalating home run totals that not only drew fans to the ballpark and boosted the sport's popularity but also helped usher in baseball's live-ball era, in which the sport evolved from a low-scoring game of strategy to one where the home run was a major factor. As part of the Yankees' vaunted "Murderers' Row" lineup of 1927, Ruth hit 60 home runs, which extended his own MLB single-season record by a single home run. Ruth's last season with the Yankees was 1934; he retired from the game the following year, after a short stint with the Boston Braves. In his career, he led the American League in home runs twelve times.
During Ruth's career, he was the target of intense press and public attention for his baseball exploits and off-field penchants for drinking and womanizing. After his retirement as a player, he was denied the opportunity to manage a major league club, most likely because of poor behavior during parts of his playing career. In his final years, Ruth made many public appearances, especially in support of American efforts in World War II. In 1946, he became ill with nasopharyngeal cancer and died from the disease two years later. Ruth remains a major figure in American culture.
George Herman Ruth Jr. was born on February 6, 1895, at 216 Emory Street in the Pigtown section of Baltimore, Maryland. Ruth's parents, Katherine (née Schamberger) and George Herman Ruth Sr., were both of German ancestry. According to the 1880 census, his parents were both born in Maryland. His paternal grandparents were from Prussia and Hanover, Germany. Ruth Sr. worked a series of jobs that included lightning rod salesman and streetcar operator. The elder Ruth then became a counterman in a family-owned combination grocery and saloon business on Frederick Street. George Ruth Jr. was born in the house of his maternal grandfather, Pius Schamberger, a German immigrant and trade unionist. Only one of young Ruth's seven siblings, his younger sister Mamie, survived infancy.
Many details of Ruth's childhood are unknown, including the date of his parents' marriage. As a child, Ruth spoke German. When Ruth was a toddler, the family moved to 339 South Woodyear Street, not far from the rail yards; by the time he was six years old, his father had a saloon with an upstairs apartment at 426 West Camden Street. Details are equally scanty about why Ruth was sent at the age of seven to St. Mary's Industrial School for Boys, a reformatory and orphanage. However, according to Julia Ruth Stevens' account in 1999, because George Sr. was a saloon owner in Baltimore and had given Ruth little supervision growing up, the boy became a delinquent, and he was sent to St. Mary's because George Sr. had run out of ideas to discipline and mentor his son. As an adult, Ruth admitted that as a youth he ran the streets, rarely attended school, and drank beer when his father was not looking. Some accounts say that following a violent incident at his father's saloon, the city authorities decided that this environment was unsuitable for a small child. Ruth entered St. Mary's on June 13, 1902. He was recorded as "incorrigible" and spent much of the next 12 years there.
Although St. Mary's boys received an education, students were also expected to learn work skills and help operate the school, particularly once the boys turned 12. Ruth became a shirtmaker and was also proficient as a carpenter. He would adjust his own shirt collars, rather than having a tailor do so, even during his well-paid baseball career. The boys, aged 5 to 21, did most of the work around the facility, from cooking to shoemaking, and renovated St. Mary's in 1912. The food was simple, and the Xaverian Brothers who ran the school insisted on strict discipline; corporal punishment was common. Ruth's nickname there was "Niggerlips", as he had large facial features and was darker than most boys at the all-white reformatory.
Ruth was sometimes allowed to rejoin his family or was placed at St. James's Home, a supervised residence with work in the community, but he was always returned to St. Mary's. He was rarely visited by his family; his mother died when he was 12 and, by some accounts, he was permitted to leave St. Mary's only to attend the funeral. How Ruth came to play baseball there is uncertain: according to one account, his placement at St. Mary's was due in part to repeatedly breaking Baltimore's windows with long hits while playing street ball; by another, he was told to join a team on his first day at St. Mary's by the school's athletic director, Brother Herman, becoming a catcher even though left-handers rarely play that position. During his time there he also played third base and shortstop, again unusual for a left-hander, and was forced to wear mitts and gloves made for right-handers. He was encouraged in his pursuits by the school's Prefect of Discipline, Brother Matthias Boutlier, a native of Nova Scotia. A large man, Brother Matthias was greatly respected by the boys both for his strength and for his fairness. For the rest of his life, Ruth would praise Brother Matthias, and his running and hitting styles closely resembled his teacher's. Ruth stated, "I think I was born as a hitter the first day I ever saw him hit a baseball." The older man became a mentor and role model to Ruth; biographer Robert W. Creamer commented on the closeness between the two:
Ruth revered Brother Matthias ... which is remarkable, considering that Matthias was in charge of making boys behave and that Ruth was one of the great natural misbehavers of all time. ... George Ruth caught Brother Matthias' attention early, and the calm, considerable attention the big man gave the young hellraiser from the waterfront struck a spark of response in the boy's soul ... [that may have] blunted a few of the more savage teeth in the gross man whom I have heard at least a half-dozen of his baseball contemporaries describe with admiring awe and wonder as "an animal."
The school's influence remained with Ruth in other ways. He was a lifelong Catholic who would sometimes attend Mass after carousing all night, and he became a well-known member of the Knights of Columbus. He would visit orphanages, schools, and hospitals throughout his life, often avoiding publicity. He was generous to St. Mary's as he became famous and rich, donating money and his presence at fundraisers, and spending $5,000 to buy Brother Matthias a Cadillac in 1926—subsequently replacing it when it was destroyed in an accident. Nevertheless, his biographer Leigh Montville suggests that many of the off-the-field excesses of Ruth's career were driven by the deprivations of his time at St. Mary's.
Most of the boys at St. Mary's played baseball in organized leagues at different levels of proficiency. Ruth later estimated that he played 200 games a year as he steadily climbed the ladder of success. Although he played all positions at one time or another, he gained stardom as a pitcher. According to Brother Matthias, Ruth was standing to one side laughing at the bumbling pitching efforts of fellow students, and Matthias told him to go in and see if he could do better. Ruth had become the best pitcher at St. Mary's, and when he was 18 in 1913, he was allowed to leave the premises to play weekend games on teams that were drawn from the community. He was mentioned in several newspaper articles, for both his pitching prowess and ability to hit long home runs.
In early 1914, Ruth signed a professional baseball contract with Jack Dunn, who owned and managed the minor-league Baltimore Orioles, an International League team. The circumstances of Ruth's signing are not known with certainty. By some accounts, Dunn was urged to attend a game between an all-star team from St. Mary's and one from another Xaverian facility, Mount St. Mary's College. Some versions have Ruth running away before the eagerly awaited game, to return in time to be punished, and then pitching St. Mary's to victory as Dunn watched. Others have Washington Senators pitcher Joe Engel, a Mount St. Mary's graduate, pitching in an alumni game after watching a preliminary contest between the college's freshmen and a team from St. Mary's, including Ruth. Engel watched Ruth play, then told Dunn about him at a chance meeting in Washington. Ruth, in his autobiography, stated only that he worked out for Dunn for a half hour, and was signed. According to biographer Kal Wagenheim, there were legal difficulties to be straightened out as Ruth was supposed to remain at the school until he turned 21, though SportsCentury stated in a documentary that Ruth had already been discharged from St. Mary's when he turned 19, and earned a monthly salary of $100.
The train journey to spring training in Fayetteville, North Carolina, in early March was likely Ruth's first outside the Baltimore area. The rookie ballplayer was the subject of various pranks by veteran players, who were probably also the source of his famous nickname. There are various accounts of how Ruth came to be called "Babe", but most center on his being referred to as "Dunnie's babe" (or some variant). SportsCentury reported that his nickname was gained because he was the new "darling" or "project" of Dunn, not only because of Ruth's raw talent, but also because of his lack of knowledge of the proper etiquette of eating out in a restaurant, being in a hotel, or being on a train. "Babe" was, at that time, a common nickname in baseball, with perhaps the most famous to that point being Pittsburgh Pirates pitcher and 1909 World Series hero Babe Adams, who appeared younger than his actual age.
Ruth made his first appearance as a professional ballplayer in an inter-squad game on March 7, 1914. He played shortstop and pitched the last two innings of a 15–9 victory. In his second at-bat, Ruth hit a long home run to right field; the blast was locally reported to be longer than a legendary shot hit by Jim Thorpe in Fayetteville. Ruth made his first appearance against a team in organized baseball in an exhibition game versus the major-league Philadelphia Phillies. Ruth pitched the middle three innings and gave up two runs in the fourth, but then settled down and pitched a scoreless fifth and sixth innings. In a game against the Phillies the following afternoon, Ruth entered during the sixth inning and did not allow a run the rest of the way. The Orioles scored seven runs in the bottom of the eighth inning to overcome a 6–0 deficit, and Ruth was the winning pitcher.
Once the regular season began, Ruth was a star pitcher who was also dangerous at the plate. The team performed well, yet received almost no attention from the Baltimore press. A third major league, the Federal League, had begun play, and the local franchise, the Baltimore Terrapins, restored that city to the major leagues for the first time since 1902. Few fans visited Oriole Park, where Ruth and his teammates labored in relative obscurity. Ruth may have been offered a bonus and a larger salary to jump to the Terrapins; when rumors to that effect swept Baltimore, giving Ruth the most publicity he had experienced to date, a Terrapins official denied it, stating it was their policy not to sign players under contract to Dunn.
The competition from the Terrapins caused Dunn to sustain large losses. Although by late June the Orioles were in first place, having won over two-thirds of their games, the paid attendance dropped as low as 150. Dunn explored a possible move by the Orioles to Richmond, Virginia, as well as the sale of a minority interest in the club. These possibilities fell through, leaving Dunn with little choice other than to sell his best players to major league teams to raise money. He offered Ruth to the reigning World Series champions, Connie Mack's Philadelphia Athletics, but Mack had his own financial problems. The Cincinnati Reds and New York Giants expressed interest in Ruth, but Dunn sold his contract, along with those of pitchers Ernie Shore and Ben Egan, to the Boston Red Sox of the American League (AL) on July 4. The sale price was announced as $25,000, but other reports put the amount at half that, or at $8,500 plus the cancellation of a $3,000 loan. Ruth remained with the Orioles for several days while the Red Sox completed a road trip, and reported to the team in Boston on July 11.
On July 11, 1914, Ruth arrived in Boston with Egan and Shore. Ruth later told the story of how that morning he had met Helen Woodford, who would become his first wife. She was a 16-year-old waitress at Landers Coffee Shop, and Ruth related that she served him when he had breakfast there. Other stories, though, suggested that the meeting occurred on another day, and perhaps under other circumstances. Regardless of when he began to woo his first wife, he won his first game as a pitcher for the Red Sox that afternoon, 4–3, over the Cleveland Naps. His catcher was Bill Carrigan, who was also the Red Sox manager. Shore was given a start by Carrigan the next day; he won that and his second start and thereafter was pitched regularly. Ruth lost his second start, and was thereafter little used. In his major league debut as a batter, Ruth went 0-for-2 against left-hander Willie Mitchell, striking out in his first at bat before being removed for a pinch hitter in the seventh inning. Ruth was not much noticed by the fans, as Bostonians watched the Red Sox's crosstown rivals, the Braves, begin a legendary comeback that would take them from last place on the Fourth of July to the 1914 World Series championship.
Egan was traded to Cleveland after two weeks on the Boston roster. During his time with the Red Sox, he kept an eye on the inexperienced Ruth, much as Dunn had in Baltimore. When he was traded, no one took his place as supervisor. Ruth's new teammates considered him brash and would have preferred him as a rookie to remain quiet and inconspicuous. When Ruth insisted on taking batting practice despite being both a rookie who did not play regularly and a pitcher, he arrived to find his bats sawed in half. His teammates nicknamed him "the Big Baboon", a name the swarthy Ruth, who had disliked the nickname "Niggerlips" at St. Mary's, detested. Ruth had received a raise on promotion to the major leagues and quickly acquired tastes for fine food, liquor, and women, among other temptations.
Manager Carrigan allowed Ruth to pitch two exhibition games in mid-August. Although Ruth won both against minor-league competition, he was not restored to the pitching rotation. It is uncertain why Carrigan did not give Ruth additional opportunities to pitch. There are legends—filmed for the screen in The Babe Ruth Story (1948)—that the young pitcher had a habit of signaling his intent to throw a curveball by sticking out his tongue slightly, and that he was easy to hit until this changed. Creamer pointed out that it is common for inexperienced pitchers to display such habits, and that the need to break Ruth of his would not constitute a reason not to use him at all. The biographer suggested that Carrigan was unwilling to use Ruth because of the rookie's poor behavior.
On July 30, 1914, Boston owner Joseph Lannin had purchased the minor-league Providence Grays, members of the International League. The Providence team had been owned by several people associated with the Detroit Tigers, including star hitter Ty Cobb, and as part of the transaction, a Providence pitcher was sent to the Tigers. To soothe Providence fans upset at losing a star, Lannin announced that the Red Sox would soon send a replacement to the Grays. This was intended to be Ruth, but his departure for Providence was delayed when Cincinnati Reds owner Garry Herrmann claimed him off waivers. After Lannin wrote to Herrmann explaining that the Red Sox wanted Ruth in Providence so he could develop as a player, and would not release him to a major league club, Herrmann allowed Ruth to be sent to the minors. Carrigan later stated that Ruth was not sent down to Providence to make him a better player, but to help the Grays win the International League pennant (league championship).
Ruth joined the Grays on August 18, 1914. After Dunn's deals, the Baltimore Orioles managed to hold on to first place until August 15, after which they continued to fade, leaving the pennant race between Providence and Rochester. Ruth was deeply impressed by Providence manager "Wild Bill" Donovan, previously a star pitcher with a 25–4 win–loss record for Detroit in 1907; in later years, he credited Donovan with teaching him much about pitching. Ruth was often called upon to pitch, in one stretch starting (and winning) four games in eight days. On September 5 at Maple Leaf Park in Toronto, Ruth pitched a one-hit 9–0 victory, and hit his first professional home run, his only one as a minor leaguer, off Ellis Johnson. Recalled to Boston after Providence finished the season in first place, he pitched and won a game for the Red Sox against the New York Yankees on October 2, getting his first major league hit, a double. Ruth finished the season with a record of 2–1 as a major leaguer and 23–8 in the International League (for Baltimore and Providence). Once the season concluded, Ruth married Helen in Ellicott City, Maryland. Creamer speculated that they did not marry in Baltimore, where the newlyweds boarded with George Ruth Sr., to avoid possible interference from those at St. Mary's—neither bride nor groom was yet of age, and Ruth remained on parole from that institution until his 21st birthday.
In March 1915, Ruth reported to Hot Springs, Arkansas, for his first major league spring training. Despite a relatively successful first season, he was not slated to start regularly for the Red Sox, who already had two "superb" left-handed pitchers, according to Creamer: the established stars Dutch Leonard, who had broken the record for the lowest earned run average (ERA) in a single season; and Ray Collins, a 20-game winner in both 1913 and 1914. Ruth was ineffective in his first start, taking the loss in the third game of the season. Injuries and ineffective pitching by other Boston pitchers gave Ruth another chance, and after some good relief appearances, Carrigan allowed Ruth another start, and he won a rain-shortened seven-inning game. Ten days later, the manager had him start against the New York Yankees at the Polo Grounds. Ruth took a 3–2 lead into the ninth, but lost the game 4–3 in 13 innings. Ruth, hitting ninth as was customary for pitchers, hit a massive home run into the upper deck in right field off Jack Warhop. At the time, home runs were rare in baseball, and Ruth's majestic shot awed the crowd. The winning pitcher, Warhop, would in August 1915 conclude a major league career of eight seasons, undistinguished but for being the first major league pitcher to give up a home run to Babe Ruth.
Carrigan was sufficiently impressed by Ruth's pitching to give him a spot in the starting rotation. Ruth finished the 1915 season 18–8 as a pitcher; as a hitter, he batted .315 and had four home runs. The Red Sox won the AL pennant, but with the pitching staff healthy, Ruth was not called upon to pitch in the 1915 World Series against the Philadelphia Phillies. Boston won in five games. Ruth was used as a pinch hitter in Game Five, but grounded out against Phillies ace Grover Cleveland Alexander. Despite his success as a pitcher, Ruth was acquiring a reputation for long home runs; at Sportsman's Park against the St. Louis Browns, a Ruth home run soared over Grand Avenue, breaking the window of a Chevrolet dealership.
In 1916, attention focused on Ruth's pitching as he engaged in repeated pitching duels with Washington Senators' ace Walter Johnson. The two met five times during the season with Ruth winning four and Johnson one (Ruth had a no decision in Johnson's victory). Two of Ruth's victories were by the score of 1–0, one in a 13-inning game. Of the 1–0 shutout decided without extra innings, AL president Ban Johnson stated, "That was one of the best ball games I have ever seen." For the season, Ruth went 23–12, with a 1.75 ERA and nine shutouts, both of which led the league. Ruth's nine shutouts in 1916 set a league record for left-handers that would remain unmatched until Ron Guidry tied it in 1978. The Red Sox won the pennant and World Series again, this time defeating the Brooklyn Robins (as the Dodgers were then known) in five games. Ruth started and won Game 2, 2–1, in 14 innings. Until another game of that length was played in 2005, this was the longest World Series game, and Ruth's pitching performance is still the longest postseason complete game victory.
Carrigan retired as player and manager after 1916, returning to his native Maine to be a businessman. Ruth, who played under four managers who are in the National Baseball Hall of Fame, always maintained that Carrigan, who is not enshrined there, was the best skipper he ever played for. There were other changes in the Red Sox organization that offseason, as Lannin sold the team to a three-man group headed by New York theatrical promoter Harry Frazee. Jack Barry was hired by Frazee as manager.
Ruth went 24–13 with a 2.01 ERA and six shutouts in 1917, but the Sox finished in second place in the league, nine games behind the Chicago White Sox in the standings. On June 23 at Washington, after home plate umpire 'Brick' Owens called the first four pitches balls, Ruth threw a punch at him and was ejected from the game; he was later suspended for ten days and fined $100. Ernie Shore was called in to relieve Ruth, and was allowed eight warm-up pitches. The runner who had reached base on the walk was caught stealing, and Shore retired all 26 batters he faced to win the game. Shore's feat was listed as a perfect game for many years. In 1991, Major League Baseball's (MLB) Committee on Statistical Accuracy amended it to be listed as a combined no-hitter. In 1917, Ruth was used little as a batter, other than for his plate appearances while pitching, and hit .325 with two home runs.
The United States' entry into World War I occurred at the start of the season and overshadowed baseball. Conscription was introduced in September 1917, and most baseball players in the big leagues were of draft age. This included Barry, who was a player-manager, and who joined the Naval Reserve in an attempt to avoid the draft, only to be called up after the 1917 season. Frazee hired International League President Ed Barrow as Red Sox manager. Barrow had spent the previous 30 years in a variety of baseball jobs, though he never played the game professionally. With the major leagues shorthanded because of the war, Barrow had many holes in the Red Sox lineup to fill.
Ruth also noticed these vacancies in the lineup. He was dissatisfied in the role of a pitcher who appeared every four or five days and wanted to play every day at another position. Barrow used Ruth at first base and in the outfield during the exhibition season, but he restricted him to pitching as the team moved toward Boston and the season opener. At the time, Ruth was possibly the best left-handed pitcher in baseball, and allowing him to play another position was an experiment that could have backfired.
Inexperienced as a manager, Barrow had player Harry Hooper advise him on baseball game strategy. Hooper urged his manager to allow Ruth to play another position when he was not pitching, arguing to Barrow, who had invested in the club, that the crowds were larger on days when Ruth played, as they were attracted by his hitting. In early May, Barrow gave in; Ruth promptly hit home runs in four consecutive games (one an exhibition), the last off Walter Johnson. For the first time in his career (disregarding pinch-hitting appearances), Ruth was assigned a place in the batting order higher than ninth.
Although Barrow predicted that Ruth would beg to return to pitching the first time he experienced a batting slump, that did not occur. Barrow used Ruth primarily as an outfielder in the war-shortened 1918 season. Ruth hit .300, with 11 home runs, enough to secure him a share of the major league home run title with Tilly Walker of the Philadelphia Athletics. He was still occasionally used as a pitcher, and had a 13–7 record with a 2.22 ERA.
In 1918, the Red Sox won their third pennant in four years and faced the Chicago Cubs in the World Series, which began on September 5, the earliest start in Series history. The season had been shortened because the government had ruled that baseball players who were eligible for the military would have to be inducted or work in critical war industries, such as armaments plants. Ruth pitched and won Game One for the Red Sox, a 1–0 shutout. Before Game Four, Ruth injured his left hand in a fight but pitched anyway. He gave up seven hits and six walks, but was helped by outstanding fielding behind him and by his own batting efforts, as a fourth-inning triple by Ruth gave his team a 2–0 lead. The Cubs tied the game in the eighth inning, but the Red Sox scored in the bottom of that inning to regain the lead, 3–2. After Ruth gave up a hit and a walk to start the ninth inning, he was relieved on the mound by Joe Bush. To keep Ruth and his bat in the game, he was sent to play left field. Bush retired the side to give Ruth his second win of the Series, and the third and last World Series pitching victory of his career, against no defeats, in three pitching appearances. Ruth's effort gave his team a three-games-to-one lead, and two days later the Red Sox won their third Series in four years, four games to two. Before allowing the Cubs to score in Game Four, Ruth pitched 29⅔ consecutive scoreless innings, a World Series record that stood for more than 40 years until Whitey Ford broke it in 1961, after Ruth's death. Ruth was prouder of that record than he was of any of his batting feats.
With the World Series over, Ruth gained exemption from the war draft by accepting a nominal position with a Pennsylvania steel mill. Many industrial establishments took pride in their baseball teams and sought to hire major leaguers. The end of the war in November set Ruth free to play baseball without such contrivances.
During the 1919 season, Ruth was used as a pitcher in only 17 of his 130 games and compiled a 9–5 record. Barrow used him as a pitcher mostly in the early part of the season, when the Red Sox manager still had hopes of a second consecutive pennant. By late June, the Red Sox were clearly out of the race, and Barrow had no objection to Ruth concentrating on his hitting, if only because it drew people to the ballpark. Ruth had hit a home run against the Yankees on Opening Day, and another during a month-long batting slump that soon followed. Relieved of his pitching duties, Ruth began an unprecedented spell of slugging home runs, which gave him widespread public and press attention. Even his failures were seen as majestic—one sportswriter said, "When Ruth misses a swipe at the ball, the stands quiver."
Two home runs by Ruth on July 5, and one in each of two consecutive games a week later, raised his season total to 11, tying his career best from 1918. The first record to fall was the AL single-season mark of 16, set by Ralph "Socks" Seybold in 1902. Ruth matched that on July 29, then pulled ahead toward the major league record of 25, set by Buck Freeman in 1899. By the time Ruth reached this in early September, writers had discovered that Ned Williamson of the 1884 Chicago White Stockings had hit 27—though in a ballpark where the distance to right field was only 215 feet (66 m). On September 20, "Babe Ruth Day" at Fenway Park, Ruth won the game with a home run in the bottom of the ninth inning, tying Williamson. He broke the record four days later against the Yankees at the Polo Grounds, and hit one more against the Senators to finish with 29. The home run at Washington made Ruth the first major league player to hit a home run at all eight ballparks in his league. In spite of Ruth's hitting heroics, the Red Sox finished sixth, 20½ games behind the league champion White Sox. In his six seasons with Boston, he won 89 games and recorded a 2.19 ERA. He had a four-year stretch where he was second in the AL in wins and ERA behind Walter Johnson, and Ruth had a winning record against Johnson in head-to-head matchups.
As an out-of-towner from New York City, Frazee had been regarded with suspicion by Boston's sportswriters and baseball fans when he bought the team. He won them over with success on the field and a willingness to build the Red Sox by purchasing or trading for players. He offered the Senators $60,000 for Walter Johnson, but Washington owner Clark Griffith was unwilling. Even so, Frazee was successful in bringing other players to Boston, especially as replacements for players in the military. This willingness to spend for players helped the Red Sox secure the 1918 title. The 1919 season saw record-breaking attendance, and Ruth's home runs for Boston made him a national sensation. In March 1919 Ruth was reported as having accepted a three-year contract for a total of $27,000, after protracted negotiations. Nevertheless, on December 26, 1919, Frazee sold Ruth's contract to the New York Yankees.
Not all the circumstances concerning the sale are known, but brewer and former congressman Jacob Ruppert, the New York team's principal owner, reportedly asked Yankee manager Miller Huggins what the team needed to be successful. "Get Ruth from Boston", Huggins supposedly replied, noting that Frazee was perennially in need of money to finance his theatrical productions. In any event, there was precedent for the Ruth transaction: when Boston pitcher Carl Mays left the Red Sox in a 1919 dispute, Frazee had settled the matter by selling Mays to the Yankees, though over the opposition of AL President Johnson.
According to one of Ruth's biographers, Jim Reisler, "why Frazee needed cash in 1919—and large infusions of it quickly—is still, more than 80 years later, a bit of a mystery". The often-told story is that Frazee needed money to finance the musical No, No, Nanette, which was a Broadway hit and brought Frazee financial security. That play did not open until 1925, however, by which time Frazee had sold the Red Sox. Still, the story may be true in essence: No, No, Nanette was based on a Frazee-produced play, My Lady Friends, which opened in 1919.
There were other financial pressures on Frazee, despite his team's success. Ruth, fully aware of baseball's popularity and his role in it, wanted to renegotiate his contract, signed before the 1919 season for $10,000 per year through 1921. He demanded that his salary be doubled, or he would sit out the season and cash in on his popularity through other ventures. Ruth's salary demands were causing other players to ask for more money. Additionally, Frazee still owed Lannin as much as $125,000 from the purchase of the club.
Ruppert and his co-owner, Colonel Tillinghast Huston, were both wealthy, and they had aggressively purchased and traded for players in 1918 and 1919 to build a winning team. But Ruppert faced losses in his brewing interests as Prohibition was implemented, and if their team left the Polo Grounds, where the Yankees were the tenants of the New York Giants, building a stadium in New York would be expensive. Nevertheless, when Frazee, who moved in the same social circles as Huston, hinted to the colonel that Ruth was available for the right price, the Yankees owners quickly pursued the purchase.
Frazee sold the rights to Babe Ruth for $100,000, the largest sum ever paid for a baseball player. The deal also involved a $350,000 loan from Ruppert to Frazee, secured by a mortgage on Fenway Park. Once it was agreed, Frazee informed Barrow, who, stunned, told the owner that he was getting the worse end of the bargain. Cynics have suggested that Barrow may have played a larger role in the Ruth sale, as less than a year later he became the Yankee general manager, and in the following years made a number of purchases of Red Sox players from Frazee. The $100,000 price included $25,000 in cash, and notes for the same amount due November 1 in 1920, 1921, and 1922; Ruppert and Huston assisted Frazee in selling the notes to banks for immediate cash.
The transaction was contingent on Ruth signing a new contract, which was quickly accomplished—Ruth agreed to fulfill the remaining two years on his contract, but was given a $20,000 bonus, payable over two seasons. The deal was announced on January 6, 1920. Reaction in Boston was mixed: some fans were embittered at the loss of Ruth; others conceded that Ruth had become difficult to deal with. The New York Times suggested that "The short right field wall at the Polo Grounds should prove an easy target for Ruth next season and, playing seventy-seven games at home, it would not be surprising if Ruth surpassed his home run record of twenty-nine circuit clouts next Summer." According to Reisler, "The Yankees had pulled off the sports steal of the century."
According to Marty Appel in his history of the Yankees, the transaction "changed the fortunes of two high-profile franchises for decades". The Red Sox, winners of five of the first 16 World Series, those played between 1903 and 1919, would not win another pennant until 1946, or another World Series until 2004, a drought attributed in baseball superstition to Frazee's sale of Ruth and sometimes dubbed the "Curse of the Bambino". Conversely, the Yankees had not won the AL championship prior to their acquisition of Ruth. They won seven AL pennants and four World Series with him, and they lead baseball with 40 pennants and 27 World Series titles in their history.
When Ruth signed with the Yankees, he completed his transition from a pitcher to a power-hitting outfielder. His fifteen-season Yankee career consisted of over 2,000 games, and Ruth broke many batting records while making only five widely scattered appearances on the mound, winning all of them.
At the end of April 1920, the Yankees were 4–7, with the Red Sox leading the league with a 10–2 mark. Ruth had done little, having injured himself swinging the bat. Both situations began to change on May 1, when Ruth hit a tape-measure home run that sent the ball completely out of the Polo Grounds, a feat believed to have been previously accomplished only by Shoeless Joe Jackson. The Yankees won, 6–0, taking three out of four from the Red Sox. Ruth hit his second home run on May 2, and by the end of the month had set a major league record for home runs in a month with 11, and promptly broke it with 13 in June. Fans responded with record attendance figures. On May 16, Ruth and the Yankees drew 38,600 to the Polo Grounds, a record for the ballpark, and 15,000 fans were turned away. Large crowds jammed stadiums to see Ruth play when the Yankees were on the road.
The home runs kept on coming. Ruth tied his own record of 29 on July 15 and broke it with home runs in both games of a doubleheader four days later. By the end of July, he had 37, but his pace slackened somewhat after that. Nevertheless, on September 4, he both tied and broke the organized baseball record for home runs in a season, snapping Perry Werden's 1895 mark of 44 in the minor Western League. The Yankees played well as a team, battling for the league lead early in the summer, but slumped in August in the AL pennant battle with Chicago and Cleveland. The pennant and the World Series were won by Cleveland, who surged ahead after the Black Sox Scandal broke on September 28 and led to the suspension of many of Chicago's top players, including Shoeless Joe Jackson. The Yankees finished third, but drew 1.2 million fans to the Polo Grounds, the first time a team had drawn a seven-figure attendance. The rest of the league sold 600,000 more tickets, with many fans there to see Ruth, who led the league with 54 home runs, 158 runs, and 137 runs batted in (RBIs).
In 1920 and afterwards, Ruth was aided in his power hitting by the fact that A.J. Reach Company—the maker of baseballs used in the major leagues—was using a more efficient machine to wind the yarn found within the baseball. The new baseballs went into play in 1920 and ushered in the live-ball era; the number of home runs across the major leagues increased by 184 over the previous year. Baseball statistician Bill James pointed out that while Ruth was likely aided by the change in the baseball, there were other factors at work, including the gradual abolition of the spitball (accelerated after the death of Ray Chapman, struck by a pitched ball thrown by Mays in August 1920) and the more frequent use of new baseballs (also a response to Chapman's death). Nevertheless, James theorized that Ruth's 1920 explosion might have happened in 1919, had a full season of 154 games been played rather than 140, had Ruth refrained from pitching 133 innings that season, and had he been playing at any home field other than Fenway Park, where he hit only 9 of his 29 home runs.
Yankees business manager Harry Sparrow had died early in the 1920 season. Ruppert and Huston hired Barrow to replace him. The two men quickly made a deal with Frazee for New York to acquire some of the players who would be mainstays of the early Yankee pennant-winning teams, including catcher Wally Schang and pitcher Waite Hoyt. The 21-year-old Hoyt became close to Ruth:
The outrageous life fascinated Hoyt, the don't-give-a-shit freedom of it, the nonstop, pell-mell charge into excess. How did a man drink so much and never get drunk? ... The puzzle of Babe Ruth never was dull, no matter how many times Hoyt picked up the pieces and stared at them. After games he would follow the crowd to the Babe's suite. No matter what the town, the beer would be iced and the bottles would fill the bathtub.
In the offseason, Ruth spent some time in Havana, Cuba, where he was said to have lost $35,000 (equivalent to $570,000 in 2022) betting on horse races.
Ruth hit home runs early and often in the 1921 season, during which he broke Roger Connor's mark for home runs in a career, 138. Each of the almost 600 home runs Ruth hit in his career after that extended his own record. After a slow start, the Yankees were soon locked in a tight pennant race with Cleveland, winners of the 1920 World Series. On September 15, Ruth hit his 55th home run, breaking his year-old single-season record. In late September, the Yankees visited Cleveland and won three out of four games, giving them the upper hand in the race, and clinched their first pennant a few days later. Ruth finished the regular season with 59 home runs, batting .378 and with a slugging percentage of .846. Ruth's 177 runs scored, 119 extra-base hits, and 457 total bases set modern-era records that still stand as of 2023.
The Yankees had high expectations when they met the New York Giants in the 1921 World Series, every game of which was played in the Polo Grounds. The Yankees won the first two games with Ruth in the lineup. However, Ruth badly scraped his elbow during Game Two when he slid into third base (he had walked and stolen both second and third bases). After the game, he was told by the team physician not to play the rest of the series. Despite this advice, he did play in the next three games, and pinch-hit in Game Eight of the best-of-nine series, but the Yankees lost, five games to three. Ruth hit .316, drove in five runs and hit his first World Series home run.
After the Series, Ruth and teammates Bob Meusel and Bill Piercy participated in a barnstorming tour in the Northeast. A rule then in force prohibited World Series participants from playing in exhibition games during the offseason, the purpose being to prevent Series participants from replicating the Series and undermining its value. Baseball Commissioner Kenesaw Mountain Landis suspended the trio until May 20, 1922, and fined them their 1921 World Series checks. In August 1922, the rule was changed to allow limited barnstorming for World Series participants, with Landis's permission required.
On March 4, 1922, Ruth signed a new contract for three years at $52,000 a year (equivalent to $910,000 in 2022). This was more than two times the largest sum ever paid to a ballplayer up to that point and it represented 40% of the team's player payroll.
Despite his suspension, Ruth was named the Yankees' new on-field captain prior to the 1922 season. During the suspension, he worked out with the team in the morning and played exhibition games with the Yankees on their off days. He and Meusel returned on May 20 to a sellout crowd at the Polo Grounds, but Ruth went 0-for-4 and was booed. On May 25, he was thrown out of the game for throwing dust in umpire George Hildebrand's face, then climbed into the stands to confront a heckler. Ban Johnson ordered him fined, suspended, and stripped of his position as team captain. In his shortened season, Ruth appeared in 110 games, batted .315, with 35 home runs, and drove in 99 runs, but the 1922 season was a disappointment in comparison to his two previous dominating years. Despite Ruth's off-year, the Yankees managed to win the pennant and faced the New York Giants in the World Series for the second consecutive year. In the Series, Giants manager John McGraw instructed his pitchers to throw him nothing but curveballs, and Ruth never adjusted. Ruth had just two hits in 17 at bats, and the Yankees lost to the Giants for the second straight year, by 4–0 (with one tie game). Sportswriter Joe Vila called him "an exploded phenomenon".
After the season, Ruth was a guest at an Elks Club banquet, set up by Ruth's agent with Yankee team support. There, each speaker, concluding with future New York mayor Jimmy Walker, censured him for his poor behavior. An emotional Ruth promised reform, and, to the surprise of many, followed through. When he reported to spring training, he was in his best shape as a Yankee, weighing only 210 pounds (95 kg).
The Yankees' status as tenants of the Giants at the Polo Grounds had become increasingly uneasy, and in 1922, Giants owner Charles Stoneham said the Yankees' lease, expiring after that season, would not be renewed. Ruppert and Huston had long contemplated a new stadium, and had taken an option on property at 161st Street and River Avenue in the Bronx. Yankee Stadium was completed in time for the home opener on April 18, 1923, at which Ruth hit the first home run in what was quickly dubbed "the House that Ruth Built". The ballpark was designed with Ruth in mind: although the venue's left-field fence was further from home plate than at the Polo Grounds, Yankee Stadium's right-field fence was closer, making home runs easier to hit for left-handed batters. To spare Ruth's eyes, right field—his defensive position—was not pointed into the afternoon sun, as was traditional; left fielder Meusel soon developed headaches from squinting toward home plate.
During the 1923 season, the Yankees were never seriously challenged and won the AL pennant by 17 games. Ruth finished the season with a career-high .393 batting average and 41 home runs, which tied Cy Williams for the most in the major leagues that year. Ruth hit a career-high 45 doubles in 1923, and he reached base 379 times, then a major league record. For the third straight year, the Yankees faced the Giants in the World Series, which Ruth dominated. He batted .368, walked eight times, scored eight runs, hit three home runs and slugged 1.000 during the series, as the Yankees christened their new stadium with their first World Series championship, four games to two.
In 1924, the Yankees were favored to become the first team to win four consecutive pennants. Plagued by injuries, they found themselves in a battle with the Senators. Although the Yankees won 18 of 22 at one point in September, the Senators beat out the Yankees by two games. Ruth hit .378, winning his only AL batting title, with a league-leading 46 home runs.
Ruth did not look like an athlete; he was described as "toothpicks attached to a piano", with a big upper body but thin wrists and legs. Ruth had kept up his efforts to stay in shape in 1923 and 1924, but by early 1925 weighed nearly 260 pounds (120 kg). His annual visit to Hot Springs, Arkansas, where he exercised and took saunas early in the year, did him no good as he spent much of the time carousing in the resort town. He became ill while there, and relapsed during spring training. Ruth collapsed in Asheville, North Carolina, as the team journeyed north. He was put on a train for New York, where he was briefly hospitalized. A rumor circulated that he had died, prompting British newspapers to print a premature obituary. In New York, Ruth collapsed again and was found unconscious in his hotel bathroom. He was taken to a hospital where he had multiple convulsions. After sportswriter W. O. McGeehan wrote that Ruth's illness was due to binging on hot dogs and soda pop before a game, it became known as "the bellyache heard 'round the world". However, the exact cause of his ailment has never been confirmed and remains a mystery. Glenn Stout, in his history of the Yankees, writes that the Ruth legend is "still one of the most sheltered in sports"; he suggests that alcohol was at the root of Ruth's illness, pointing to the fact that Ruth remained six weeks at St. Vincent's Hospital but was allowed to leave, under supervision, for workouts with the team for part of that time. He concludes that the hospitalization was behavior-related. Playing just 98 games, Ruth had his worst season as a Yankee; he finished with a .290 average and 25 home runs. The Yankees finished next to last in the AL with a 69–85 record, their last season with a losing record until 1965.
Ruth spent part of the offseason of 1925–26 working out at Artie McGovern's gym, where he got back into shape. Barrow and Huggins had rebuilt the team and surrounded the veteran core with good young players like Tony Lazzeri and Lou Gehrig, but the Yankees were not expected to win the pennant.
Ruth returned to his normal production during 1926, when he batted .372 with 47 home runs and 146 RBIs. The Yankees built a 10-game lead by mid-June and coasted to win the pennant by three games. The St. Louis Cardinals had won the National League with the lowest winning percentage for a pennant winner to that point (.578) and the Yankees were expected to win the World Series easily. Although the Yankees won the opener in New York, St. Louis took Games Two and Three. In Game Four, Ruth hit three home runs—the first time this had been done in a World Series game—to lead the Yankees to victory. In the fifth game, Ruth caught a ball as he crashed into the fence. The play was described by baseball writers as a defensive gem. New York took that game, but Grover Cleveland Alexander won Game Six for St. Louis to tie the Series at three games each, then got very drunk. He was nevertheless inserted into Game Seven in the seventh inning and shut down the Yankees to win the game, 3–2, and win the Series. Ruth had hit his fourth home run of the Series earlier in the game and was the only Yankee to reach base off Alexander; he walked in the ninth inning before being thrown out to end the game when he attempted to steal second base. Although Ruth's attempt to steal second is often deemed a baserunning blunder, Creamer pointed out that the Yankees' chances of tying the game would have been greatly improved with a runner in scoring position.
The 1926 World Series was also known for Ruth's promise to Johnny Sylvester, a hospitalized 11-year-old boy. Sylvester had been injured in a fall from a horse, and a friend of Sylvester's father gave the boy two baseballs autographed by Yankees and Cardinals. The friend relayed a promise from Ruth (who did not know the boy) that he would hit a home run for him. After the Series, Ruth visited the boy in the hospital. When the matter became public, the press greatly inflated it, and by some accounts, Ruth saved the boy's life by visiting him, emotionally promising to hit a home run, and doing so. Ruth's 1926 salary of $52,000 was far more than that of any other baseball player, but he made at least twice as much in other income, including $100,000 from 12 weeks of vaudeville.
The 1927 New York Yankees team is considered one of the greatest squads to ever take the field. Known as Murderers' Row because of the power of its lineup, the team clinched first place on Labor Day, won a then-AL-record 110 games and took the AL pennant by 19 games. There was no suspense in the pennant race, and the nation turned its attention to Ruth's pursuit of his own single-season home run record of 59 round trippers. Ruth was not alone in this chase. Teammate Lou Gehrig proved to be a slugger who was capable of challenging Ruth for his home run crown; he tied Ruth with 24 home runs late in June. Through July and August, the dynamic duo was never separated by more than two home runs. Gehrig took the lead, 45–44, in the first game of a doubleheader at Fenway Park early in September; Ruth responded with two blasts of his own to take the lead, as it proved, permanently—Gehrig finished with 47. Even so, as of September 6, Ruth was still several games off his 1921 pace, and going into the final series against the Senators, had only 57. He hit two in the first game of the series, including one off Paul Hopkins, who was facing his first major league batter, to tie the record. The following day, September 30, he broke it with his 60th homer, hit in the eighth inning off Tom Zachary to snap a 2–2 tie. "Sixty! Let's see some son of a bitch try to top that one", Ruth exulted after the game. In addition to his career-high 60 home runs, Ruth batted .356, drove in 164 runs and slugged .772. In the 1927 World Series, the Yankees swept the Pittsburgh Pirates in four games; the National Leaguers were disheartened after watching the Yankees take batting practice before Game One, with ball after ball leaving Forbes Field. According to Appel, "The 1927 New York Yankees. Even today, the words inspire awe ... all baseball success is measured against the '27 team."
The following season started off well for the Yankees, who led the league in the early going. But the Yankees were plagued by injuries, erratic pitching and inconsistent play. The Philadelphia Athletics, rebuilding after some lean years, erased the Yankees' big lead and even took over first place briefly in early September. The Yankees, however, regained first place when they beat the Athletics three out of four games in a pivotal series at Yankee Stadium later that month, and clinched the pennant in the final weekend of the season. Ruth's play in 1928 mirrored his team's performance. He got off to a hot start and on August 1, he had 42 home runs. This put him ahead of his 60-home-run pace from the previous season. He then slumped for the latter part of the season, hitting just 12 home runs in the last two months. Ruth's batting average also fell to .323, well below his career average. Nevertheless, he ended the season with 54 home runs. The Yankees swept the favored Cardinals in four games in the World Series, with Ruth batting .625 and hitting three home runs in Game Four, including one off Alexander.
Before the 1929 season, Ruppert (who had bought out Huston in 1923) announced that the Yankees would wear uniform numbers to allow fans at cavernous Yankee Stadium to easily identify the players. The Cardinals and Indians had each experimented with uniform numbers; the Yankees were the first to use them on both home and away uniforms. Ruth batted third and was given number 3. According to a long-standing baseball legend, the Yankees adopted their now-iconic pinstriped uniforms in hopes of making Ruth look slimmer. In truth, though, they had been wearing pinstripes since 1915.
Although the Yankees started well, the Athletics soon proved they were the better team in 1929, splitting two series with the Yankees in the first month of the season, then taking advantage of a Yankee losing streak in mid-May to gain first place. Although Ruth performed well, the Yankees were not able to catch the Athletics—Connie Mack had built another great team. Tragedy struck the Yankees late in the year as manager Huggins died at 51 of erysipelas, a bacterial skin infection, on September 25, only ten days after he had last directed the team. Despite their past differences, Ruth praised Huggins and described him as a "great guy". The Yankees finished second, 18 games behind the Athletics. Ruth hit .345 during the season, with 46 home runs and 154 RBIs.
On October 17, the Yankees hired Bob Shawkey as manager; he was their fourth choice. Ruth had politicked for the job of player-manager, but Ruppert and Barrow never seriously considered him for the position. Stout deemed this the first hint Ruth would have no future with the Yankees once he retired as a player. Shawkey, a former Yankees player and teammate of Ruth, would prove unable to command Ruth's respect.
On January 7, 1930, salary negotiations between the Yankees and Ruth quickly broke down. Having just concluded a three-year contract at an annual salary of $70,000, Ruth promptly rejected both the Yankees' initial proposal of $70,000 for one year and their 'final' offer of two years at $75,000—the latter figure equaling the annual salary of then-US President Herbert Hoover; instead, Ruth demanded at least $85,000 and three years. When asked why he thought he was "worth more than the President of the United States," Ruth responded: "Say, if I hadn't been sick last summer, I'd have broken hell out of that home run record! Besides, the President gets a four-year contract. I'm only asking for three." Exactly two months later, a compromise was reached, with Ruth settling for two years at an unprecedented $80,000 per year. Ruth's salary was more than 2.4 times the next-highest salary that season, a record margin as of 2019.
In 1930, Ruth hit .359 with 49 home runs (his highest total after 1928) and 153 RBIs, and pitched his first game in nine years, a complete game victory. Nevertheless, the Athletics won their second consecutive pennant and World Series, as the Yankees finished in third place, sixteen games back. At the end of the season, Shawkey was fired and replaced with Cubs manager Joe McCarthy, though Ruth again unsuccessfully sought the job.
McCarthy was a disciplinarian, but chose not to interfere with Ruth, who did not seek conflict with the manager. The team improved in 1931, but was no match for the Athletics, who won 107 games, 13½ games in front of the Yankees. Ruth, for his part, hit .373, with 46 home runs and 163 RBIs. He had 31 doubles, his most since 1924. In the 1932 season, the Yankees went 107–47 and won the pennant. Ruth's effectiveness had decreased somewhat, but he still hit .341 with 41 home runs and 137 RBIs. Nevertheless, he was sidelined twice because of injuries during the season.
The Yankees faced the Cubs, McCarthy's former team, in the 1932 World Series. There was bad blood between the two teams as the Yankees resented the Cubs only awarding half a World Series share to Mark Koenig, a former Yankee. The games at Yankee Stadium had not been sellouts; both were won by the home team, with Ruth collecting two singles, but scoring four runs as he was walked four times by the Cubs pitchers. In Chicago, Ruth was resentful at the hostile crowds that met the Yankees' train and jeered them at the hotel. The crowd for Game Three included New York Governor Franklin D. Roosevelt, the Democratic candidate for president, who sat with Chicago Mayor Anton Cermak. Many in the crowd threw lemons at Ruth, a sign of derision, and others (as well as the Cubs themselves) shouted abuse at Ruth and other Yankees. They were briefly silenced when Ruth hit a three-run home run off Charlie Root in the first inning, but soon revived, and the Cubs tied the score at 4–4 in the fourth inning, partly due to Ruth's fielding error in the outfield. When Ruth came to the plate in the top of the fifth, the Chicago crowd and players, led by pitcher Guy Bush, were screaming insults at Ruth. With the count at two balls and one strike, Ruth gestured, possibly in the direction of center field, and after the next pitch (a strike), may have pointed there with one hand. Ruth hit the fifth pitch over the center field fence; estimates were that it traveled nearly 500 feet (150 m). Whether or not Ruth intended to indicate where he planned to (and did) hit the ball (Charlie Devens, who, in 1999, was interviewed as Ruth's surviving teammate in that game, did not think so), the incident has gone down in legend as Babe Ruth's called shot. The Yankees won Game Three, and the following day clinched the Series with another victory. During that game, Bush hit Ruth on the arm with a pitch, causing words to be exchanged and provoking a game-winning Yankee rally.
Ruth remained productive in 1933. He batted .301, with 34 home runs, 103 RBIs, and a league-leading 114 walks, as the Yankees finished in second place, seven games behind the Senators. Athletics manager Connie Mack selected him to play right field in the first Major League Baseball All-Star Game, held on July 6, 1933, at Comiskey Park in Chicago. He hit the first home run in the All-Star Game's history, a two-run blast against Bill Hallahan during the third inning, which helped the AL win the game 4–2. During the final game of the 1933 season, as a publicity stunt organized by his team, Ruth was called upon and pitched a complete game victory against the Red Sox, his final appearance as a pitcher. Despite unremarkable pitching numbers, Ruth had a 5–0 record in five games for the Yankees, raising his career totals to 94–46.
In 1934, Ruth played in his last full season with the Yankees. By this time, years of high living were starting to catch up with him. His conditioning had deteriorated to the point that he could no longer field or run. He accepted a pay cut to $35,000 from Ruppert, but he was still the highest-paid player in the major leagues. He could still handle a bat, recording a .288 batting average with 22 home runs. However, Reisler described these statistics as "merely mortal" by Ruth's previous standards. Ruth was selected to the AL All-Star team for the second consecutive year, even though he was in the twilight of his career. During the game, New York Giants pitcher Carl Hubbell struck out Ruth and four other future Hall-of-Famers consecutively. The Yankees finished second again, seven games behind the Tigers.
By this time, Ruth knew he was nearly finished as a player. He desired to remain in baseball as a manager. He was often spoken of as a possible candidate as managerial jobs opened up, but in 1932, when he was mentioned as a contender for the Red Sox position, Ruth stated that he was not yet ready to leave the field. There were rumors that Ruth was a likely candidate each time the Cleveland Indians, Cincinnati Reds, or Detroit Tigers were looking for a manager, but nothing came of them.
Just before the 1934 season, Ruppert offered to make Ruth the manager of the Yankees' top minor-league team, the Newark Bears, but he was talked out of it by his wife, Claire, and his business manager, Christy Walsh. Tigers owner Frank Navin seriously considered acquiring Ruth and making him player-manager. However, Ruth insisted on delaying the meeting until he came back from a trip to Hawaii. Navin was unwilling to wait. Ruth opted to go on his trip, despite Barrow advising him that he was making a mistake; in any event, Ruth's asking price was too high for the notoriously tight-fisted Navin. The Tigers' job ultimately went to Mickey Cochrane.
Early in the 1934 season, Ruth openly campaigned to become the Yankees manager. However, the Yankee job was never a serious possibility. Ruppert always supported McCarthy, who would remain in his position for another 12 seasons. The relationship between Ruth and McCarthy had been lukewarm at best, and Ruth's managerial ambitions further chilled their interpersonal relations. By the end of the season, Ruth hinted that he would retire unless Ruppert named him manager of the Yankees. When the time came, Ruppert wanted Ruth to leave the team without drama or hard feelings.
During the 1934–35 offseason, Ruth circled the world with his wife; the trip included a barnstorming tour of the Far East. At his final stop in the United Kingdom before returning home, Ruth was introduced to cricket by Australian player Alan Fairfax, and after having little luck in a cricketer's stance, he stood as a baseball batter and launched some massive shots around the field, destroying the bat in the process. Although Fairfax regretted that he could not have the time to make Ruth a cricket player, Ruth had lost any interest in such a career upon learning that the best batsmen made only about $40 per week.
Also during the offseason, Ruppert had been sounding out the other clubs in hopes of finding one that would be willing to take Ruth as a manager and/or a player. However, the only serious offer came from Athletics owner-manager Connie Mack, who gave some thought to stepping down as manager in favor of Ruth. However, Mack later dropped the idea, saying that Ruth's wife would be running the team in a month if Ruth ever took over.
While the barnstorming tour was underway, Ruppert began negotiating with Boston Braves owner Judge Emil Fuchs, who wanted Ruth as a gate attraction. The Braves had enjoyed modest recent success, finishing fourth in the National League in both 1933 and 1934, but the team drew poorly at the box office. Unable to afford the rent at Braves Field, Fuchs had considered holding dog races there when the Braves were not at home, only to be turned down by Landis. After a series of phone calls, letters, and meetings, the Yankees traded Ruth to the Braves on February 26, 1935. Ruppert had stated that he would not release Ruth to go to another team as a full-time player. For this reason, it was announced that Ruth would become a team vice president and would be consulted on all club transactions, in addition to playing. He was also made assistant manager to Braves skipper Bill McKechnie. In a long letter to Ruth a few days before the press conference, Fuchs promised Ruth a share in the Braves' profits, with the possibility of becoming co-owner of the team. Fuchs also raised the possibility of Ruth succeeding McKechnie as manager, perhaps as early as 1936. Ruppert called the deal "the greatest opportunity Ruth ever had".
There was considerable attention as Ruth reported for spring training. He did not hit his first home run of the spring until after the team had left Florida and begun the trip north, in Savannah. He hit two in an exhibition game against the Bears. Amid much press attention, Ruth played his first home game in Boston in over 16 years. Before an opening-day crowd of over 25,000, including five of New England's six state governors, Ruth accounted for all the Braves' runs in a 4–2 defeat of the New York Giants, hitting a two-run home run, singling to drive in a third run and later in the inning scoring the fourth. Although age and weight had slowed him, he made a running catch in left field that sportswriters deemed the defensive highlight of the game.
Ruth had two hits in the second game of the season, but it quickly went downhill both for him and the Braves from there. The season soon settled into a routine of Ruth performing poorly on the few occasions he played at all. As April passed into May, Ruth's physical deterioration became even more pronounced. While he remained productive at the plate early on, he could do little else. His conditioning had become so poor that he could barely trot around the bases. He made so many errors that three Braves pitchers told McKechnie they would not take the mound if he was in the lineup. Before long, Ruth stopped hitting as well. He grew increasingly annoyed that McKechnie ignored most of his advice. McKechnie later said that Ruth's presence made enforcing discipline nearly impossible.
Ruth soon realized that Fuchs had deceived him, and had no intention of making him manager or giving him any significant off-field duties. He later said his only duties as vice president consisted of making public appearances and autographing tickets. Ruth also found out that far from giving him a share of the profits, Fuchs wanted him to invest some of his money in the team in a last-ditch effort to improve its balance sheet. As it turned out, Fuchs and Ruppert had both known all along that Ruth's non-playing positions were meaningless.
By the end of the first month of the season, Ruth concluded he was finished even as a part-time player. As early as May 12, he asked Fuchs to let him retire. Ultimately, Fuchs persuaded Ruth to remain at least until after the Memorial Day doubleheader in Philadelphia. In the interim came a western road trip, during which the rival teams had scheduled days to honor him. In Chicago and St. Louis, Ruth performed poorly, and his batting average sank to .155, with only two additional home runs for a total of three on the season so far. In the first two games in Pittsburgh, Ruth had only one hit, though a long fly caught by Paul Waner probably would have been a home run in any ballpark other than Forbes Field.
Ruth played in the third game of the Pittsburgh series on May 25, 1935, and added one more tale to his playing legend. Ruth went 4-for-4, including three home runs, though the Braves lost the game 11–7. The last two were off Ruth's old Cubs nemesis, Guy Bush. The final home run, both of the game and of Ruth's career, sailed out of the park over the right-field upper deck, the first time anyone had hit a fair ball completely out of Forbes Field. Ruth was urged to make this his last game, but he had given his word to Fuchs and played in Cincinnati and Philadelphia. The first game of the doubleheader in Philadelphia—the Braves lost both—was his final major league appearance. Ruth retired on June 2 after an argument with Fuchs. He finished 1935 with a .181 average—easily his worst as a full-time position player—and the final six of his 714 home runs. The Braves, 10–27 when Ruth left, finished 38–115, at .248 the worst winning percentage in modern National League history. Insolvent like his team, Fuchs gave up control of the Braves before the end of the season; the National League took over the franchise at the end of the year.
Of the five members of the inaugural class of the Baseball Hall of Fame in 1936 (Ty Cobb, Honus Wagner, Christy Mathewson, Walter Johnson, and Ruth himself), only Ruth was not offered a job managing a baseball team.
Although Fuchs had given Ruth his unconditional release, no major league team expressed an interest in hiring him in any capacity. Ruth still hoped to be hired as a manager if he could not play anymore, but only one managerial position, Cleveland, became available between Ruth's retirement and the end of the 1937 season. Asked if he had considered Ruth for the job, Indians owner Alva Bradley replied negatively. Team owners and general managers assessed Ruth's flamboyant personal habits as a reason to exclude him from a managerial job; Barrow said of him, "How can he manage other men when he can't even manage himself?" Creamer believed Ruth was unfairly treated in never being given an opportunity to manage a major league club. The author believed there was not necessarily a relationship between personal conduct and managerial success, noting that John McGraw, Billy Martin, and Bobby Valentine were winners despite character flaws.
Ruth played much golf and appeared in a few exhibition baseball games, where he demonstrated a continuing ability to draw large crowds. This appeal contributed to the Dodgers hiring him as first base coach in 1938. When Ruth was hired, Brooklyn general manager Larry MacPhail made it clear that Ruth would not be considered for the manager's job if, as expected, Burleigh Grimes retired at the end of the season. Although much was said about what Ruth could teach the younger players, in practice, his duties were to appear on the field in uniform and encourage base runners—he was not called upon to relay signs. In August, shortly before the baseball rosters expanded, Ruth sought an opportunity to return as an active player in a pinch-hitting role. Ruth often took batting practice before games and felt that he could take on the limited role. Grimes denied his request, citing Ruth's poor vision in his right eye, his inability to run the bases, and the risk of an injury to Ruth.
Ruth got along well with everyone except team captain Leo Durocher, who was hired as Grimes' replacement at season's end. Ruth then left his job as a first base coach and would never again work in any capacity in the game of baseball.
On July 4, 1939, Ruth spoke on Lou Gehrig Appreciation Day at Yankee Stadium as members of the 1927 Yankees and a sellout crowd turned out to honor the first baseman, who was forced into premature retirement by ALS, which would kill him two years later. The next week, Ruth went to Cooperstown, New York, for the formal opening of the Baseball Hall of Fame. Three years earlier, he was one of the first five players elected to the hall. As radio broadcasts of baseball games became popular, Ruth sought a job in that field, arguing that his celebrity and knowledge of baseball would assure large audiences, but he received no offers. During World War II, he made many personal appearances to advance the war effort, including his last appearance as a player at Yankee Stadium, in a 1943 exhibition for the Army-Navy Relief Fund. He hit a long fly ball off Walter Johnson; the blast left the field, curving foul, but Ruth circled the bases anyway. In 1946, he made a final effort to gain a job in baseball when he contacted new Yankees boss MacPhail, but he was sent a rejection letter. In 1999, Ruth's granddaughter, Linda Tosetti, and his stepdaughter, Julia Ruth Stevens, said that Babe's inability to land a managerial role with the Yankees caused him to feel hurt and slump into a severe depression.
Ruth started playing golf when he was 20 and continued playing the game throughout his life. His appearance at many New York courses drew spectators and headlines. Rye Golf Club was among the courses he played with teammate Lyn Lary in June 1933; with birdies on three holes, Ruth posted the best score. In retirement, he became one of the first celebrity golfers participating in charity tournaments, including one where he was pitted against Ty Cobb.
Ruth met Helen Woodford (1897–1929), by some accounts, in a coffee shop in Boston, where she was a waitress. They married as teenagers on October 17, 1914. Although Ruth later claimed to have been married in Elkton, Maryland, records show that they were married at St. Paul's Catholic Church in Ellicott City. They adopted a daughter, Dorothy (1921–1989), in 1921. Ruth and Helen separated around 1925, reportedly because of Ruth's repeated infidelities and neglect. They appeared in public as a couple for the last time during the 1926 World Series. Helen died in January 1929 at age 31 in a fire in a house in Watertown, Massachusetts, owned by Edward Kinder, a dentist with whom she had been living as "Mrs. Kinder". In her book, My Dad, the Babe, Dorothy claimed that she was Ruth's biological child by a mistress named Juanita Jennings. In 1980, Juanita, who was by then seriously ill, admitted this to Dorothy and Dorothy's stepsister, Julia Ruth Stevens.
On April 17, 1929, three months after the death of his first wife, Ruth married actress and model Claire Merritt Hodgson (1897–1976) and adopted her daughter Julia (1916–2019). It was the second and final marriage for both parties. Claire, unlike Helen, was well-traveled and educated, and put structure into Ruth's life, much as Miller Huggins had done for him on the field.
By one account, Julia and Dorothy were, through no fault of their own, the reason for the seven-year rift in Ruth's relationship with teammate Lou Gehrig. Sometime in 1932, during a conversation that she assumed was private, Gehrig's mother remarked, "It's a shame [Claire] doesn't dress Dorothy as nicely as she dresses her own daughter." When the comment got back to Ruth, he angrily told Gehrig to tell his mother to mind her own business. Gehrig, in turn, took offense at what he perceived as Ruth's comment about his mother. The two men reportedly never spoke off the field until they reconciled at Yankee Stadium on Lou Gehrig Appreciation Day, July 4, 1939, shortly after Gehrig's retirement from baseball.
Although Ruth was married throughout most of his baseball career, when team co-owner Tillinghast 'Cap' Huston asked him to tone down his lifestyle, Ruth replied, "I'll promise to go easier on drinking and to get to bed earlier, but not for you, fifty thousand dollars, or two-hundred and fifty thousand dollars will I give up women. They're too much fun." A detective that the Yankees hired to follow him one night in Chicago reported that Ruth had been with six women. Ping Bodie said that he was not Ruth's roommate while traveling; "I room with his suitcase". Before the start of the 1922 season, Ruth had signed a three-year contract at $52,000 per year with an option to renew for two additional years. His performance during the 1922 season had been disappointing, attributed in part to his drinking and late-night hours. After the end of the 1922 season, he was asked to sign a contract addendum with a morals clause. Ruth and Ruppert signed it on November 11, 1922. It called for Ruth to abstain entirely from the use of intoxicating liquors, and to not stay up later than 1:00 a.m. during the training and playing season without permission of the manager. Ruth was also enjoined from any action or misbehavior that would compromise his ability to play baseball.
As early as the war years, doctors had cautioned Ruth to take better care of his health, and he grudgingly followed their advice, limiting his drinking and not going on a proposed trip to support the troops in the South Pacific. In 1946, Ruth began experiencing severe pain over his left eye and had difficulty swallowing. In November 1946, Ruth entered French Hospital in New York for tests, which revealed that he had an inoperable malignant tumor at the base of his skull and in his neck. The malady was a lesion known as nasopharyngeal carcinoma, or "lymphoepithelioma". His name and fame gave him access to experimental treatments, and he was one of the first cancer patients to receive both drugs and radiation treatment simultaneously. Having lost 80 pounds (36 kg), he was discharged from the hospital in February and went to Florida to recuperate. He returned to New York and Yankee Stadium after the season started. The new commissioner, Happy Chandler (Judge Landis had died in 1944), proclaimed April 27, 1947, Babe Ruth Day around the major leagues, with the most significant observance to be at Yankee Stadium. A number of teammates and others spoke in honor of Ruth, who briefly addressed the crowd of almost 60,000. By then, his voice was a soft whisper with a very low, raspy tone.
Around this time, developments in chemotherapy offered some hope for Ruth. The doctors had not told Ruth he had cancer because of his family's fear that he might do himself harm. They treated him with pteroyl triglutamate (Teropterin), a folic acid derivative; he may have been the first human subject. Ruth showed dramatic improvement during the summer of 1947, so much so that his case was presented by his doctors at a scientific meeting, without using his name. He was able to travel around the country, doing promotional work for the Ford Motor Company on American Legion Baseball. He appeared again at another day in his honor at Yankee Stadium in September, but was not well enough to pitch in an old-timers game as he had hoped.
The improvement was only a temporary remission, and by late 1947, Ruth was unable to help with the writing of his autobiography, The Babe Ruth Story, which was almost entirely ghostwritten. In and out of the hospital in Manhattan, he left for Florida in February 1948, doing what activities he could. After six weeks he returned to New York to appear at a book-signing party. He also traveled to California to witness the filming of the movie based on the book.
On June 5, 1948, a "gaunt and hollowed-out" Ruth visited Yale University to donate a manuscript of The Babe Ruth Story to its library. At Yale, he met with future president George H. W. Bush, who was the captain of the Yale baseball team. On June 13, Ruth visited Yankee Stadium for the final time in his life, appearing at the 25th-anniversary celebrations of "The House that Ruth Built". By this time he had lost much weight and had difficulty walking. Introduced along with his surviving teammates from 1923, Ruth used a bat as a cane. Nat Fein's photo of Ruth taken from behind, standing near home plate and facing "Ruthville" (right field) became one of baseball's most famous and widely circulated photographs, and won the Pulitzer Prize.
Ruth made one final trip on behalf of American Legion Baseball. He then entered Memorial Hospital, where he would die. He was never told he had cancer; however, before his death, he surmised it. He was able to leave the hospital for a few short trips, including a final visit to Baltimore. On July 26, 1948, Ruth left the hospital to attend the premiere of the film The Babe Ruth Story. Shortly thereafter, he returned to the hospital for the final time. He was barely able to speak. Ruth's condition gradually grew worse, and only a few visitors were permitted to see him, one of whom was National League president and future Commissioner of Baseball Ford Frick. "Ruth was so thin it was unbelievable. He had been such a big man and his arms were just skinny little bones, and his face was so haggard", Frick said years later.
Thousands of New Yorkers, including many children, stood vigil outside the hospital during Ruth's final days. On August 16, 1948, at 8:01 p.m., Ruth died in his sleep at the age of 53. His open casket was placed on display in the rotunda of Yankee Stadium, where it remained for two days; 77,000 people filed past to pay him tribute. His Requiem Mass was celebrated by Francis Cardinal Spellman at St. Patrick's Cathedral; a crowd estimated at 75,000 waited outside. Ruth is buried with his second wife, Claire, on a hillside in Section 25 at the Gate of Heaven Cemetery in Hawthorne, New York.
On April 19, 1949, the Yankees unveiled a granite monument in Ruth's honor in center field of Yankee Stadium. The monument was located in the field of play next to a flagpole and similar tributes to Huggins and Gehrig until the stadium was remodeled from 1974 to 1975, when the outfield fences were moved inward, separating the monuments from the playing field. This area was known thereafter as Monument Park. Yankee Stadium, "the House that Ruth Built", was replaced after the 2008 season with a new Yankee Stadium across the street from the old one; Monument Park was subsequently moved to the new venue behind the center field fence. Ruth's uniform number 3 has been retired by the Yankees, and he is one of five Yankees players or managers to have a granite monument within the stadium.
The Babe Ruth Birthplace Museum is located at 216 Emory Street, a Baltimore row house where Ruth was born, and three blocks west of Oriole Park at Camden Yards, where the AL's Baltimore Orioles play. The property was restored and opened to the public in 1973 by the non-profit Babe Ruth Birthplace Foundation, Inc. Ruth's widow, Claire, his two daughters, Dorothy and Julia, and his sister, Mamie, helped select and install exhibits for the museum.
Ruth was the first baseball star to be the subject of overwhelming public adulation. Baseball had been known for star players such as Ty Cobb and "Shoeless Joe" Jackson, but both men had uneasy relations with fans. In Cobb's case, the incidents were sometimes marked by violence. Ruth's biographers agreed that he benefited from the timing of his ascension to "Home Run King". The country had been hit hard by both the war and the 1918 flu pandemic and longed for something to help put these traumas behind it. Ruth also resonated in a country which felt, in the aftermath of the war, that it took second place to no one. Montville argued that Ruth was a larger-than-life figure who was capable of unprecedented athletic feats in the nation's largest city. Ruth became an icon of the social changes that marked the early 1920s. In his history of the Yankees, Glenn Stout writes that "Ruth was New York incarnate—uncouth and raw, flamboyant and flashy, oversized, out of scale, and absolutely unstoppable".
During his lifetime, Ruth became a symbol of the United States. During World War II, Japanese soldiers yelled in English, "To hell with Babe Ruth", to anger American soldiers. Ruth replied that he hoped "every Jap that mention[ed] my name gets shot". Creamer recorded that "Babe Ruth transcended sport and moved far beyond the artificial limits of baselines and outfield fences and sports pages". Wagenheim stated, "He appealed to a deeply rooted American yearning for the definitive climax: clean, quick, unarguable." According to Glenn Stout, "Ruth's home runs were [an] exalted, uplifting experience that meant more to fans than any runs they were responsible for. A Babe Ruth home run was an event unto itself, one that meant anything was possible."
Although Ruth was not just a power hitter—he was the Yankees' best bunter, and an excellent outfielder—his penchant for hitting home runs altered how baseball is played. Prior to 1920, home runs were unusual, and managers tried to win games by getting a runner on base and bringing him around to score through such means as the stolen base, the bunt, and the hit and run. Advocates of what was dubbed "inside baseball", such as Giants manager McGraw, disliked the home run, considering it a blot on the purity of the game. Writing after the 1920 season, sportswriter W. A. Phelon stated that Ruth's breakout performance that season, and the response in excitement and attendance, had "settled, for all time to come, that the American public is nuttier over the Home Run than the Clever Fielding or the Hitless Pitching. Viva el Home Run and two times viva Babe Ruth, exponent of the home run, and overshadowing star." Bill James states, "When the owners discovered that the fans liked to see home runs, and when the foundations of the games were simultaneously imperiled by disgrace [in the Black Sox Scandal], then there was no turning back." While a few, such as McGraw and Cobb, decried the passing of the old-style play, teams quickly began to seek and develop sluggers.
According to sportswriter Grantland Rice, only two sports figures of the 1920s approached Ruth in popularity—boxer Jack Dempsey and racehorse Man o' War. One of the factors that contributed to Ruth's broad appeal was the uncertainty about his family and early life. Ruth appeared to exemplify the American success story: that even an uneducated, unsophisticated youth, without any family wealth or connections, could do something better than anyone else in the world. Montville writes that "the fog [surrounding his childhood] will make him forever accessible, universal. He will be the patron saint of American possibility." Similarly, the fact that Ruth played in the pre-television era, when a relatively small portion of his fans had the opportunity to see him play, allowed his legend to grow through word of mouth and the hyperbole of sports reporters. Reisler states that recent sluggers who surpassed Ruth's 60-home-run mark, such as Mark McGwire and Barry Bonds, generated much less excitement than Ruth did when he repeatedly broke the single-season home run record in the 1920s. Ruth dominated a relatively small sports world, while Americans of the present era have many sports available to watch.
Creamer describes Ruth as "a unique figure in the social history of the United States". Thomas Barthel describes him as one of the first celebrity athletes; numerous biographies have portrayed him as "larger than life". He entered the language: a dominant figure in a field, whether within or outside sports, is often referred to as "the Babe Ruth" of that field. Similarly, "Ruthian" has come to mean in sports, "colossal, dramatic, prodigious, magnificent; with great power". He was the first athlete to make more money from endorsements and other off-the-field activities than from his sport.
In 2006, Montville stated that more books have been written about Ruth than any other member of the Baseball Hall of Fame. At least five of these books (including Creamer's and Wagenheim's) were written in 1973 and 1974. The books were timed to capitalize on the increase in public interest in Ruth as Hank Aaron approached his career home run mark, which he broke on April 8, 1974. As he approached Ruth's record, Aaron stated, "I can't remember a day this year or last when I did not hear the name of Babe Ruth."
Montville suggested that Ruth is probably even more popular today than he was when his career home run record was broken by Aaron. The long ball era that Ruth started continues in baseball, to the delight of the fans. Owners build ballparks to encourage home runs, which are featured on SportsCenter and Baseball Tonight each evening during the season. The questions of performance-enhancing drug use, which dogged later home run hitters such as McGwire and Bonds, do nothing to diminish Ruth's reputation; his overindulgences with beer and hot dogs seem part of a simpler time.
In various surveys and rankings, Ruth has been named the greatest baseball player of all time. In 1998, The Sporting News ranked him number one on the list of "Baseball's 100 Greatest Players". In 1999, baseball fans named Ruth to the Major League Baseball All-Century Team. He was named baseball's Greatest Player Ever in a ballot commemorating the 100th anniversary of professional baseball in 1969. The Associated Press reported in 1993 that Muhammad Ali was tied with Babe Ruth as the most recognized athlete in America. In a 1999 ESPN poll, he was ranked as the second-greatest U.S. athlete of the century, behind Michael Jordan. In 1983, the United States Postal Service honored Ruth with the issuance of a twenty-cent stamp.
Several of the most expensive items of sports memorabilia and baseball memorabilia ever sold at auction are associated with Ruth. As of May 2022, Ruth's 1920 Yankees jersey, which sold for $4,415,658 in 2012 (equivalent to $5.63 million in 2022), is the third most expensive piece of sports memorabilia ever sold, after Diego Maradona's 1986 World Cup jersey and Pierre de Coubertin's original 1892 Olympic Manifesto. The bat with which he hit the first home run at Yankee Stadium is in The Guinness Book of World Records as the most expensive baseball bat sold at auction, having fetched $1.265 million on December 2, 2004 (equivalent to $1.96 million in 2022). A hat of Ruth's from the 1934 season set a record for a baseball cap when David Wells sold it at auction for $537,278 in 2012. In 2017, Charlie Sheen sold Ruth's 1927 World Series ring for $2,093,927 at auction, easily breaking the record for a championship ring previously set when Julius Erving's 1974 ABA championship ring sold for $460,741 in 2011.
One long-term survivor of the craze over Ruth may be the Baby Ruth candy bar. The original company to market the confectionery, the Curtis Candy Company, maintained that the bar was named after Ruth Cleveland, daughter of former president Grover Cleveland. She died in 1904 and the bar was first marketed in 1921, at the height of the craze over Ruth. He later sought to market candy bearing his name; he was refused a trademark because of the Baby Ruth bar. Corporate files from 1921 are no longer extant; the brand has changed hands several times and is now owned by Ferrara Candy Company. The Ruth estate licensed his likeness for use in an advertising campaign for Baby Ruth in 1995. In 2005, the Baby Ruth bar became the official candy bar of Major League Baseball in a marketing arrangement.
In 2018, President Donald Trump announced that Ruth, along with Elvis Presley and Antonin Scalia, would posthumously receive the Presidential Medal of Freedom. Montville describes the continuing relevance of Babe Ruth in American culture, more than three-quarters of a century after he last swung a bat in a major league game:
The fascination with his life and career continues. He is a bombastic, sloppy hero from our bombastic, sloppy history, origins undetermined, a folk tale of American success. His moon face is as recognizable today as it was when he stared out at Tom Zachary on a certain September afternoon in 1927. If sport has become the national religion, Babe Ruth is the patron saint. He stands at the heart of the game he played, the promise of a warm summer night, a bag of peanuts, and a beer. And just maybe, the longest ball hit out of the park.
"title": "Professional baseball"
},
{
"paragraph_id": 44,
"text": "The home runs kept on coming. Ruth tied his own record of 29 on July 15 and broke it with home runs in both games of a doubleheader four days later. By the end of July, he had 37, but his pace slackened somewhat after that. Nevertheless, on September 4, he both tied and broke the organized baseball record for home runs in a season, snapping Perry Werden's 1895 mark of 44 in the minor Western League. The Yankees played well as a team, battling for the league lead early in the summer, but slumped in August in the AL pennant battle with Chicago and Cleveland. The pennant and the World Series were won by Cleveland, who surged ahead after the Black Sox Scandal broke on September 28 and led to the suspension of many of Chicago's top players, including Shoeless Joe Jackson. The Yankees finished third, but drew 1.2 million fans to the Polo Grounds, the first time a team had drawn a seven-figure attendance. The rest of the league sold 600,000 more tickets, many fans there to see Ruth, who led the league with 54 home runs, 158 runs, and 137 runs batted in (RBIs).",
"title": "Professional baseball"
},
{
"paragraph_id": 45,
"text": "In 1920 and afterwards, Ruth was aided in his power hitting by the fact that A.J. Reach Company—the maker of baseballs used in the major leagues—was using a more efficient machine to wind the yarn found within the baseball. The new baseballs went into play in 1920 and ushered the start of the live-ball era; the number of home runs across the major leagues increased by 184 over the previous year. Baseball statistician Bill James pointed out that while Ruth was likely aided by the change in the baseball, there were other factors at work, including the gradual abolition of the spitball (accelerated after the death of Ray Chapman, struck by a pitched ball thrown by Mays in August 1920) and the more frequent use of new baseballs (also a response to Chapman's death). Nevertheless, James theorized that Ruth's 1920 explosion might have happened in 1919, had a full season of 154 games been played rather than 140, had Ruth refrained from pitching 133 innings that season, and if he were playing at any other home field but Fenway Park, where he hit only 9 of 29 home runs.",
"title": "Professional baseball"
},
{
"paragraph_id": 46,
"text": "Yankees business manager Harry Sparrow had died early in the 1920 season. Ruppert and Huston hired Barrow to replace him. The two men quickly made a deal with Frazee for New York to acquire some of the players who would be mainstays of the early Yankee pennant-winning teams, including catcher Wally Schang and pitcher Waite Hoyt. The 21-year-old Hoyt became close to Ruth:",
"title": "Professional baseball"
},
{
"paragraph_id": 47,
"text": "The outrageous life fascinated Hoyt, the don't-give-a-shit freedom of it, the nonstop, pell-mell charge into excess. How did a man drink so much and never get drunk? ... The puzzle of Babe Ruth never was dull, no matter how many times Hoyt picked up the pieces and stared at them. After games he would follow the crowd to the Babe's suite. No matter what the town, the beer would be iced and the bottles would fill the bathtub.",
"title": "Professional baseball"
},
{
"paragraph_id": 48,
"text": "In the offseason, Ruth spent some time in Havana, Cuba, where he was said to have lost $35,000 (equivalent to $570,000 in 2022) betting on horse races.",
"title": "Professional baseball"
},
{
"paragraph_id": 49,
"text": "Ruth hit home runs early and often in the 1921 season, during which he broke Roger Connor's mark for home runs in a career, 138. Each of the almost 600 home runs Ruth hit in his career after that extended his own record. After a slow start, the Yankees were soon locked in a tight pennant race with Cleveland, winners of the 1920 World Series. On September 15, Ruth hit his 55th home run, breaking his year-old single-season record. In late September, the Yankees visited Cleveland and won three out of four games, giving them the upper hand in the race, and clinched their first pennant a few days later. Ruth finished the regular season with 59 home runs, batting .378 and with a slugging percentage of .846. Ruth's 177 runs scored, 119 extra-base hits, and 457 total bases set modern-era records that still stand as of 2023.",
"title": "Professional baseball"
},
{
"paragraph_id": 50,
"text": "The Yankees had high expectations when they met the New York Giants in the 1921 World Series, every game of which was played in the Polo Grounds. The Yankees won the first two games with Ruth in the lineup. However, Ruth badly scraped his elbow during Game 2 when he slid into third base (he had walked and stolen both second and third bases). After the game, he was told by the team physician not to play the rest of the series. Despite this advice, he did play in the next three games, and pinch-hit in Game Eight of the best-of-nine series, but the Yankees lost, five games to three. Ruth hit .316, drove in five runs and hit his first World Series home run.",
"title": "Professional baseball"
},
{
"paragraph_id": 51,
"text": "After the Series, Ruth and teammates Bob Meusel and Bill Piercy participated in a barnstorming tour in the Northeast. A rule then in force prohibited World Series participants from playing in exhibition games during the offseason, the purpose being to prevent Series participants from replicating the Series and undermining its value. Baseball Commissioner Kenesaw Mountain Landis suspended the trio until May 20, 1922, and fined them their 1921 World Series checks. In August 1922, the rule was changed to allow limited barnstorming for World Series participants, with Landis's permission required.",
"title": "Professional baseball"
},
{
"paragraph_id": 52,
"text": "On March 4, 1922, Ruth signed a new contract for three years at $52,000 a year (equivalent to $910,000 in 2022). This was more than two times the largest sum ever paid to a ballplayer up to that point and it represented 40% of the team's player payroll.",
"title": "Professional baseball"
},
{
"paragraph_id": 53,
"text": "Despite his suspension, Ruth was named the Yankees' new on-field captain prior to the 1922 season. During the suspension, he worked out with the team in the morning and played exhibition games with the Yankees on their off days. He and Meusel returned on May 20 to a sellout crowd at the Polo Grounds, but Ruth batted 0-for-4 and was booed. On May 25, he was thrown out of the game for throwing dust in umpire George Hildebrand's face, then climbed into the stands to confront a heckler. Ban Johnson ordered him fined, suspended, and stripped of position as team captain. In his shortened season, Ruth appeared in 110 games, batted .315, with 35 home runs, and drove in 99 runs, but the 1922 season was a disappointment in comparison to his two previous dominating years. Despite Ruth's off-year, the Yankees managed to win the pennant and faced the New York Giants in the World Series for the second consecutive year. In the Series, Giants manager John McGraw instructed his pitchers to throw him nothing but curveballs, and Ruth never adjusted. Ruth had just two hits in 17 at bats, and the Yankees lost to the Giants for the second straight year, by 4–0 (with one tie game). Sportswriter Joe Vila called him, \"an exploded phenomenon\".",
"title": "Professional baseball"
},
{
"paragraph_id": 54,
"text": "After the season, Ruth was a guest at an Elks Club banquet, set up by Ruth's agent with Yankee team support. There, each speaker, concluding with future New York mayor Jimmy Walker, censured him for his poor behavior. An emotional Ruth promised reform, and, to the surprise of many, followed through. When he reported to spring training, he was in his best shape as a Yankee, weighing only 210 pounds (95 kg).",
"title": "Professional baseball"
},
{
"paragraph_id": 55,
"text": "The Yankees' status as tenants of the Giants at the Polo Grounds had become increasingly uneasy, and in 1922, Giants owner Charles Stoneham said the Yankees' lease, expiring after that season, would not be renewed. Ruppert and Huston had long contemplated a new stadium, and had taken an option on property at 161st Street and River Avenue in the Bronx. Yankee Stadium was completed in time for the home opener on April 18, 1923, at which Ruth hit the first home run in what was quickly dubbed \"the House that Ruth Built\". The ballpark was designed with Ruth in mind: although the venue's left-field fence was further from home plate than at the Polo Grounds, Yankee Stadium's right-field fence was closer, making home runs easier to hit for left-handed batters. To spare Ruth's eyes, right field—his defensive position—was not pointed into the afternoon sun, as was traditional; left fielder Meusel soon developed headaches from squinting toward home plate.",
"title": "Professional baseball"
},
{
"paragraph_id": 56,
"text": "During the 1923 season, the Yankees were never seriously challenged and won the AL pennant by 17 games. Ruth finished the season with a career-high .393 batting average and 41 home runs, which tied Cy Williams for the most in the major-leagues that year. Ruth hit a career-high 45 doubles in 1923, and he reached base 379 times, then a major league record. For the third straight year, the Yankees faced the Giants in the World Series, which Ruth dominated. He batted .368, walked eight times, scored eight runs, hit three home runs and slugged 1.000 during the series, as the Yankees christened their new stadium with their first World Series championship, four games to two.",
"title": "Professional baseball"
},
{
"paragraph_id": 57,
"text": "In 1924, the Yankees were favored to become the first team to win four consecutive pennants. Plagued by injuries, they found themselves in a battle with the Senators. Although the Yankees won 18 of 22 at one point in September, the Senators beat out the Yankees by two games. Ruth hit .378, winning his only AL batting title, with a league-leading 46 home runs.",
"title": "Professional baseball"
},
{
"paragraph_id": 58,
"text": "Ruth did not look like an athlete; he was described as \"toothpicks attached to a piano\", with a big upper body but thin wrists and legs. Ruth had kept up his efforts to stay in shape in 1923 and 1924, but by early 1925 weighed nearly 260 pounds (120 kg). His annual visit to Hot Springs, Arkansas, where he exercised and took saunas early in the year, did him no good as he spent much of the time carousing in the resort town. He became ill while there, and relapsed during spring training. Ruth collapsed in Asheville, North Carolina, as the team journeyed north. He was put on a train for New York, where he was briefly hospitalized. A rumor circulated that he had died, prompting British newspapers to print a premature obituary. In New York, Ruth collapsed again and was found unconscious in his hotel bathroom. He was taken to a hospital where he had multiple convulsions. After sportswriter W. O. McGeehan wrote that Ruth's illness was due to binging on hot dogs and soda pop before a game, it became known as \"the bellyache heard 'round the world\". However, the exact cause of his ailment has never been confirmed and remains a mystery. Glenn Stout, in his history of the Yankees, writes that the Ruth legend is \"still one of the most sheltered in sports\"; he suggests that alcohol was at the root of Ruth's illness, pointing to the fact that Ruth remained six weeks at St. Vincent's Hospital but was allowed to leave, under supervision, for workouts with the team for part of that time. He concludes that the hospitalization was behavior-related. Playing just 98 games, Ruth had his worst season as a Yankee; he finished with a .290 average and 25 home runs. The Yankees finished next to last in the AL with a 69–85 record, their last season with a losing record until 1965.",
"title": "Professional baseball"
},
{
"paragraph_id": 59,
"text": "Ruth spent part of the offseason of 1925–26 working out at Artie McGovern's gym, where he got back into shape. Barrow and Huggins had rebuilt the team and surrounded the veteran core with good young players like Tony Lazzeri and Lou Gehrig, but the Yankees were not expected to win the pennant.",
"title": "Professional baseball"
},
{
"paragraph_id": 60,
"text": "Ruth returned to his normal production during 1926, when he batted .372 with 47 home runs and 146 RBIs. The Yankees built a 10-game lead by mid-June and coasted to win the pennant by three games. The St. Louis Cardinals had won the National League with the lowest winning percentage for a pennant winner to that point (.578) and the Yankees were expected to win the World Series easily. Although the Yankees won the opener in New York, St. Louis took Games Two and Three. In Game Four, Ruth hit three home runs—the first time this had been done in a World Series game—to lead the Yankees to victory. In the fifth game, Ruth caught a ball as he crashed into the fence. The play was described by baseball writers as a defensive gem. New York took that game, but Grover Cleveland Alexander won Game Six for St. Louis to tie the Series at three games each, then got very drunk. He was nevertheless inserted into Game Seven in the seventh inning and shut down the Yankees to win the game, 3–2, and win the Series. Ruth had hit his fourth home run of the Series earlier in the game and was the only Yankee to reach base off Alexander; he walked in the ninth inning before being thrown out to end the game when he attempted to steal second base. Although Ruth's attempt to steal second is often deemed a baserunning blunder, Creamer pointed out that the Yankees' chances of tying the game would have been greatly improved with a runner in scoring position.",
"title": "Professional baseball"
},
{
"paragraph_id": 61,
"text": "The 1926 World Series was also known for Ruth's promise to Johnny Sylvester, a hospitalized 11-year-old boy. Ruth promised the child that he would hit a home run on his behalf. Sylvester had been injured in a fall from a horse, and a friend of Sylvester's father gave the boy two autographed baseballs signed by Yankees and Cardinals. The friend relayed a promise from Ruth (who did not know the boy) that he would hit a home run for him. After the Series, Ruth visited the boy in the hospital. When the matter became public, the press greatly inflated it, and by some accounts, Ruth allegedly saved the boy's life by visiting him, emotionally promising to hit a home run, and doing so. Ruth's 1926 salary of $52,000 was far more than any other baseball player, but he made at least twice as much in other income, including $100,000 from 12 weeks of vaudeville.",
"title": "Professional baseball"
},
{
"paragraph_id": 62,
"text": "The 1927 New York Yankees team is considered one of the greatest squads to ever take the field. Known as Murderers' Row because of the power of its lineup, the team clinched first place on Labor Day, won a then-AL-record 110 games and took the AL pennant by 19 games. There was no suspense in the pennant race, and the nation turned its attention to Ruth's pursuit of his own single-season home run record of 59 round trippers. Ruth was not alone in this chase. Teammate Lou Gehrig proved to be a slugger who was capable of challenging Ruth for his home run crown; he tied Ruth with 24 home runs late in June. Through July and August, the dynamic duo was never separated by more than two home runs. Gehrig took the lead, 45–44, in the first game of a doubleheader at Fenway Park early in September; Ruth responded with two blasts of his own to take the lead, as it proved permanently—Gehrig finished with 47. Even so, as of September 6, Ruth was still several games off his 1921 pace, and going into the final series against the Senators, had only 57. He hit two in the first game of the series, including one off of Paul Hopkins, facing his first major league batter, to tie the record. The following day, September 30, he broke it with his 60th homer, in the eighth inning off Tom Zachary to break a 2–2 tie. \"Sixty! Let's see some son of a bitch try to top that one\", Ruth exulted after the game. In addition to his career-high 60 home runs, Ruth batted .356, drove in 164 runs and slugged .772. In the 1927 World Series, the Yankees swept the Pittsburgh Pirates in four games; the National Leaguers were disheartened after watching the Yankees take batting practice before Game One, with ball after ball leaving Forbes Field. According to Appel, \"The 1927 New York Yankees. Even today, the words inspire awe ... all baseball success is measured against the '27 team.\"",
"title": "Professional baseball"
},
{
"paragraph_id": 63,
"text": "The following season started off well for the Yankees, who led the league in the early going. But the Yankees were plagued by injuries, erratic pitching and inconsistent play. The Philadelphia Athletics, rebuilding after some lean years, erased the Yankees' big lead and even took over first place briefly in early September. The Yankees, however, regained first place when they beat the Athletics three out of four games in a pivotal series at Yankee Stadium later that month, and clinched the pennant in the final weekend of the season. Ruth's play in 1928 mirrored his team's performance. He got off to a hot start and on August 1, he had 42 home runs. This put him ahead of his 60 home run pace from the previous season. He then slumped for the latter part of the season, and he hit just twelve home runs in the last two months. Ruth's batting average also fell to .323, well below his career average. Nevertheless, he ended the season with 54 home runs. The Yankees swept the favored Cardinals in four games in the World Series, with Ruth batting .625 and hitting three home runs in Game Four, including one off Alexander.",
"title": "Professional baseball"
},
{
"paragraph_id": 64,
"text": "Before the 1929 season, Ruppert (who had bought out Huston in 1923) announced that the Yankees would wear uniform numbers to allow fans at cavernous Yankee Stadium to easily identify the players. The Cardinals and Indians had each experimented with uniform numbers; the Yankees were the first to use them on both home and away uniforms. Ruth batted third and was given number 3. According to a long-standing baseball legend, the Yankees adopted their now-iconic pinstriped uniforms in hopes of making Ruth look slimmer. In truth, though, they had been wearing pinstripes since 1915.",
"title": "Professional baseball"
},
{
"paragraph_id": 65,
"text": "Although the Yankees started well, the Athletics soon proved they were the better team in 1929, splitting two series with the Yankees in the first month of the season, then taking advantage of a Yankee losing streak in mid-May to gain first place. Although Ruth performed well, the Yankees were not able to catch the Athletics—Connie Mack had built another great team. Tragedy struck the Yankees late in the year as manager Huggins died at 51 of erysipelas, a bacterial skin infection, on September 25, only ten days after he had last directed the team. Despite their past differences, Ruth praised Huggins and described him as a \"great guy\". The Yankees finished second, 18 games behind the Athletics. Ruth hit .345 during the season, with 46 home runs and 154 RBIs.",
"title": "Professional baseball"
},
{
"paragraph_id": 66,
"text": "On October 17, the Yankees hired Bob Shawkey as manager; he was their fourth choice. Ruth had politicked for the job of player-manager, but Ruppert and Barrow never seriously considered him for the position. Stout deemed this the first hint Ruth would have no future with the Yankees once he retired as a player. Shawkey, a former Yankees player and teammate of Ruth, would prove unable to command Ruth's respect.",
"title": "Professional baseball"
},
{
"paragraph_id": 67,
"text": "On January 7, 1930, salary negotiations between the Yankees and Ruth quickly broke down. Having just concluded a three-year contract at an annual salary of $70,000, Ruth promptly rejected both the Yankees' initial proposal of $70,000 for one year and their 'final' offer of two years at seventy-five—the latter figure equaling the annual salary of then US President Herbert Hoover; instead, Ruth demanded at least $85,000 and three years. When asked why he thought he was \"worth more than the President of the United States,\" Ruth responded: \"Say, if I hadn't been sick last summer, I'd have broken hell out of that home run record! Besides, the President gets a four-year contract. I'm only asking for three.\" Exactly two months later, a compromise was reached, with Ruth settling for two years at an unprecedented $80,000 per year. Ruth's salary was more than 2.4 times greater than the next-highest salary that season, a record margin as of 2019.",
"title": "Professional baseball"
},
{
"paragraph_id": 68,
"text": "In 1930, Ruth hit .359 with 49 home runs (his best in his years after 1928) and 153 RBIs, and pitched his first game in nine years, a complete game victory. Nevertheless, the Athletics won their second consecutive pennant and World Series, as the Yankees finished in third place, sixteen games back. At the end of the season, Shawkey was fired and replaced with Cubs manager Joe McCarthy, though Ruth again unsuccessfully sought the job.",
"title": "Professional baseball"
},
{
"paragraph_id": 69,
"text": "McCarthy was a disciplinarian, but chose not to interfere with Ruth, who did not seek conflict with the manager. The team improved in 1931, but was no match for the Athletics, who won 107 games, 13+1⁄2 games in front of the Yankees. Ruth, for his part, hit .373, with 46 home runs and 163 RBIs. He had 31 doubles, his most since 1924. In the 1932 season, the Yankees went 107–47 and won the pennant. Ruth's effectiveness had decreased somewhat, but he still hit .341 with 41 home runs and 137 RBIs. Nevertheless, he was sidelined twice because of injuries during the season.",
"title": "Professional baseball"
},
{
"paragraph_id": 70,
"text": "The Yankees faced the Cubs, McCarthy's former team, in the 1932 World Series. There was bad blood between the two teams as the Yankees resented the Cubs only awarding half a World Series share to Mark Koenig, a former Yankee. The games at Yankee Stadium had not been sellouts; both were won by the home team, with Ruth collecting two singles, but scoring four runs as he was walked four times by the Cubs pitchers. In Chicago, Ruth was resentful at the hostile crowds that met the Yankees' train and jeered them at the hotel. The crowd for Game Three included New York Governor Franklin D. Roosevelt, the Democratic candidate for president, who sat with Chicago Mayor Anton Cermak. Many in the crowd threw lemons at Ruth, a sign of derision, and others (as well as the Cubs themselves) shouted abuse at Ruth and other Yankees. They were briefly silenced when Ruth hit a three-run home run off Charlie Root in the first inning, but soon revived, and the Cubs tied the score at 4–4 in the fourth inning, partly due to Ruth's fielding error in the outfield. When Ruth came to the plate in the top of the fifth, the Chicago crowd and players, led by pitcher Guy Bush, were screaming insults at Ruth. With the count at two balls and one strike, Ruth gestured, possibly in the direction of center field, and after the next pitch (a strike), may have pointed there with one hand. Ruth hit the fifth pitch over the center field fence; estimates were that it traveled nearly 500 feet (150 m). Whether or not Ruth intended to indicate where he planned to (and did) hit the ball (Charlie Devens, who, in 1999, was interviewed as Ruth's surviving teammate in that game, did not think so), the incident has gone down in legend as Babe Ruth's called shot. The Yankees won Game Three, and the following day clinched the Series with another victory. During that game, Bush hit Ruth on the arm with a pitch, causing words to be exchanged and provoking a game-winning Yankee rally.",
"title": "Professional baseball"
},
{
"paragraph_id": 71,
"text": "Ruth remained productive in 1933. He batted .301, with 34 home runs, 103 RBIs, and a league-leading 114 walks, as the Yankees finished in second place, seven games behind the Senators. Athletics manager Connie Mack selected him to play right field in the first Major League Baseball All-Star Game, held on July 6, 1933, at Comiskey Park in Chicago. He hit the first home run in the All-Star Game's history, a two-run blast against Bill Hallahan during the third inning, which helped the AL win the game 4–2. During the final game of the 1933 season, as a publicity stunt organized by his team, Ruth was called upon and pitched a complete game victory against the Red Sox, his final appearance as a pitcher. Despite unremarkable pitching numbers, Ruth had a 5–0 record in five games for the Yankees, raising his career totals to 94–46.",
"title": "Professional baseball"
},
{
"paragraph_id": 72,
"text": "In 1934, Ruth played in his last full season with the Yankees. By this time, years of high living were starting to catch up with him. His conditioning had deteriorated to the point that he could no longer field or run. He accepted a pay cut to $35,000 from Ruppert, but he was still the highest-paid player in the major leagues. He could still handle a bat, recording a .288 batting average with 22 home runs. However, Reisler described these statistics as \"merely mortal\" by Ruth's previous standards. Ruth was selected to the AL All-Star team for the second consecutive year, even though he was in the twilight of his career. During the game, New York Giants pitcher Carl Hubbell struck out Ruth and four other future Hall-of-Famers consecutively. The Yankees finished second again, seven games behind the Tigers.",
"title": "Professional baseball"
},
{
"paragraph_id": 73,
"text": "By this time, Ruth knew he was nearly finished as a player. He desired to remain in baseball as a manager. He was often spoken of as a possible candidate as managerial jobs opened up, but in 1932, when he was mentioned as a contender for the Red Sox position, Ruth stated that he was not yet ready to leave the field. There were rumors that Ruth was a likely candidate each time when the Cleveland Indians, Cincinnati Reds, and Detroit Tigers were looking for a manager, but nothing came of them.",
"title": "Professional baseball"
},
{
"paragraph_id": 74,
"text": "Just before the 1934 season, Ruppert offered to make Ruth the manager of the Yankees' top minor-league team, the Newark Bears, but he was talked out of it by his wife, Claire, and his business manager, Christy Walsh. Tigers owner Frank Navin seriously considered acquiring Ruth and making him player-manager. However, Ruth insisted on delaying the meeting until he came back from a trip to Hawaii. Navin was unwilling to wait. Ruth opted to go on his trip, despite Barrow advising him that he was making a mistake; in any event, Ruth's asking price was too high for the notoriously tight-fisted Navin. The Tigers' job ultimately went to Mickey Cochrane.",
"title": "Professional baseball"
},
{
"paragraph_id": 75,
"text": "Early in the 1934 season, Ruth openly campaigned to become the Yankees manager. However, the Yankee job was never a serious possibility. Ruppert always supported McCarthy, who would remain in his position for another 12 seasons. The relationship between Ruth and McCarthy had been lukewarm at best, and Ruth's managerial ambitions further chilled their interpersonal relations. By the end of the season, Ruth hinted that he would retire unless Ruppert named him manager of the Yankees. When the time came, Ruppert wanted Ruth to leave the team without drama or hard feelings.",
"title": "Professional baseball"
},
{
"paragraph_id": 76,
"text": "During the 1934–35 offseason, Ruth circled the world with his wife; the trip included a barnstorming tour of the Far East. At his final stop in the United Kingdom before returning home, Ruth was introduced to cricket by Australian player Alan Fairfax, and after having little luck in a cricketer's stance, he stood as a baseball batter and launched some massive shots around the field, destroying the bat in the process. Although Fairfax regretted that he could not have the time to make Ruth a cricket player, Ruth had lost any interest in such a career upon learning that the best batsmen made only about $40 per week.",
"title": "Professional baseball"
},
{
"paragraph_id": 77,
"text": "Also during the offseason, Ruppert had been sounding out the other clubs in hopes of finding one that would be willing to take Ruth as a manager and/or a player. However, the only serious offer came from Athletics owner-manager Connie Mack, who gave some thought to stepping down as manager in favor of Ruth. However, Mack later dropped the idea, saying that Ruth's wife would be running the team in a month if Ruth ever took over.",
"title": "Professional baseball"
},
{
"paragraph_id": 78,
"text": "While the barnstorming tour was underway, Ruppert began negotiating with Boston Braves owner Judge Emil Fuchs, who wanted Ruth as a gate attraction. The Braves had enjoyed modest recent success, finishing fourth in the National League in both 1933 and 1934, but the team drew poorly at the box office. Unable to afford the rent at Braves Field, Fuchs had considered holding dog races there when the Braves were not at home, only to be turned down by Landis. After a series of phone calls, letters, and meetings, the Yankees traded Ruth to the Braves on February 26, 1935. Ruppert had stated that he would not release Ruth to go to another team as a full-time player. For this reason, it was announced that Ruth would become a team vice president and would be consulted on all club transactions, in addition to playing. He was also made assistant manager to Braves skipper Bill McKechnie. In a long letter to Ruth a few days before the press conference, Fuchs promised Ruth a share in the Braves' profits, with the possibility of becoming co-owner of the team. Fuchs also raised the possibility of Ruth succeeding McKechnie as manager, perhaps as early as 1936. Ruppert called the deal \"the greatest opportunity Ruth ever had\".",
"title": "Professional baseball"
},
{
"paragraph_id": 79,
"text": "There was considerable attention as Ruth reported for spring training. He did not hit his first home run of the spring until after the team had left Florida, and was beginning the road north in Savannah. He hit two in an exhibition game against the Bears. Amid much press attention, Ruth played his first home game in Boston in over 16 years. Before an opening-day crowd of over 25,000, including five of New England's six state governors, Ruth accounted for all the Braves' runs in a 4–2 defeat of the New York Giants, hitting a two-run home run, singling to drive in a third run and later in the inning scoring the fourth. Although age and weight had slowed him, he made a running catch in left field that sportswriters deemed the defensive highlight of the game.",
"title": "Professional baseball"
},
{
"paragraph_id": 80,
"text": "Ruth had two hits in the second game of the season, but it quickly went downhill both for him and the Braves from there. The season soon settled down to a routine of Ruth performing poorly on the few occasions he even played at all. As April passed into May, Ruth's physical deterioration became even more pronounced. While he remained productive at the plate early on, he could do little else. His conditioning had become so poor that he could barely trot around the bases. He made so many errors that three Braves pitchers told McKechnie they would not take the mound if he was in the lineup. Before long, Ruth stopped hitting as well. He grew increasingly annoyed that McKechnie ignored most of his advice. McKechnie later said that Ruth's presence made enforcing discipline nearly impossible.",
"title": "Professional baseball"
},
{
"paragraph_id": 81,
"text": "Ruth soon realized that Fuchs had deceived him, and had no intention of making him manager or giving him any significant off-field duties. He later said his only duties as vice president consisted of making public appearances and autographing tickets. Ruth also found out that far from giving him a share of the profits, Fuchs wanted him to invest some of his money in the team in a last-ditch effort to improve its balance sheet. As it turned out, Fuchs and Ruppert had both known all along that Ruth's non-playing positions were meaningless.",
"title": "Professional baseball"
},
{
"paragraph_id": 82,
"text": "By the end of the first month of the season, Ruth concluded he was finished even as a part-time player. As early as May 12, he asked Fuchs to let him retire. Ultimately, Fuchs persuaded Ruth to remain at least until after the Memorial Day doubleheader in Philadelphia. In the interim was a western road trip, at which the rival teams had scheduled days to honor him. In Chicago and St. Louis, Ruth performed poorly, and his batting average sank to .155, with only two additional home runs for a total of three on the season so far. In the first two games in Pittsburgh, Ruth had only one hit, though a long fly caught by Paul Waner probably would have been a home run in any other ballpark besides Forbes Field.",
"title": "Professional baseball"
},
{
"paragraph_id": 83,
"text": "Ruth played in the third game of the Pittsburgh series on May 25, 1935, and added one more tale to his playing legend. Ruth went 4-for-4, including three home runs, though the Braves lost the game 11–7. The last two were off Ruth's old Cubs nemesis, Guy Bush. The final home run, both of the game and of Ruth's career, sailed out of the park over the right field upper deck–the first time anyone had hit a fair ball completely out of Forbes Field. Ruth was urged to make this his last game, but he had given his word to Fuchs and played in Cincinnati and Philadelphia. The first game of the doubleheader in Philadelphia—the Braves lost both—was his final major league appearance. Ruth retired on June 2 after an argument with Fuchs. He finished 1935 with a .181 average—easily his worst as a full-time position player—and the final six of his 714 home runs. The Braves, 10–27 when Ruth left, finished 38–115, at .248 the worst winning percentage in modern National League history. Insolvent like his team, Fuchs gave up control of the Braves before the end of the season; the National League took over the franchise at the end of the year.",
"title": "Professional baseball"
},
{
"paragraph_id": 84,
"text": "Of the 5 members in the inaugural class of Baseball Hall of Fame in 1936 (Ty Cobb, Honus Wagner, Christy Mathewson, Walter Johnson and Ruth himself), only Ruth was not given an offer to manage a baseball team.",
"title": "Professional baseball"
},
{
"paragraph_id": 85,
"text": "Although Fuchs had given Ruth his unconditional release, no major league team expressed an interest in hiring him in any capacity. Ruth still hoped to be hired as a manager if he could not play anymore, but only one managerial position, Cleveland, became available between Ruth's retirement and the end of the 1937 season. Asked if he had considered Ruth for the job, Indians owner Alva Bradley replied negatively. Team owners and general managers assessed Ruth's flamboyant personal habits as a reason to exclude him from a managerial job; Barrow said of him, \"How can he manage other men when he can't even manage himself?\" Creamer believed Ruth was unfairly treated in never being given an opportunity to manage a major league club. The author believed there was not necessarily a relationship between personal conduct and managerial success, noting that John McGraw, Billy Martin, and Bobby Valentine were winners despite character flaws.",
"title": "Retirement"
},
{
"paragraph_id": 86,
"text": "Ruth played much golf and in a few exhibition baseball games, where he demonstrated a continuing ability to draw large crowds. This appeal contributed to the Dodgers hiring him as first base coach in 1938. When Ruth was hired, Brooklyn general manager Larry MacPhail made it clear that Ruth would not be considered for the manager's job if, as expected, Burleigh Grimes retired at the end of the season. Although much was said about what Ruth could teach the younger players, in practice, his duties were to appear on the field in uniform and encourage base runners—he was not called upon to relay signs. In August, shortly before the baseball rosters expanded, Ruth sought an opportunity to return as an active player in a pinch hitting role. Ruth often took batting practice before games and felt that he could take on the limited role. Grimes denied his request, citing Ruth's poor vision in his right eye, his inability to run the bases, and the risk of an injury to Ruth.",
"title": "Retirement"
},
{
"paragraph_id": 87,
"text": "Ruth got along well with everyone except team captain Leo Durocher, who was hired as Grimes' replacement at season's end. Ruth then left his job as a first base coach and would never again work in any capacity in the game of baseball.",
"title": "Retirement"
},
{
"paragraph_id": 88,
"text": "On July 4, 1939, Ruth spoke on Lou Gehrig Appreciation Day at Yankee Stadium as members of the 1927 Yankees and a sellout crowd turned out to honor the first baseman, who was forced into premature retirement by ALS, which would kill him two years later. The next week, Ruth went to Cooperstown, New York, for the formal opening of the Baseball Hall of Fame. Three years earlier, he was one of the first five players elected to the hall. As radio broadcasts of baseball games became popular, Ruth sought a job in that field, arguing that his celebrity and knowledge of baseball would assure large audiences, but he received no offers. During World War II, he made many personal appearances to advance the war effort, including his last appearance as a player at Yankee Stadium, in a 1943 exhibition for the Army-Navy Relief Fund. He hit a long fly ball off Walter Johnson; the blast left the field, curving foul, but Ruth circled the bases anyway. In 1946, he made a final effort to gain a job in baseball when he contacted new Yankees boss MacPhail, but he was sent a rejection letter. In 1999, Ruth's granddaughter, Linda Tosetti, and his stepdaughter, Julia Ruth Stevens, said that Babe's inability to land a managerial role with the Yankees caused him to feel hurt and slump into a severe depression.",
"title": "Retirement"
},
{
"paragraph_id": 89,
"text": "Ruth started playing golf when he was 20 and continued playing the game throughout his life. His appearance at many New York courses drew spectators and headlines. Rye Golf Club was among the courses he played with teammate Lyn Lary in June 1933. With birdies on 3 holes, Ruth posted the best score. In retirement, he became one of the first celebrity golfers participating in charity tournaments, including one where he was pitted against Ty Cobb.",
"title": "Retirement"
},
{
"paragraph_id": 90,
"text": "Ruth met Helen Woodford (1897–1929), by some accounts, in a coffee shop in Boston, where she was a waitress. They married as teenagers on October 17, 1914. Although Ruth later claimed to have been married in Elkton, Maryland, records show that they were married at St. Paul's Catholic Church in Ellicott City. They adopted a daughter, Dorothy (1921–1989), in 1921. Ruth and Helen separated around 1925 reportedly because of Ruth's repeated infidelities and neglect. They appeared in public as a couple for the last time during the 1926 World Series. Helen died in January 1929 at age 31 in a fire in a house in Watertown, Massachusetts owned by Edward Kinder, a dentist with whom she had been living as \"Mrs. Kinder\". In her book, My Dad, the Babe, Dorothy claimed that she was Ruth's biological child by a mistress named Juanita Jennings. In 1980, Juanita admitted this to Dorothy and Dorothy's stepsister, Julia Ruth Stevens, who was at the time already very ill.",
"title": "Personal life"
},
{
"paragraph_id": 91,
"text": "On April 17, 1929, three months after the death of his first wife, Ruth married actress and model Claire Merritt Hodgson (1897–1976) and adopted her daughter Julia (1916–2019). It was the second and final marriage for both parties. Claire, unlike Helen, was well-travelled and educated, and put structure into Ruth's life, like Miller Huggins did for him on the field.",
"title": "Personal life"
},
{
"paragraph_id": 92,
"text": "By one account, Julia and Dorothy were, through no fault of their own, the reason for the seven-year rift in Ruth's relationship with teammate Lou Gehrig. Sometime in 1932, during a conversation that she assumed was private, Gehrig's mother remarked, \"It's a shame [Claire] doesn't dress Dorothy as nicely as she dresses her own daughter.\" When the comment got back to Ruth, he angrily told Gehrig to tell his mother to mind her own business. Gehrig, in turn, took offense at what he perceived as Ruth's comment about his mother. The two men reportedly never spoke off the field until they reconciled at Yankee Stadium on Lou Gehrig Appreciation Day, July 4, 1939, shortly after Gehrig's retirement from baseball.",
"title": "Personal life"
},
{
"paragraph_id": 93,
"text": "Although Ruth was married throughout most of his baseball career, when team co-owner Tillinghast 'Cap' Huston asked him to tone down his lifestyle, Ruth replied, \"I'll promise to go easier on drinking and to get to bed earlier, but not for you, fifty thousand dollars, or two-hundred and fifty thousand dollars will I give up women. They're too much fun.\" A detective that the Yankees hired to follow him one night in Chicago reported that Ruth had been with six women. Ping Bodie said that he was not Ruth's roommate while traveling; \"I room with his suitcase\". Before the start of the 1922 season, Ruth had signed a three-year contract at $52,000 per year with an option to renew for two additional years. His performance during the 1922 season had been disappointing, attributed in part to his drinking and late-night hours. After the end of the 1922 season, he was asked to sign a contract addendum with a morals clause. Ruth and Ruppert signed it on November 11, 1922. It called for Ruth to abstain entirely from the use of intoxicating liquors, and to not stay up later than 1:00 a.m. during the training and playing season without permission of the manager. Ruth was also enjoined from any action or misbehavior that would compromise his ability to play baseball.",
"title": "Personal life"
},
{
"paragraph_id": 94,
"text": "As early as the war years, doctors had cautioned Ruth to take better care of his health, and he grudgingly followed their advice, limiting his drinking and not going on a proposed trip to support the troops in the South Pacific. In 1946, Ruth began experiencing severe pain over his left eye and had difficulty swallowing. In November 1946, Ruth entered French Hospital in New York for tests, which revealed that he had an inoperable malignant tumor at the base of his skull and in his neck. The malady was a lesion known as nasopharyngeal carcinoma, or \"lymphoepithelioma\". His name and fame gave him access to experimental treatments, and he was one of the first cancer patients to receive both drugs and radiation treatment simultaneously. Having lost 80 pounds (36 kg), he was discharged from the hospital in February and went to Florida to recuperate. He returned to New York and Yankee Stadium after the season started. The new commissioner, Happy Chandler (Judge Landis had died in 1944), proclaimed April 27, 1947, Babe Ruth Day around the major leagues, with the most significant observance to be at Yankee Stadium. A number of teammates and others spoke in honor of Ruth, who briefly addressed the crowd of almost 60,000. By then, his voice was a soft whisper with a very low, raspy tone.",
"title": "Cancer and death (1946–1948)"
},
{
"paragraph_id": 95,
"text": "Around this time, developments in chemotherapy offered some hope for Ruth. The doctors had not told Ruth he had cancer because of his family's fear that he might do himself harm. They treated him with pterolyl triglutamate (Teropterin), a folic acid derivative; he may have been the first human subject. Ruth showed dramatic improvement during the summer of 1947, so much so that his case was presented by his doctors at a scientific meeting, without using his name. He was able to travel around the country, doing promotional work for the Ford Motor Company on American Legion Baseball. He appeared again at another day in his honor at Yankee Stadium in September, but was not well enough to pitch in an old-timers game as he had hoped.",
"title": "Cancer and death (1946–1948)"
},
{
"paragraph_id": 96,
"text": "The improvement was only a temporary remission, and by late 1947, Ruth was unable to help with the writing of his autobiography, The Babe Ruth Story, which was almost entirely ghostwritten. In and out of the hospital in Manhattan, he left for Florida in February 1948, doing what activities he could. After six weeks he returned to New York to appear at a book-signing party. He also traveled to California to witness the filming of the movie based on the book.",
"title": "Cancer and death (1946–1948)"
},
{
"paragraph_id": 97,
"text": "On June 5, 1948, a \"gaunt and hollowed-out\" Ruth visited Yale University to donate a manuscript of The Babe Ruth Story to its library. At Yale, he met with future president George H. W. Bush, who was the captain of the Yale baseball team. On June 13, Ruth visited Yankee Stadium for the final time in his life, appearing at the 25th-anniversary celebrations of \"The House that Ruth Built\". By this time he had lost much weight and had difficulty walking. Introduced along with his surviving teammates from 1923, Ruth used a bat as a cane. Nat Fein's photo of Ruth taken from behind, standing near home plate and facing \"Ruthville\" (right field) became one of baseball's most famous and widely circulated photographs, and won the Pulitzer Prize.",
"title": "Cancer and death (1946–1948)"
},
{
"paragraph_id": 98,
"text": "Ruth made one final trip on behalf of American Legion Baseball. He then entered Memorial Hospital, where he would die. He was never told he had cancer; however, before his death, he surmised it. He was able to leave the hospital for a few short trips, including a final visit to Baltimore. On July 26, 1948, Ruth left the hospital to attend the premiere of the film The Babe Ruth Story. Shortly thereafter, he returned to the hospital for the final time. He was barely able to speak. Ruth's condition gradually grew worse, and only a few visitors were permitted to see him, one of whom was National League president and future Commissioner of Baseball Ford Frick. \"Ruth was so thin it was unbelievable. He had been such a big man and his arms were just skinny little bones, and his face was so haggard\", Frick said years later.",
"title": "Cancer and death (1946–1948)"
},
{
"paragraph_id": 99,
"text": "Thousands of New Yorkers, including many children, stood vigil outside the hospital during Ruth's final days. On August 16, 1948, at 8:01 p.m., Ruth died in his sleep at the age of 53. His open casket was placed on display in the rotunda of Yankee Stadium, where it remained for two days; 77,000 people filed past to pay him tribute. His Requiem Mass was celebrated by Francis Cardinal Spellman at St. Patrick's Cathedral; a crowd estimated at 75,000 waited outside. Ruth is buried with his second wife, Claire, on a hillside in Section 25 at the Gate of Heaven Cemetery in Hawthorne, New York.",
"title": "Cancer and death (1946–1948)"
},
{
"paragraph_id": 100,
"text": "On April 19, 1949, the Yankees unveiled a granite monument in Ruth's honor in center field of Yankee Stadium. The monument was located in the field of play next to a flagpole and similar tributes to Huggins and Gehrig until the stadium was remodeled from 1974 to 1975, which resulted in the outfield fences moving inward and enclosing the monuments from the playing field. This area was known thereafter as Monument Park. Yankee Stadium, \"the House that Ruth Built\", was replaced after the 2008 season with a new Yankee Stadium across the street from the old one; Monument Park was subsequently moved to the new venue behind the center field fence. Ruth's uniform number 3 has been retired by the Yankees, and he is one of five Yankees players or managers to have a granite monument within the stadium.",
"title": "Memorial and museum"
},
{
"paragraph_id": 101,
"text": "The Babe Ruth Birthplace Museum is located at 216 Emory Street, a Baltimore row house where Ruth was born, and three blocks west of Oriole Park at Camden Yards, where the AL's Baltimore Orioles play. The property was restored and opened to the public in 1973 by the non-profit Babe Ruth Birthplace Foundation, Inc. Ruth's widow, Claire, his two daughters, Dorothy and Julia, and his sister, Mamie, helped select and install exhibits for the museum.",
"title": "Memorial and museum"
},
{
"paragraph_id": 102,
"text": "Ruth was the first baseball star to be the subject of overwhelming public adulation. Baseball had been known for star players such as Ty Cobb and \"Shoeless Joe\" Jackson, but both men had uneasy relations with fans. In Cobb's case, the incidents were sometimes marked by violence. Ruth's biographers agreed that he benefited from the timing of his ascension to \"Home Run King\". The country had been hit hard by both the war and the 1918 flu pandemic and longed for something to help put these traumas behind it. Ruth also resonated in a country which felt, in the aftermath of the war, that it took second place to no one. Montville argued that Ruth was a larger-than-life figure who was capable of unprecedented athletic feats in the nation's largest city. Ruth became an icon of the social changes that marked the early 1920s. In his history of the Yankees, Glenn Stout writes that \"Ruth was New York incarnate—uncouth and raw, flamboyant and flashy, oversized, out of scale, and absolutely unstoppable\".",
"title": "Impact"
},
{
"paragraph_id": 103,
"text": "During his lifetime, Ruth became a symbol of the United States. During World War II, Japanese soldiers yelled in English, \"To hell with Babe Ruth\", to anger American soldiers. Ruth replied that he hoped \"every Jap that mention[ed] my name gets shot\". Creamer recorded that \"Babe Ruth transcended sport and moved far beyond the artificial limits of baselines and outfield fences and sports pages\". Wagenheim stated, \"He appealed to a deeply rooted American yearning for the definitive climax: clean, quick, unarguable.\" According to Glenn Stout, \"Ruth's home runs were [an] exalted, uplifting experience that meant more to fans than any runs they were responsible for. A Babe Ruth home run was an event unto itself, one that meant anything was possible.\"",
"title": "Impact"
},
{
"paragraph_id": 104,
"text": "Although Ruth was not just a power hitter—he was the Yankees' best bunter, and an excellent outfielder—Ruth's penchant for hitting home runs altered how baseball is played. Prior to 1920, home runs were unusual, and managers tried to win games by getting a runner on base and bringing him around to score through such means as the stolen base, the bunt, and the hit and run. Advocates of what was dubbed \"inside baseball\", such as Giants manager McGraw, disliked the home run, considering it a blot on the purity of the game. According to sportswriter W. A. Phelon, after the 1920 season, Ruth's breakout performance that season and the response in excitement and attendance, \"settled, for all time to come, that the American public is nuttier over the Home Run than the Clever Fielding or the Hitless Pitching. Viva el Home Run and two times viva Babe Ruth, exponent of the home run, and overshadowing star.\" Bill James states, \"When the owners discovered that the fans liked to see home runs, and when the foundations of the games were simultaneously imperiled by disgrace [in the Black Sox Scandal], then there was no turning back.\" While a few, such as McGraw and Cobb, decried the passing of the old-style play, teams quickly began to seek and develop sluggers.",
"title": "Impact"
},
{
"paragraph_id": 105,
"text": "According to sportswriter Grantland Rice, only two sports figures of the 1920s approached Ruth in popularity—boxer Jack Dempsey and racehorse Man o' War. One of the factors that contributed to Ruth's broad appeal was the uncertainty about his family and early life. Ruth appeared to exemplify the American success story, that even an uneducated, unsophisticated youth, without any family wealth or connections, can do something better than anyone else in the world. Montville writes that \"the fog [surrounding his childhood] will make him forever accessible, universal. He will be the patron saint of American possibility.\" Similarly, the fact that Ruth played in the pre-television era, when a relatively small portion of his fans had the opportunity to see him play allowed his legend to grow through word of mouth and the hyperbole of sports reporters. Reisler states that recent sluggers who surpassed Ruth's 60-home run mark, such as Mark McGwire and Barry Bonds, generated much less excitement than when Ruth repeatedly broke the single-season home run record in the 1920s. Ruth dominated a relatively small sports world, while Americans of the present era have many sports available to watch.",
"title": "Impact"
},
{
"paragraph_id": 106,
"text": "Creamer describes Ruth as \"a unique figure in the social history of the United States\". Thomas Barthel describes him as one of the first celebrity athletes; numerous biographies have portrayed him as \"larger than life\". He entered the language: a dominant figure in a field, whether within or outside sports, is often referred to as \"the Babe Ruth\" of that field. Similarly, \"Ruthian\" has come to mean in sports, \"colossal, dramatic, prodigious, magnificent; with great power\". He was the first athlete to make more money from endorsements and other off-the-field activities than from his sport.",
"title": "Legacy"
},
{
"paragraph_id": 107,
"text": "In 2006, Montville stated that more books have been written about Ruth than any other member of the Baseball Hall of Fame. At least five of these books (including Creamer's and Wagenheim's) were written in 1973 and 1974. The books were timed to capitalize on the increase in public interest in Ruth as Hank Aaron approached his career home run mark, which he broke on April 8, 1974. As he approached Ruth's record, Aaron stated, \"I can't remember a day this year or last when I did not hear the name of Babe Ruth.\"",
"title": "Legacy"
},
{
"paragraph_id": 108,
"text": "Montville suggested that Ruth is probably even more popular today than he was when his career home run record was broken by Aaron. The long ball era that Ruth started continues in baseball, to the delight of the fans. Owners build ballparks to encourage home runs, which are featured on SportsCenter and Baseball Tonight each evening during the season. The questions of performance-enhancing drug use, which dogged later home run hitters such as McGwire and Bonds, do nothing to diminish Ruth's reputation; his overindulgences with beer and hot dogs seem part of a simpler time.",
"title": "Legacy"
},
{
"paragraph_id": 109,
"text": "In various surveys and rankings, Ruth has been named the greatest baseball player of all time. In 1998, The Sporting News ranked him number one on the list of \"Baseball's 100 Greatest Players\". In 1999, baseball fans named Ruth to the Major League Baseball All-Century Team. He was named baseball's Greatest Player Ever in a ballot commemorating the 100th anniversary of professional baseball in 1969. The Associated Press reported in 1993 that Muhammad Ali was tied with Babe Ruth as the most recognized athlete in America. In a 1999 ESPN poll, he was ranked as the second-greatest U.S. athlete of the century, behind Michael Jordan. In 1983, the United States Postal Service honored Ruth with the issuance of a twenty-cent stamp.",
"title": "Legacy"
},
{
"paragraph_id": 110,
"text": "Several of the most expensive items of sports memorabilia and baseball memorabilia ever sold at auction are associated with Ruth. As of May 2022, Ruth's 1920 Yankees jersey, which sold for $4,415,658 in 2012 (equivalent to $5.63 million in 2022), is the third most expensive piece of sports memorabilia ever sold, after Diego Maradona's 1986 World Cup jersey and Pierre de Coubertin's original 1892 Olympic Manifesto. The bat with which he hit the first home run at Yankee Stadium is in The Guinness Book of World Records as the most expensive baseball bat sold at auction, having fetched $1.265 million on December 2, 2004 (equivalent to $1.9599 million in 2022). A hat of Ruth's from the 1934 season set a record for a baseball cap when David Wells sold it at auction for $537,278 in 2012. In 2017, Charlie Sheen sold Ruth's 1927 World Series ring for $2,093,927 at auction. It easily broke the record for a championship ring previously set when Julius Erving's 1974 ABA championship ring sold for $460,741 in 2011.",
"title": "Legacy"
},
{
"paragraph_id": 111,
"text": "One long-term survivor of the craze over Ruth may be the Baby Ruth candy bar. The original company to market the confectionery, the Curtis Candy Company, maintained that the bar was named after Ruth Cleveland, daughter of former president Grover Cleveland. She died in 1904 and the bar was first marketed in 1921, at the height of the craze over Ruth. He later sought to market candy bearing his name; he was refused a trademark because of the Baby Ruth bar. Corporate files from 1921 are no longer extant; the brand has changed hands several times and is now owned by Ferrara Candy Company. The Ruth estate licensed his likeness for use in an advertising campaign for Baby Ruth in 1995. In 2005, the Baby Ruth bar became the official candy bar of Major League Baseball in a marketing arrangement.",
"title": "Legacy"
},
{
"paragraph_id": 112,
"text": "In 2018, President Donald Trump announced that Ruth, along with Elvis Presley and Antonin Scalia, would posthumously receive the Presidential Medal of Freedom. Montville describes the continuing relevance of Babe Ruth in American culture, more than three-quarters of a century after he last swung a bat in a major league game:",
"title": "Legacy"
},
{
"paragraph_id": 113,
"text": "The fascination with his life and career continues. He is a bombastic, sloppy hero from our bombastic, sloppy history, origins undetermined, a folk tale of American success. His moon face is as recognizable today as it was when he stared out at Tom Zachary on a certain September afternoon in 1927. If sport has become the national religion, Babe Ruth is the patron saint. He stands at the heart of the game he played, the promise of a warm summer night, a bag of peanuts, and a beer. And just maybe, the longest ball hit out of the park.",
"title": "Legacy"
}
] | George Herman "Babe" Ruth was an American professional baseball player whose career in Major League Baseball (MLB) spanned 22 seasons, from 1914 through 1935. Nicknamed "the Bambino" and "the Sultan of Swat", he began his MLB career as a star left-handed pitcher for the Boston Red Sox, but achieved his greatest fame as a slugging outfielder for the New York Yankees. Ruth is regarded as one of the greatest sports heroes in American culture and is considered by many to be the greatest baseball player of all time. In 1936, Ruth was elected into the Baseball Hall of Fame as one of its "first five" inaugural members. At age seven, Ruth was sent to St. Mary's Industrial School for Boys, a reformatory where he was mentored by Brother Matthias Boutlier of the Xaverian Brothers, the school's disciplinarian and a capable baseball player. In 1914, Ruth was signed to play Minor League baseball for the Baltimore Orioles but was soon sold to the Red Sox. By 1916, he had built a reputation as an outstanding pitcher who sometimes hit long home runs, a feat unusual for any player in the dead-ball era. Although Ruth twice won 23 games in a season as a pitcher and was a member of three World Series championship teams with the Red Sox, he wanted to play every day and was allowed to convert to an outfielder. With regular playing time, he broke the MLB single-season home run record in 1919 with 29. After that season, Red Sox owner Harry Frazee sold Ruth to the Yankees amid controversy. The trade fueled Boston's subsequent 86-year championship drought and popularized the "Curse of the Bambino" superstition. In his 15 years with the Yankees, Ruth helped the team win seven American League (AL) pennants and four World Series championships. His big swing led to escalating home run totals that not only drew fans to the ballpark and boosted the sport's popularity but also helped usher in baseball's live-ball era, which evolved from a low-scoring game of strategy to a sport where the home run was a major factor. As part of the Yankees' vaunted "Murderers' Row" lineup of 1927, Ruth hit 60 home runs, which extended his own MLB single-season record by a single home run. Ruth's last season with the Yankees was 1934; he retired from the game the following year, after a short stint with the Boston Braves. In his career, he led the American League in home runs twelve times. During Ruth's career, he was the target of intense press and public attention for his baseball exploits and off-field penchants for drinking and womanizing. After his retirement as a player, he was denied the opportunity to manage a major league club, most likely because of poor behavior during parts of his playing career. In his final years, Ruth made many public appearances, especially in support of American efforts in World War II. In 1946, he became ill with nasopharyngeal cancer and died from the disease two years later. Ruth remains a major figure in American culture. | 2001-09-12T23:27:57Z | 2023-12-31T23:48:21Z | [
"Template:Infobox baseball biography",
"Template:Efn",
"Template:Notelist",
"Template:Cite magazine",
"Template:Cite book",
"Template:Navboxes",
"Template:Authority control",
"Template:Multiple image",
"Template:Inflation",
"Template:R",
"Template:MLBBioRet",
"Template:Inflation/year",
"Template:Sabrbio",
"Template:S-bef",
"Template:S-ach",
"Template:Pp-move-indef",
"Template:Frac",
"Template:Currentyear",
"Template:Nbsp",
"Template:Bbhof",
"Template:IMDb name",
"Template:S-start",
"Template:Portal bar",
"Template:Short description",
"Template:Cite web",
"Template:Baseballstats",
"Template:S-sports",
"Template:About",
"Template:Pp-semi",
"Template:Blockquote",
"Template:Convert",
"Template:Reflist",
"Template:Harvp",
"Template:Refbegin",
"Template:Asof",
"Template:Sisterlinks",
"Template:S-aft",
"Template:Babe Ruth",
"Template:Use American English",
"Template:Featured article",
"Template:Sfnp",
"Template:Further",
"Template:As of",
"Template:Cite episode",
"Template:Refend",
"Template:Use mdy dates",
"Template:Citation",
"Template:Cite news",
"Template:Cite journal",
"Template:S-ttl",
"Template:S-end"
] | https://en.wikipedia.org/wiki/Babe_Ruth |
4,177 | Barge | Barge often refers to a flat-bottomed inland waterway vessel which does not have its own means of mechanical propulsion. The first modern barges were pulled by tugs, but on inland waterways, most are pushed by pusher boats, or other vessels. The term barge has a rich history, and therefore there are many other types of barges.
"Barge" is attested from 1300, from Old French barge, from Vulgar Latin barga. The word originally could refer to any small boat; the modern meaning arose around 1480. Bark "small ship" is attested from 1420, from Old French barque, from Vulgar Latin barca (400 AD). The more precise meaning of Barque as "three-masted sailing vessel" arose in the 17th century, and often takes the French spelling for disambiguation. Both are probably derived from the Latin barica, from Greek baris "Egyptian boat", from Coptic bari "small boat", hieroglyphic Egyptian
and similar ba-y-r for "basket-shaped boat". By extension, the term "embark" literally means to board the kind of boat called a "barque".
In Great Britain a merchant barge was originally a flat bottomed merchant vessel for use on navigable rivers. Most of these barges had sails. For traffic on the River Severn the barge was described as: The lesser sort are called barges and frigates, being from forty to sixty feet in length, having a single mast and square sail, and carrying from twenty to forty tons burthen. The larger vessels were called trows. On the River Irwell there was reference to barges passing below Barton Aqueduct with their mast and sails standing. Barges on the Thames were called west country barges.
During the Industrial Revolution, a substantial network of narrow canals was developed in Great Britain from 1750 onward. These new British canals had locks only 7 feet (2.1 m) wide. This led to the development of the narrowboat, which had a beam of no more than 6 feet 10 inches (2.08 m). It was soon realized that the narrow locks were too limiting. Later locks were therefore doubled in width to 14 feet (4.3 m). This led to the development of the widebeam.
The narrowboats were initially also known as barges, but only a very few had sails, unlike earlier vessels. From the start, most of the new canals were constructed with an adjacent towpath along which draft horses walked, towing the barges. These types of canal craft are so specific that on the British canal system the term 'barge' was not used to describe narrowboats and widebeams. Narrowboats and widebeams are still used on canals, now engine-powered.
On the British canal system, the Thames sailing barge, and Dutch barge and unspecified other styles of barge, are still known as barges. The term Dutch barge is nowadays often used to refer to an accommodation ship, but originally refers to the slightly larger Dutch version of the Thames sailing barge.
The people who moved barges were known as lightermen. Poles are used on barges to fend off other nearby vessels or a wharf. These are often called 'pike poles'. The long pole used to maneuver or propel a barge has given rise to the saying "I wouldn't touch that [subject/thing] with a barge pole."
In the United Kingdom the word barge had many meanings by the 1890s, and these varied locally. On the Mersey a barge was called a 'flat', on the Thames a 'lighter' or barge, and on the Humber a 'keel'. A lighter had neither mast nor rigging; a keel had a single mast with sails. Barge and lighter were used indiscriminately. A local distinction was that any flat not propelled by steam was a barge, although it might be a sailing flat.
The term dumb barge was probably adopted to end the confusion. It surfaced in the early nineteenth century, at first denoting a barge used as a mooring platform in a fixed place. As it went up and down with the tides, it made a very convenient mooring place for steam vessels. Within a few decades the term evolved, and came to mean 'a vessel propelled by oars only'. By the 1890s dumb barge was still used only on the Thames.
By 1880 barges on British rivers and canals were often towed by steam tugboats. On the Thames, many dumb barges still relied on their poles, oars and the tide. Other dumb barges made use of about 50 tugboats to tow them to their destinations. While many coal barges were towed, many dumb barges that handled single parcels were not.
In the United States a barge was not a sailing vessel by the end of the 19th century. Indeed, barges were often created by cutting down (razeeing) sailing vessels. In New York this was an accepted meaning of the term barge. The somewhat smaller scow was built as such, but the scow also had its sailing counterpart, the sailing scow.
The innovation that led to the modern barge was the use of iron barges towed by a steam tugboat. These were first used to transport grain and other bulk products. From about 1840 to 1870 the towed iron barge was quickly introduced on the Rhine, Danube, Don, Dniester, and rivers in Egypt, India and Australia. Many of these barges were built in Great Britain.
Nowadays 'barge' generally refers to a dumb barge. In Europe, a dumb barge is defined as an inland waterway freight vessel, designed to be towed, which does not have its own means of mechanical propulsion. In America, a barge is generally pushed.
Barges are used today for transporting low-value bulk items, as the cost of hauling goods that way is very low. Barges are also used for very heavy or bulky items; a typical American barge measures 195 by 35 feet (59.4 m × 10.7 m), and can carry up to about 1,500 short tons (1,400 t) of cargo. The most common European barges measure 251 by 37 feet (76.5 m × 11.4 m) and can carry up to about 2,450 tonnes (2,700 short tons).
As an example, on June 26, 2006, in the US a 565-short-ton (513 t) catalytic cracking unit reactor was shipped by barge from the Tulsa Port of Catoosa in Oklahoma to a refinery in Pascagoula, Mississippi. Extremely large objects are normally shipped in sections and assembled after delivery, but shipping an assembled unit reduces costs and avoids reliance on construction labor at the delivery site, which in the case of the reactor was still recovering from Hurricane Katrina. Of the reactor's 700-mile (1,100 km) journey, only about 40 miles (64 km) were traveled overland, from the final port to the refinery.
Self-propelled barges may be used for traveling downstream or upstream in placid waters; they are operated as unpowered barges, with the assistance of a tugboat, when traveling upstream in faster waters. Canal barges are usually made for the particular canal in which they will operate.
Unpowered vessels—barges—may be used for other purposes, such as large accommodation vessels, towed to where they are needed and stationed there as long as necessary. An example is the Bibby Stockholm. | [
{
"paragraph_id": 0,
"text": "Barge often refers to a flat-bottomed inland waterway vessel which does not have its own means of mechanical propulsion. The first modern barges were pulled by tugs, but on inland waterways, most are pushed by pusher boats, or other vessels. The term barge has a rich history, and therefore there are many other types of barges.",
"title": ""
},
{
"paragraph_id": 1,
"text": "\"Barge\" is attested from 1300, from Old French barge, from Vulgar Latin barga. The word originally could refer to any small boat; the modern meaning arose around 1480. Bark \"small ship\" is attested from 1420, from Old French barque, from Vulgar Latin barca (400 AD). The more precise meaning of Barque as \"three-masted sailing vessel\" arose in the 17th century, and often takes the French spelling for disambiguation. Both are probably derived from the Latin barica, from Greek baris \"Egyptian boat\", from Coptic bari \"small boat\", hieroglyphic Egyptian",
"title": "History of the barge"
},
{
"paragraph_id": 2,
"text": "and similar ba-y-r for \"basket-shaped boat\". By extension, the term \"embark\" literally means to board the kind of boat called a \"barque\".",
"title": "History of the barge"
},
{
"paragraph_id": 3,
"text": "In Great Britain a merchant barge was originally a flat bottomed merchant vessel for use on navigable rivers. Most of these barges had sails. For traffic on the River Severn the barge was described as: The lesser sort are called barges and frigates, being from forty to sixty feet in length, having a single mast and square sail, and carrying from twenty to forty tons burthen. The larger vessels were called trows. On the River Irwell there was reference to barges passing below Barton Aqueduct with their mast and sails standing. Barges on the Thames were called west country barges.",
"title": "History of the barge"
},
{
"paragraph_id": 4,
"text": "During the Industrial Revolution, a substantial network of narrow canals was developed in Great Britain from 1750 onward. These new British canals had locks of only 7 feet (2.1 m) wide. This led to the development of the narrowboats, which had a beam of no more than 6 feet 10 inches (2.08 m). It was soon realized that the narrow locks were too limiting. Later locks were therefore doubled in width to 14 feet (4.3 m). This led to the development of the widebeam.",
"title": "History of the barge"
},
{
"paragraph_id": 5,
"text": "The narrowboats were initially also known as barges, but only a very few had sails, unlike earlier vessels. From the start, most of the new canals were constructed with an adjacent towpath along which draft horses walked, towing the barges. These types of canal craft are so specific that on the British canal system the term 'barge' was not used to describe narrowboats and widebeams. Narrowboats and widebeams are still used on canals, now engine-powered.",
"title": "History of the barge"
},
{
"paragraph_id": 6,
"text": "On the British canal system, the Thames sailing barge, and Dutch barge and unspecified other styles of barge, are still known as barges. The term Dutch barge is nowadays often used to refer to an accommodation ship, but originally refers to the slightly larger Dutch version of the Thames sailing barge.",
"title": "History of the barge"
},
{
"paragraph_id": 7,
"text": "The people who moved barges were known as lightermen. Poles are used on barges to fend off other nearby vessels or a wharf. These are often called 'pike poles'. The long pole used to maneuver or propel a barge has given rise to the saying \"I wouldn't touch that [subject/thing] with a barge pole.\"",
"title": "History of the barge"
},
{
"paragraph_id": 8,
"text": "In the United Kingdom the word barge had many meanings by the 1890s, and these varied locally. On the Mersey a barge was called a 'Flat', on the Thames a Lighter or barge, and on the Humber a 'Keel'. A Lighter had neither mast nor rigging. A keel did have a single mast with sails. Barge and lighter were used indiscriminately. A local distinction was that any flat that was not propelled by steam was a barge, although it might be a sailing flat.",
"title": "History of the barge"
},
{
"paragraph_id": 9,
"text": "The term Dumb barge was probably taken into use to end the confusion. The term Dumb barge surfaced in the early nineteenth century. It first denoted the use of a barge as a mooring platform in a fixed place. As it went up and down with the tides, it made a very convenient mooring place for steam vessels. Within a few decades, the term dumb barge evolved, and came to mean: 'a vessel propelled by oars only'. By the 1890s Dumb barge was still used only on the Thames.",
"title": "History of the barge"
},
{
"paragraph_id": 10,
"text": "By 1880 barges on British rivers and canals were often towed by steam tugboats. On the Thames, many dumb barges still relied on their poles, oars and the tide. Others dumb barges made use of about 50 tugboats to tow them to their destinations. While many coal barges were towed, many dumb barges that handled single parcels were not.",
"title": "History of the barge"
},
{
"paragraph_id": 11,
"text": "In the United States a barge was not a sailing vessel by the end of the 19th century. Indeed, barges were often created by cutting down (razeeing) sailing vessels. In New York this was an accepted meaning of the term barge. The somewhat smaller scow was built as such, but the scow also had its sailing counterpart the sailing scow.",
"title": "History of the barge"
},
{
"paragraph_id": 12,
"text": "The innovation that led to the modern barge was the use of iron barges towed by a steam tugboat. These were first used to transport grain and other bulk products. From about 1840 to 1870 the towed iron barge was quickly introduced on the Rhine, Danube, Don, Dniester, and rivers in Egypt, India and Australia. Many of these barges were built in Great Britain.",
"title": "The modern barge"
},
{
"paragraph_id": 13,
"text": "Nowadays 'barge' generally refers to a dumb barge. In Europe, a Dumb barge is: An inland waterway transport freight vessel designed to be towed which does not have its own means of mechanical propulsion. In America, a barge is generally pushed.",
"title": "The modern barge"
},
{
"paragraph_id": 14,
"text": "Barges are used today for transporting low-value bulk items, as the cost of hauling goods that way is very low. Barges are also used for very heavy or bulky items; a typical American barge measures 195 by 35 feet (59.4 m × 10.7 m), and can carry up to about 1,500 short tons (1,400 t) of cargo. The most common European barges measure 251 by 37 feet (76.5 m × 11.4 m) and can carry up to about 2,450 tonnes (2,700 short tons).",
"title": "The modern barge"
},
{
"paragraph_id": 15,
"text": "As an example, on June 26, 2006, in the US a 565-short-ton (513 t) catalytic cracking unit reactor was shipped by barge from the Tulsa Port of Catoosa in Oklahoma to a refinery in Pascagoula, Mississippi. Extremely large objects are normally shipped in sections and assembled after delivery, but shipping an assembled unit reduces costs and avoids reliance on construction labor at the delivery site, which in the case of the reactor was still recovering from Hurricane Katrina. Of the reactor's 700-mile (1,100 km) journey, only about 40 miles (64 km) were traveled overland, from the final port to the refinery.",
"title": "The modern barge"
},
{
"paragraph_id": 16,
"text": "Self-propelled barges may be used for traveling downstream or upstream in placid waters; they are operated as an unpowered barge, with the assistance of a tugboat, when traveling upstream in faster waters. Canal barges are usually made for the particular canal in which they will operate.",
"title": "The modern barge"
},
{
"paragraph_id": 17,
"text": "Unpowered vessels—barges—may be used for other purposes, such as large accommodation vessels, towed to where they are needed and stationed there as long as necessary. An example is the Bibby Stockholm.",
"title": "The modern barge"
}
] | Barge often refers to a flat-bottomed inland waterway vessel which does not have its own means of mechanical propulsion. The first modern barges were pulled by tugs, but on inland waterways, most are pushed by pusher boats, or other vessels. The term barge has a rich history, and therefore there are many other types of barges. | 2001-09-13T20:52:53Z | 2023-12-31T10:21:22Z | [
"Template:Unreferenced section",
"Template:Div col end",
"Template:Commons category",
"Template:WWII US ships",
"Template:Other uses",
"Template:Circa",
"Template:Portal",
"Template:Citation",
"Template:Reflist",
"Template:Wiktionary",
"Template:Short description",
"Template:Div col",
"Template:Annotated link",
"Template:Cite news",
"Template:Cite EB1911",
"Template:ModernMerchantShipTypes",
"Template:MARCOMships",
"Template:Sfn",
"Template:Cn",
"Template:Authority control",
"Template:Convert",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Barge |
4,178 | Bill Schelter | William Frederick Schelter (1947 – July 30, 2001) was a professor of mathematics at The University of Texas at Austin and a Lisp developer and programmer. Schelter is credited with the development of the GNU Common Lisp (GCL) implementation of Common Lisp and the GPL'd version of the computer algebra system Macsyma called Maxima. Schelter authored Austin Kyoto Common Lisp (AKCL) under contract with IBM. AKCL formed the foundation for Axiom, another computer algebra system. AKCL eventually became GNU Common Lisp. He is also credited with the first port of the GNU C compiler to the Intel 386 architecture, used in the original implementation of the Linux kernel.
Schelter obtained his Ph.D. at McGill University in 1972. His mathematical specialties were noncommutative ring theory and computational algebra and its applications, including automated theorem proving in geometry.
In the summer of 2001, age 54, he died suddenly of a heart attack while traveling in Russia. | [
{
"paragraph_id": 0,
"text": "William Frederick Schelter (1947 – July 30, 2001) was a professor of mathematics at The University of Texas at Austin and a Lisp developer and programmer. Schelter is credited with the development of the GNU Common Lisp (GCL) implementation of Common Lisp and the GPL'd version of the computer algebra system Macsyma called Maxima. Schelter authored Austin Kyoto Common Lisp (AKCL) under contract with IBM. AKCL formed the foundation for Axiom, another computer algebra system. AKCL eventually became GNU Common Lisp. He is also credited with the first port of the GNU C compiler to the Intel 386 architecture, used in the original implementation of the Linux kernel.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Schelter obtained his Ph.D. at McGill University in 1972. His mathematical specialties were noncommutative ring theory and computational algebra and its applications, including automated theorem proving in geometry.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In the summer of 2001, age 54, he died suddenly of a heart attack while traveling in Russia.",
"title": ""
},
{
"paragraph_id": 3,
"text": "",
"title": "External links"
}
] | William Frederick Schelter was a professor of mathematics at The University of Texas at Austin and a Lisp developer and programmer. Schelter is credited with the development of the GNU Common Lisp (GCL) implementation of Common Lisp and the GPL'd version of the computer algebra system Macsyma called Maxima. Schelter authored Austin Kyoto Common Lisp (AKCL) under contract with IBM. AKCL formed the foundation for Axiom, another computer algebra system. AKCL eventually became GNU Common Lisp. He is also credited with the first port of the GNU C compiler to the Intel 386 architecture, used in the original implementation of the Linux kernel. Schelter obtained his Ph.D. at McGill University in 1972. His mathematical specialties were noncommutative ring theory and computational algebra and its applications, including automated theorem proving in geometry. In the summer of 2001, age 54, he died suddenly of a heart attack while traveling in Russia. | 2022-10-06T15:33:58Z | [
"Template:Infobox person",
"Template:Authority control",
"Template:UTexas-stub",
"Template:US-mathematician-stub",
"Template:Short description",
"Template:Reflist",
"Template:Webarchive",
"Template:MathGenealogy",
"Template:US-academic-bio-stub",
"Template:Primary sources"
] | https://en.wikipedia.org/wiki/Bill_Schelter |
4,179 | British English | British English (BrE, en-GB, or BE) is the set of varieties of the English language native to the United Kingdom. More narrowly, it can refer specifically to the English language in England, or, more broadly, to the collective dialects of English throughout the British Isles taken as a single umbrella variety, for instance additionally incorporating Scottish English, Welsh English, and Northern Irish English. Tom McArthur in the Oxford Guide to World English acknowledges that British English shares "all the ambiguities and tensions [with] the word 'British' and as a result can be used and interpreted in two ways, more broadly or more narrowly, within a range of blurring and ambiguity".
Variations exist in formal (both written and spoken) English in the United Kingdom. For example, the adjective wee is almost exclusively used in parts of Scotland, North East England, Northern Ireland, Ireland, and occasionally Yorkshire, whereas the adjective little is predominant elsewhere. Nevertheless, there is a meaningful degree of uniformity in written English within the United Kingdom, and this could be described by the term British English. The forms of spoken English, however, vary considerably more than in most other areas of the world where English is spoken and so a uniform concept of British English is more difficult to apply to the spoken language.
Globally, countries that are former British colonies or members of the Commonwealth tend to follow British English, as is the case for English used within the European Union. In China, both British English and American English are taught. The UK government actively teaches and promotes English around the world and operates in over 200 countries.
English is a West Germanic language that originated from the Anglo-Frisian dialects brought to Britain by Germanic settlers from various parts of what is now northwest Germany and the northern Netherlands. The resident population at this time generally spoke Common Brittonic—the insular variety of Continental Celtic, which was influenced by the Roman occupation. This group of languages (Welsh, Cornish, Cumbric) cohabited alongside English into the modern period, but due to their remoteness from the Germanic languages, influence on English was notably limited. However, the degree of influence remains debated, and it has recently been argued that its grammatical influence accounts for the substantial innovations noted between English and the other West Germanic languages.
Initially, Old English was a diverse group of dialects, reflecting the varied origins of the Anglo-Saxon kingdoms of England. One of these dialects, Late West Saxon, eventually came to dominate. The original Old English language was then influenced by two waves of invasion: the first was by speakers of the Scandinavian branch of the Germanic family, who settled in parts of Britain in the eighth and ninth centuries; the second was the Normans in the 11th century, who spoke Old Norman and ultimately developed an English variety of this called Anglo-Norman. These two invasions caused English to become "mixed" to some degree (though it was never a truly mixed language in the strictest sense of the word; mixed languages arise from the cohabitation of speakers of different languages, who develop a hybrid tongue for basic communication).
The more idiomatic, concrete and descriptive English is, the more it is from Anglo-Saxon origins. The more intellectual and abstract English is, the more it contains Latin and French influences, e.g. swine (like the Germanic schwein) is the animal in the field bred by the occupied Anglo-Saxons and pork (like the French porc) is the animal at the table eaten by the occupying Normans. Another example is the Anglo-Saxon cu meaning cow, and the French bœuf meaning beef.
Cohabitation with the Scandinavians resulted in a significant grammatical simplification and lexical enrichment of the Anglo-Frisian core of English; the later Norman occupation led to the grafting onto that Germanic core of a more elaborate layer of words from the Romance branch of the European languages. This Norman influence entered English largely through the courts and government. Thus, English developed into a "borrowing" language of great flexibility and with a huge vocabulary.
Dialects and accents vary amongst the four countries of the United Kingdom, as well as within the countries themselves.
The major divisions are normally classified as English English (or English as spoken in England, which encompasses Southern English, West Country, East and West Midlands English and Northern English dialects), Ulster English (in Northern Ireland), Welsh English (not to be confused with the Welsh language), and Scottish English (not to be confused with the Scots language or Scottish Gaelic language). The various British dialects also differ in the words that they have borrowed from other languages.
Around the middle of the 15th century, there were almost 500 ways to spell the word though across the five major dialects.
Following its last major survey of English Dialects (1949–1950), the University of Leeds has started work on a new project. In May 2007 the Arts and Humanities Research Council awarded a grant to Leeds to study British regional dialects.
The team are sifting through a large collection of examples of regional slang words and phrases turned up by the "Voices project" run by the BBC, in which they invited the public to send in examples of English still spoken throughout the country. The BBC Voices project also collected hundreds of news articles about how the British speak English from swearing through to items on language schools. This information will also be collated and analysed by Johnson's team both for content and for where it was reported. "Perhaps the most remarkable finding in the Voices study is that the English language is as diverse as ever, despite our increased mobility and constant exposure to other accents and dialects through TV and radio". When discussing the award of the grant in 2007, Leeds University stated:
that they were "very pleased"—and indeed, "well chuffed"—at receiving their generous grant. He could, of course, have been "bostin" if he had come from the Black Country, or if he was a Scouser he would have been well "made up" over so many spondoolicks, because as a Geordie might say, £460,000 is a "canny load of chink".
Most people in Britain speak with a regional accent or dialect. However, about 2% of Britons speak with an accent called Received Pronunciation (also called "the King's English", "Oxford English" and "BBC English"), that is essentially region-less. It derives from a mixture of the Midlands and Southern dialects spoken in London in the early modern period. It is frequently used as a model for teaching English to foreign learners.
In the South East there are significantly different accents; the Cockney accent spoken by some East Londoners is strikingly different from Received Pronunciation (RP). Cockney rhyming slang can be (and was initially intended to be) difficult for outsiders to understand, although the extent of its use is often somewhat exaggerated.
Londoners speak with a mixture of accents, depending on ethnicity, neighbourhood, class, age, upbringing, and sundry other factors. Estuary English has been gaining prominence in recent decades: it has some features of RP and some of Cockney. Immigrants to the UK in recent decades have brought many more languages to the country and particularly to London. Surveys started in 1979 by the Inner London Education Authority discovered over 125 languages being spoken domestically by the families of the inner city's schoolchildren. A notable example is Multicultural London English, a sociolect that emerged in the late 20th century and is spoken mainly by young, working-class people in multicultural parts of London.
Since the mass internal migration to Northamptonshire in the 1940s and given its position between several major accent regions, it has become a source of various accent developments. In Northampton the older accent has been influenced by overspill Londoners. There is an accent known locally as the Kettering accent, which is a transitional accent between the East Midlands and East Anglian. It is the last southern Midlands accent to use the broad "a" in words like bath or grass (i.e. barth or grarss). Conversely, crass or plastic use a slender "a". A few miles northwest in Leicestershire the slender "a" becomes more widespread generally. In the town of Corby, five miles (8 km) north, one can find Corbyite, which, unlike the Kettering accent, is largely influenced by the West Scottish accent.
Phonological features characteristic of British English revolve around the pronunciation of the letter R, as well as the dental plosive T and some diphthongs specific to this dialect.
Once regarded as a Cockney feature, in a number of forms of spoken British English, /t/ has become commonly realised as a glottal stop [ʔ] when it is in the intervocalic position, in a process called T-glottalisation. National media, being based in London, have seen the glottal stop spreading more widely than it once was in word endings, with not being heard as "no[ʔ]" and bottle of water being heard as "bo[ʔ]le of wa[ʔ]er". It is still stigmatised when used in initial and central positions, such as in later, while the word often has all but regained /t/. Other consonants subject to this usage in Cockney English are p, as in pa[ʔ]er, and k, as in ba[ʔ]er.
In most areas of England and Wales, outside the West Country and other nearby counties of the UK, the consonant R is not pronounced if not followed by a vowel, lengthening the preceding vowel instead. This phenomenon is known as non-rhoticity. In these same areas, a tendency exists to insert an R between a word ending in a vowel and the next word beginning with a vowel. This is called the intrusive R. It could be understood as a merger, in that words that once ended in an R and words that did not are no longer treated differently. This is also due to London-centric influences. Examples of R-dropping are car and sugar, where the R is not pronounced.
British dialects differ on the extent of diphthongisation of long vowels, with southern varieties extensively turning them into diphthongs, and with northern dialects normally preserving many of them. As a comparison, North American varieties could be said to be in-between.
Long vowels /iː/ and /uː/ are usually preserved, and in several areas also /oː/ and /eː/, as in go and say (unlike other varieties of English, which change them to [oʊ] and [eɪ] respectively). Some areas go as far as not diphthongising medieval /iː/ and /uː/, which gave rise to modern /aɪ/ and /aʊ/; that is, for example, in the traditional accent of Newcastle upon Tyne, 'out' will sound as 'oot', and in parts of Scotland and North-West England, 'my' will be pronounced as 'me'.
Long vowels /iː/ and /uː/ are diphthongised to [ɪi] and [ʊu] respectively (or, more technically, [ʏʉ], with a raised tongue), so that ee and oo in feed and food are pronounced with a movement. The diphthong [oʊ] is also pronounced with a greater movement, normally [əʊ], [əʉ] or [əɨ].
The dropping of morphological grammatical number in collective nouns is stronger in British English than in North American English: nouns that are grammatically singular are treated as plural when a perceived natural number prevails, especially with institutional nouns and groups of people.
The noun 'police', for example, undergoes this treatment:
Police are investigating the theft of work tools worth £500 from a van at the Sprucefield park and ride car park in Lisburn.
A football team can be treated likewise:
Arsenal have lost just one of 20 home Premier League matches against Manchester City.
This tendency can be observed in texts produced as early as the 19th century. For example, Jane Austen, a British author, writes in Chapter 4 of Pride and Prejudice, published in 1813:
All the world are good and agreeable in your eyes.
However, in Chapter 16, the grammatical number is used.
The world is blinded by his fortune and consequence.
Some dialects of British English use negative concord, also known as double negatives. Rather than changing a word or using a positive, words like nobody, not, nothing, and never would be used in the same sentence. While this does not occur in Standard English, it does occur in non-standard dialects. The double negation follows the idea of two different morphemes: one that causes the double negation, and one that is used for the point of negation or the verb.
As with English around the world, the English language as used in the United Kingdom is governed by convention rather than formal code: there is no body equivalent to the Académie française or the Royal Spanish Academy. Dictionaries (for example, the Oxford English Dictionary, the Longman Dictionary of Contemporary English, the Chambers Dictionary, and the Collins Dictionary) record usage rather than attempting to prescribe it. In addition, vocabulary and usage change with time: words are freely borrowed from other languages and other strains of English, and neologisms are frequent.
For historical reasons dating back to the rise of London in the ninth century, the form of language spoken in London and the East Midlands became standard English within the Court, and ultimately became the basis for generally accepted use in the law, government, literature and education in Britain. The standardisation of British English is thought to stem from both dialect levelling and notions of social superiority. Speaking in the Standard dialect created class distinctions; those who did not speak standard English were considered to be of a lesser class or social status and were often discounted or considered to be of low intelligence. Another contribution to the standardisation of British English was the introduction of the printing press to England in the mid-15th century. With the press, William Caxton enabled a common language and spelling to be dispersed throughout England at a much faster rate.
Samuel Johnson's A Dictionary of the English Language (1755) was a large step in the English-language spelling reform, where the purification of language focused on standardising both speech and spelling. By the early 20th century, British authors had produced numerous books intended as guides to English grammar and usage, a few of which achieved sufficient acclaim to have remained in print for long periods and to have been reissued in new editions after some decades. These include, most notably of all, Fowler's Modern English Usage and The Complete Plain Words by Sir Ernest Gowers.
Detailed guidance on many aspects of writing British English for publication is included in style guides issued by various publishers including The Times newspaper, the Oxford University Press and the Cambridge University Press. The Oxford University Press guidelines were originally drafted as a single broadsheet page by Horace Henry Hart, and were at the time (1893) the first guide of their type in English; they were gradually expanded and eventually published, first as Hart's Rules, and in 2002 as part of The Oxford Manual of Style. Comparable in authority and stature to The Chicago Manual of Style for published American English, the Oxford Manual is a fairly exhaustive standard for published British English that writers can turn to in the absence of specific guidance from their publishing house.
British English is the basis of, and very similar to, Commonwealth English. Commonwealth English is English spoken and written in Commonwealth countries, though often with some local variation. This includes English spoken in Australia, Malta, New Zealand, Nigeria, and South Africa. It also includes South Asian English used in South Asia, in English varieties in Southeast Asia, and in parts of Africa. Canadian English is based on British English, but has more influence from American English. British English, for example, is the closest English to Indian English, but Indian English has extra vocabulary and some English words are assigned different meanings. | [
{
"paragraph_id": 0,
"text": "British English (BrE, en-GB, or BE) is the set of varieties of the English language native to the United Kingdom. More narrowly, it can refer specifically to the English language in England, or, more broadly, to the collective dialects of English throughout the British Isles taken as a single umbrella variety, for instance additionally incorporating Scottish English, Welsh English, and Northern Irish English. Tom McArthur in the Oxford Guide to World English acknowledges that British English shares \"all the ambiguities and tensions [with] the word 'British' and as a result can be used and interpreted in two ways, more broadly or more narrowly, within a range of blurring and ambiguity\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "Variations exist in formal (both written and spoken) English in the United Kingdom. For example, the adjective wee is almost exclusively used in parts of Scotland, North East England, Northern Ireland, Ireland, and occasionally Yorkshire, whereas the adjective little is predominant elsewhere. Nevertheless, there is a meaningful degree of uniformity in written English within the United Kingdom, and this could be described by the term British English. The forms of spoken English, however, vary considerably more than in most other areas of the world where English is spoken and so a uniform concept of British English is more difficult to apply to the spoken language.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Globally, countries that are former British colonies or members of the Commonwealth tend to follow British English, as is the case for English used within the European Union. In China, both British English and American English are taught. The UK government actively teaches and promotes English around the world and operates in over 200 countries.",
"title": ""
},
{
"paragraph_id": 3,
"text": "English is a West Germanic language that originated from the Anglo-Frisian dialects brought to Britain by Germanic settlers from various parts of what is now northwest Germany and the northern Netherlands. The resident population at this time was generally speaking Common Brittonic—the insular variety of Continental Celtic, which was influenced by the Roman occupation. This group of languages (Welsh, Cornish, Cumbric) cohabited alongside English into the modern period, but due to their remoteness from the Germanic languages, influence on English was notably limited. However, the degree of influence remains debated, and it has recently been argued that its grammatical influence accounts for the substantial innovations noted between English and the other West Germanic languages.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Initially, Old English was a diverse group of dialects, reflecting the varied origins of the Anglo-Saxon kingdoms of England. One of these dialects, Late West Saxon, eventually came to dominate. The original Old English language was then influenced by two waves of invasion: the first was by speakers of the Scandinavian branch of the Germanic family, who settled in parts of Britain in the eighth and ninth centuries; the second was the Normans in the 11th century, who spoke Old Norman and ultimately developed an English variety of this called Anglo-Norman. These two invasions caused English to become \"mixed\" to some degree (though it was never a truly mixed language in the strictest sense of the word; mixed languages arise from the cohabitation of speakers of different languages, who develop a hybrid tongue for basic communication).",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The more idiomatic, concrete and descriptive English is, the more it is from Anglo-Saxon origins. The more intellectual and abstract English is, the more it contains Latin and French influences, e.g. swine (like the Germanic schwein) is the animal in the field bred by the occupied Anglo-Saxons and pork (like the French porc) is the animal at the table eaten by the occupying Normans. Another example is the Anglo-Saxon cu meaning cow, and the French bœuf meaning beef.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Cohabitation with the Scandinavians resulted in a significant grammatical simplification and lexical enrichment of the Anglo-Frisian core of English; the later Norman occupation led to the grafting onto that Germanic core of a more elaborate layer of words from the Romance branch of the European languages. This Norman influence entered English largely through the courts and government. Thus, English developed into a \"borrowing\" language of great flexibility and with a huge vocabulary.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Dialects and accents vary amongst the four countries of the United Kingdom, as well as within the countries themselves.",
"title": "Dialects"
},
{
"paragraph_id": 8,
"text": "The major divisions are normally classified as English English (or English as spoken in England, which encompasses Southern English, West Country, East and West Midlands English and Northern English dialects), Ulster English (in Northern Ireland), Welsh English (not to be confused with the Welsh language), and Scottish English (not to be confused with the Scots language or Scottish Gaelic language). The various British dialects also differ in the words that they have borrowed from other languages.",
"title": "Dialects"
},
{
"paragraph_id": 9,
"text": "Around the middle of the 15th century, there were points where within the 5 major dialects there were almost 500 ways to spell the word though.",
"title": "Dialects"
},
{
"paragraph_id": 10,
"text": "Following its last major survey of English Dialects (1949–1950), the University of Leeds has started work on a new project. In May 2007 the Arts and Humanities Research Council awarded a grant to Leeds to study British regional dialects.",
"title": "Dialects"
},
{
"paragraph_id": 11,
"text": "The team are sifting through a large collection of examples of regional slang words and phrases turned up by the \"Voices project\" run by the BBC, in which they invited the public to send in examples of English still spoken throughout the country. The BBC Voices project also collected hundreds of news articles about how the British speak English from swearing through to items on language schools. This information will also be collated and analysed by Johnson's team both for content and for where it was reported. \"Perhaps the most remarkable finding in the Voices study is that the English language is as diverse as ever, despite our increased mobility and constant exposure to other accents and dialects through TV and radio\". When discussing the award of the grant in 2007, Leeds University stated:",
"title": "Dialects"
},
{
"paragraph_id": 12,
"text": "that they were \"very pleased\"—and indeed, \"well chuffed\"—at receiving their generous grant. He could, of course, have been \"bostin\" if he had come from the Black Country, or if he was a Scouser he would have been well \"made up\" over so many spondoolicks, because as a Geordie might say, £460,000 is a \"canny load of chink\".",
"title": "Dialects"
},
{
"paragraph_id": 13,
"text": "Most people in Britain speak with a regional accent or dialect. However, about 2% of Britons speak with an accent called Received Pronunciation (also called \"the King's English\", \"Oxford English\" and \"BBC English\"), that is essentially region-less. It derives from a mixture of the Midlands and Southern dialects spoken in London in the early modern period. It is frequently used as a model for teaching English to foreign learners.",
"title": "Dialects"
},
{
"paragraph_id": 14,
"text": "In the South East there are significantly different accents; the Cockney accent spoken by some East Londoners is strikingly different from Received Pronunciation (RP). Cockney rhyming slang can be (and was initially intended to be) difficult for outsiders to understand, although the extent of its use is often somewhat exaggerated.",
"title": "Dialects"
},
{
"paragraph_id": 15,
"text": "Londoners speak with a mixture of accents, depending on ethnicity, neighbourhood, class, age, upbringing, and sundry other factors. Estuary English has been gaining prominence in recent decades: it has some features of RP and some of Cockney. Immigrants to the UK in recent decades have brought many more languages to the country and particularly to London. Surveys started in 1979 by the Inner London Education Authority discovered over 125 languages being spoken domestically by the families of the inner city's schoolchildren. Notably Multicultural London English, a sociolect that emerged in the late 20th century spoken mainly by young, working-class people in multicultural parts of London.",
"title": "Dialects"
},
{
"paragraph_id": 16,
"text": "Since the mass internal migration to Northamptonshire in the 1940s and given its position between several major accent regions, it has become a source of various accent developments. In Northampton the older accent has been influenced by overspill Londoners. There is an accent known locally as the Kettering accent, which is a transitional accent between the East Midlands and East Anglian. It is the last southern Midlands accent to use the broad \"a\" in words like bath or grass (i.e. barth or grarss). Conversely crass or plastic use a slender \"a\". A few miles northwest in Leicestershire the slender \"a\" becomes more widespread generally. In the town of Corby, five miles (8 km) north, one can find Corbyite which, unlike the Kettering accent, is largely influenced by the West Scottish accent.",
"title": "Dialects"
},
{
"paragraph_id": 17,
"text": "Phonological features characteristic of British English revolve around the pronunciation of the letter R, as well as the dental plosive T and some diphthongs specific to this dialect.",
"title": "Features"
},
{
"paragraph_id": 18,
"text": "Once regarded as a Cockney feature, in a number of forms of spoken British English, /t/ has become commonly realised as a glottal stop [ʔ] when it is in the intervocalic position, in a process called T-glottalisation. National media, being based in London, have seen the glottal stop spreading more widely than it once was in word endings, not being heard as \"no[ʔ]\" and bottle of water being heard as \"bo[ʔ]le of wa[ʔ]er\". It is still stigmatised when used at the beginning and central positions, such as later, while often has all but regained /t/. Other consonants subject to this usage in Cockney English are p, as in pa[ʔ]er and k as in ba[ʔ]er.",
"title": "Features"
},
{
"paragraph_id": 19,
"text": "In most areas of England and Wales, outside the West Country and other near-by counties of the UK, the consonant R is not pronounced if not followed by a vowel, lengthening the preceding vowel instead. This phenomenon is known as non-rhoticity. In these same areas, a tendency exists to insert an R between a word ending in a vowel and a next word beginning with a vowel. This is called the intrusive R. It could be understood as a merger, in that words that once ended in an R and words that did not are no longer treated differently. This is also due to London-centric influences. Examples of R-dropping are car and sugar, where the R is not pronounced.",
"title": "Features"
},
{
"paragraph_id": 20,
"text": "British dialects differ on the extent of diphthongisation of long vowels, with southern varieties extensively turning them into diphthongs, and with northern dialects normally preserving many of them. As a comparison, North American varieties could be said to be in-between.",
"title": "Features"
},
{
"paragraph_id": 21,
"text": "Long vowels /iː/ and /uː/ are usually preserved, and in several areas also /oː/ and /eː/, as in go and say (unlike other varieties of English, that change them to [oʊ] and [eɪ] respectively). Some areas go as far as not diphthongising medieval /iː/ and /uː/, that give rise to modern /aɪ/ and /aʊ/; that is, for example, in the traditional accent of Newcastle upon Tyne, 'out' will sound as 'oot', and in parts of Scotland and North-West England, 'my' will be pronounced as 'me'.",
"title": "Features"
},
{
"paragraph_id": 22,
"text": "Long vowels /iː/ and /uː/ are diphthongised to [ɪi] and [ʊu] respectively (or, more technically, [ʏʉ], with a raised tongue), so that ee and oo in feed and food are pronounced with a movement. The diphthong [oʊ] is also pronounced with a greater movement, normally [əʊ], [əʉ] or [əɨ].",
"title": "Features"
},
{
"paragraph_id": 23,
"text": "Dropping a morphological grammatical number, in collective nouns, is stronger in British English than North American English. This is to treat them as plural when once grammatically singular, a perceived natural number prevails, especially when applying to institutional nouns and groups of people.",
"title": "Features"
},
{
"paragraph_id": 24,
"text": "The noun 'police', for example, undergoes this treatment:",
"title": "Features"
},
{
"paragraph_id": 25,
"text": "Police are investigating the theft of work tools worth £500 from a van at the Sprucefield park and ride car park in Lisburn.",
"title": "Features"
},
{
"paragraph_id": 26,
"text": "A football team can be treated likewise:",
"title": "Features"
},
{
"paragraph_id": 27,
"text": "Arsenal have lost just one of 20 home Premier League matches against Manchester City.",
"title": "Features"
},
{
"paragraph_id": 28,
"text": "This tendency can be observed in texts produced already in the 19th century. For example, Jane Austen, a British author, writes in Chapter 4 of Pride and Prejudice, published in 1813:",
"title": "Features"
},
{
"paragraph_id": 29,
"text": "All the world are good and agreeable in your eyes.",
"title": "Features"
},
{
"paragraph_id": 30,
"text": "However, in Chapter 16, the grammatical number is used.",
"title": "Features"
},
{
"paragraph_id": 31,
"text": "The world is blinded by his fortune and consequence.",
"title": "Features"
},
{
"paragraph_id": 32,
"text": "Some dialects of British English use negative concords, also known as double negatives. Rather than changing a word or using a positive, words like nobody, not, nothing, and never would be used in the same sentence. While this does not occur in Standard English, it does occur in non-standard dialects. The double negation follows the idea of two different morphemes, one that causes the double negation, and one that is used for the point or the verb.",
"title": "Features"
},
{
"paragraph_id": 33,
"text": "As with English around the world, the English language as used in the United Kingdom is governed by convention rather than formal code: there is no body equivalent to the Académie française or the Royal Spanish Academy. Dictionaries (for example, the Oxford English Dictionary, the Longman Dictionary of Contemporary English, the Chambers Dictionary, and the Collins Dictionary) record usage rather than attempting to prescribe it. In addition, vocabulary and usage change with time: words are freely borrowed from other languages and other strains of English, and neologisms are frequent.",
"title": "Standardisation"
},
{
"paragraph_id": 34,
"text": "For historical reasons dating back to the rise of London in the ninth century, the form of language spoken in London and the East Midlands became standard English within the Court, and ultimately became the basis for generally accepted use in the law, government, literature and education in Britain. The standardisation of British English is thought to be from both dialect levelling and a thought of social superiority. Speaking in the Standard dialect created class distinctions; those who did not speak the standard English would be considered of a lesser class or social status and often discounted or considered of a low intelligence. Another contribution to the standardisation of British English was the introduction of the printing press to England in the mid-15th century. In doing so, William Caxton enabled a common language and spelling to be dispersed among the entirety of England at a much faster rate.",
"title": "Standardisation"
},
{
"paragraph_id": 35,
"text": "Samuel Johnson's A Dictionary of the English Language (1755) was a large step in the English-language spelling reform, where the purification of language focused on standardising both speech and spelling. By the early 20th century, British authors had produced numerous books intended as guides to English grammar and usage, a few of which achieved sufficient acclaim to have remained in print for long periods and to have been reissued in new editions after some decades. These include, most notably of all, Fowler's Modern English Usage and The Complete Plain Words by Sir Ernest Gowers.",
"title": "Standardisation"
},
{
"paragraph_id": 36,
"text": "Detailed guidance on many aspects of writing British English for publication is included in style guides issued by various publishers including The Times newspaper, the Oxford University Press and the Cambridge University Press. The Oxford University Press guidelines were originally drafted as a single broadsheet page by Horace Henry Hart, and were at the time (1893) the first guide of their type in English; they were gradually expanded and eventually published, first as Hart's Rules, and in 2002 as part of The Oxford Manual of Style. Comparable in authority and stature to The Chicago Manual of Style for published American English, the Oxford Manual is a fairly exhaustive standard for published British English that writers can turn to in the absence of specific guidance from their publishing house.",
"title": "Standardisation"
},
{
"paragraph_id": 37,
"text": "British English is the basis of, and very similar to Commonwealth English. Commonwealth English is English spoken and written in Commonwealth countries, though often with some local variation. This includes English spoken in Australia, Malta, New Zealand, Nigeria, and South Africa. It also includes South Asian English used in South Asia, in English varieties in Southeast Asia, and in parts of Africa. Canadian English is based on British English, but has more influence from American English. British English, for example, is the closest English to Indian English, but Indian English has extra vocabulary and some English words are assigned different meanings.",
"title": "Relationship with Commonwealth English"
}
] | British English is the set of varieties of the English language native to the United Kingdom. More narrowly, it can refer specifically to the English language in England, or, more broadly, to the collective dialects of English throughout the British Isles taken as a single umbrella variety, for instance additionally incorporating Scottish English, Welsh English, and Northern Irish English. Tom McArthur in the Oxford Guide to World English acknowledges that British English shares "all the ambiguities and tensions [with] the word 'British' and as a result can be used and interpreted in two ways, more broadly or more narrowly, within a range of blurring and ambiguity". Variations exist in formal English in the United Kingdom. For example, the adjective wee is almost exclusively used in parts of Scotland, North East England, Northern Ireland, Ireland, and occasionally Yorkshire, whereas the adjective little is predominant elsewhere. Nevertheless, there is a meaningful degree of uniformity in written English within the United Kingdom, and this could be described by the term British English. The forms of spoken English, however, vary considerably more than in most other areas of the world where English is spoken and so a uniform concept of British English is more difficult to apply to the spoken language. Globally, countries that are former British colonies or members of the Commonwealth tend to follow British English, as is the case for English used within the European Union. In China, both British English and American English are taught. The UK government actively teaches and promotes English around the world and operates in over 200 countries. | 2001-11-08T00:38:01Z | 2023-12-30T08:21:00Z | [
"Template:Use British English",
"Template:IPA notice",
"Template:Cbignore",
"Template:Commons category",
"Template:Cite report",
"Template:Lang",
"Template:Efn",
"Template:Convert",
"Template:Cite news",
"Template:See also",
"Template:Respell",
"Template:Cite journal",
"Template:Webarchive",
"Template:Cite web",
"Template:Cite book",
"Template:English dialects by continent",
"Template:English official language clickable map",
"Template:Use dmy dates",
"Template:Infobox Language",
"Template:Refn",
"Template:IPA",
"Template:Authority control",
"Template:Main",
"Template:Cols",
"Template:Reflist",
"Template:Citation",
"Template:American and British English differences",
"Template:Colend",
"Template:Notes",
"Template:Portal bar",
"Template:Short description",
"Template:Blockquote",
"Template:ISBN"
] | https://en.wikipedia.org/wiki/British_English |
4,181 | Battle | A battle is an occurrence of combat in warfare between opposing military units of any number or size. A war usually consists of multiple battles. In general, a battle is a military engagement that is well defined in duration, area, and force commitment. An engagement with only limited commitment between the forces and without decisive results is sometimes called a skirmish.
The word "battle" can also be used infrequently to refer to an entire operational campaign, although this usage greatly diverges from its conventional or customary meaning. Generally, the word "battle" is used for such campaigns if referring to a protracted combat encounter in which either one or both of the combatants had the same methods, resources, and strategic objectives throughout the encounter. Some prominent examples of this would be the Battle of the Atlantic, Battle of Britain, and Battle of Stalingrad, all in World War II.
Wars and military campaigns are guided by military strategy, whereas battles take place on a level of planning and execution known as operational mobility. German strategist Carl von Clausewitz stated that "the employment of battles ... to achieve the object of war" was the essence of strategy.
Battle is a loanword from the Old French bataille, first attested in 1297, from Late Latin battualia, meaning "exercise of soldiers and gladiators in fighting and fencing", from Late Latin (taken from Germanic) battuere "beat", from which the English word battery is also derived via Middle English batri.
The defining characteristic of the fight as a concept in military science has changed with the variations in the organisation, employment and technology of military forces. The English military historian John Keegan suggested an ideal definition of battle as "something which happens between two armies leading to the moral then physical disintegration of one or the other of them" but the origins and outcomes of battles can rarely be summarized so neatly. Battle in the 20th and 21st centuries is defined as the combat between large components of the forces in a military campaign, used to achieve military objectives. Where the duration of the battle is longer than a week, it is often for reasons of planning called an operation. Battles can be planned, encountered or forced by one side when the other is unable to withdraw from combat.
A battle always has as its purpose the reaching of a mission goal by use of military force. A victory in the battle is achieved when one of the opposing sides forces the other to abandon its mission and surrender its forces, routs the other (i.e., forces it to retreat or renders it militarily ineffective for further combat operations) or annihilates the latter, resulting in their deaths or capture. A battle may end in a Pyrrhic victory, which ultimately favors the defeated party. If no resolution is reached in a battle, it can result in a stalemate. A conflict in which one side is unwilling to reach a decision by a direct battle using conventional warfare often becomes an insurgency.
Until the 19th century the majority of battles were of short duration, many lasting a part of a day. (The Battle of Preston (1648), the Battle of Nations (1813) and the Battle of Gettysburg (1863) were exceptional in lasting three days.) This was mainly due to the difficulty of supplying armies in the field or conducting night operations. The means of prolonging a battle was typically siege warfare. Improvements in transport and the sudden evolution of trench warfare, with its siege-like nature during the First World War in the 20th century, lengthened the duration of battles to days and weeks. This created the requirement for unit rotation to prevent combat fatigue, with troops preferably not remaining in a combat area of operations for more than a month.
The use of the term "battle" in military history has led to its misuse when referring to almost any scale of combat, notably by strategic forces involving hundreds of thousands of troops that may be engaged in either one battle at a time (Battle of Leipzig) or operations (Battle of Kursk). The space a battle occupies depends on the range of the weapons of the combatants. A "battle" in this broader sense may be of long duration and take place over a large area, as in the case of the Battle of Britain or the Battle of the Atlantic. Until the advent of artillery and aircraft, battles were fought with the two sides within sight, if not reach, of each other. The depth of the battlefield has also increased in modern warfare with inclusion of the supporting units in the rear areas; supply, artillery, medical personnel etc. often outnumber the front-line combat troops.
Battles are made up of a multitude of individual combats, skirmishes and small engagements and the combatants will usually only experience a small part of the battle. To the infantryman, there may be little to distinguish between combat as part of a minor raid or a big offensive, nor is it likely that he anticipates the future course of the battle; few of the British infantry who went over the top on the first day on the Somme, 1 July 1916, would have anticipated that the battle would last five months. Some of the Allied infantry who had just dealt a crushing defeat to the French at the Battle of Waterloo fully expected to have to fight again the next day (at the Battle of Wavre).
Battlespace is a unified strategic concept to integrate and combine armed forces for the military theatre of operations, including air, information, land, sea and space. It includes the environment, factors and conditions that must be understood to apply combat power, protect the force or complete the mission, comprising enemy and friendly armed forces; facilities; weather; terrain; and the electromagnetic spectrum.
Battles are decided by various factors; the number and quality of combatants and equipment, the skill of commanders, and terrain are among the most prominent. Weapons and armour can be decisive; on many occasions armies have achieved victory through more advanced weapons than those of their opponents. An extreme example was the Battle of Omdurman, in which a large army of Sudanese Mahdists armed in a traditional manner were destroyed by an Anglo-Egyptian force equipped with Maxim machine guns and artillery.
On some occasions, simple weapons employed in an unorthodox fashion have proven advantageous; Swiss pikemen gained many victories through their ability to transform a traditionally defensive weapon into an offensive one. Zulus in the early 19th century were victorious in battles against their rivals in part because they adopted a new kind of spear, the iklwa. Forces with inferior weapons have still emerged victorious at times, for example in the Wars of Scottish Independence. Disciplined troops are often of greater importance; at the Battle of Alesia, the Romans were greatly outnumbered but won because of superior training.
Battles can also be determined by terrain. Capturing high ground has been the main tactic in innumerable battles. An army that holds the high ground forces the enemy to climb and thus wear themselves down. Areas of jungle and forest with dense vegetation act as force multipliers, of benefit to inferior armies. Terrain may have lost importance in modern warfare, due to the advent of aircraft, though it is still vital for camouflage, especially for guerrilla warfare.
Generals and commanders also play an important role: Hannibal, Julius Caesar, Khalid ibn Walid, Subutai and Napoleon Bonaparte were all skilled generals, and their armies were extremely successful at times. An army that can trust the commands of its leaders with conviction in its success invariably has higher morale than an army that doubts its every move. The British in the naval Battle of Trafalgar owed their success to the reputation of Admiral Lord Nelson.
Battles can be fought on land, at sea, and in the air. Naval battles have occurred since before the 5th century BC. Air battles have been far less common, due to their late conception, the most prominent being the Battle of Britain in 1940. Since the Second World War, land or sea battles have come to rely on air support. During the Battle of Midway, five aircraft carriers were sunk without either fleet coming into direct contact.
Battles are usually hybrids of different types listed above.
A decisive battle is one with political effects, determining the course of the war such as the Battle of Smolensk or bringing hostilities to an end, such as the Battle of Hastings or the Battle of Hattin. A decisive battle can change the balance of power or boundaries between countries. The concept of the decisive battle became popular with the publication in 1851 of Edward Creasy's The Fifteen Decisive Battles of the World. British military historians J.F.C. Fuller (The Decisive Battles of the Western World) and B.H. Liddell Hart (Decisive Wars of History), among many others, have written books in the style of Creasy's work.
There is an obvious difference in the way battles have been fought. Early battles were probably fought between rival hunting bands as unorganized crowds. During the Battle of Megiddo in the fifteenth century BC, the first reliably documented battle, both armies were organised and disciplined; during the many wars of the Roman Empire, barbarians continued to use mob tactics.
As the Age of Enlightenment dawned, armies began to fight in highly disciplined lines. Each would follow the orders from their officers and fight as a unit instead of individuals. Armies were divided into regiments, battalions, companies and platoons. These armies would march, line up and fire in divisions.
Native Americans, on the other hand, did not fight in lines but used guerrilla tactics. American colonists and European forces continued using disciplined lines into the American Civil War.
A new style arose from the 1850s to the First World War, known as trench warfare, which also led to tactical radio. Chemical warfare also began in 1915.
By the Second World War, the use of smaller units such as platoons and companies became much more important as precise operations became vital. Instead of the trench stalemate of 1915–1917, in the Second World War battles developed where small groups encountered other platoons. As a result, elite squads became much more recognized and distinguishable. Maneuver warfare also returned at an astonishing pace with the advent of the tank, replacing the cannon of the Enlightenment Age. Artillery has since gradually replaced the use of frontal troops. Modern battles resemble those of the Second World War, along with indirect combat through the use of aircraft and missiles, which has come to constitute a large portion of wars in place of battles; battles are now mostly reserved for capturing cities.
One significant difference of modern naval battles, as opposed to earlier forms of combat, is the use of marines, which introduced amphibious warfare. Today, a marine is actually an infantry regiment that sometimes fights solely on land and is no longer tied to the navy. A good example of an old naval battle is the Battle of Salamis. Most ancient naval battles were fought by fast ships using the battering ram to sink opposing fleets or steer close enough for boarding in hand-to-hand combat. Troops were often used to storm enemy ships, a tactic used by Romans and pirates. This tactic was usually employed by civilizations that could not beat the enemy with ranged weaponry. Another invention in the late Middle Ages was the use of Greek fire by the Byzantines, which was used to set enemy fleets on fire. Empty demolition ships employed this tactic, crashing into opposing ships and setting them afire with an explosion. After the invention of cannons, naval forces became useful as support units for land warfare. During the 19th century, the development of mines led to a new type of naval warfare. The ironclad, first used in the American Civil War and resistant to cannons, soon made the wooden ship obsolete. The invention of military submarines, during World War I, brought naval warfare both above and below the surface. With the development of military aircraft during World War II, battles were fought in the sky as well as below the ocean. Aircraft carriers have since become the central unit in naval warfare, acting as a mobile base for lethal aircraft.
Although aircraft have for the most part been used as a supplement to land or naval engagements, since their first major military use in World War I they have increasingly taken on larger roles in warfare. During World War I, the primary use was for reconnaissance and small-scale bombardment. Aircraft became much more prominent in the Spanish Civil War and especially World War II. Aircraft design began specializing, primarily into two types: bombers, which carried explosive payloads to bomb land targets or ships; and fighter-interceptors, which were used to either intercept incoming aircraft or to escort and protect bombers (engagements between fighter aircraft were known as dogfights). Some of the more notable aerial battles in this period include the Battle of Britain and the Battle of Midway. Another important use of aircraft came with the development of the helicopter, which first became heavily used during the Vietnam War and continues to be widely used today to transport and augment ground forces. Today, direct engagements between aircraft are rare – most modern fighter-interceptors carry much more extensive bombing payloads and are used to bomb precision land targets rather than to fight other aircraft. Anti-aircraft batteries are used much more extensively than interceptors to defend against incoming aircraft. Despite this, aircraft today are much more extensively used as the primary tools for both army and navy, as evidenced by the prominent use of helicopters to transport and support troops, the use of aerial bombardment as the "first strike" in many engagements, and the replacement of the battleship with the aircraft carrier as the center of most modern navies.
Battles are usually named after some feature of the battlefield geography, such as a town, forest or river, commonly prefixed "Battle of...". Occasionally battles are named after the date on which they took place, such as The Glorious First of June. In the Middle Ages it was considered important to settle on a suitable name for a battle which could be used by the chroniclers. After Henry V of England defeated a French army on October 25, 1415, he met with the senior French herald and they agreed to name the battle after the nearby castle and so it was called the Battle of Agincourt. In other cases, the sides adopted different names for the same battle, such as the Battle of Gallipoli which is known in Turkey as the Battle of Çanakkale. During the American Civil War, the Union tended to name the battles after the nearest watercourse, such as the Battle of Wilsons Creek and the Battle of Stones River, whereas the Confederates favoured the nearby towns, as in the Battles of Chancellorsville and Murfreesboro. Occasionally both names for the same battle entered the popular culture, such as the First Battle of Bull Run and the Second Battle of Bull Run, which are also referred to as the First and Second Battles of Manassas.
Sometimes in desert warfare, there is no nearby town name to use; map coordinates gave the name to the Battle of 73 Easting in the First Gulf War. Some place names have become synonymous with battles, such as Passchendaele, Pearl Harbor, the Alamo, Thermopylae and Waterloo. Military operations, many of which result in battle, are given codenames, which are not necessarily meaningful or indicative of the type or the location of the battle. Operation Market Garden and Operation Rolling Thunder are examples of battles known by their military codenames. When a battleground is the site of more than one battle in the same conflict, the instances are distinguished by ordinal number, such as the First and Second Battles of Bull Run. An extreme case is the twelve Battles of the Isonzo—First to Twelfth—between Italy and Austria-Hungary during the First World War.
Some battles are named for the convenience of military historians so that periods of combat can be neatly distinguished from one another. Following the First World War, the British Battles Nomenclature Committee was formed to decide on standard names for all battles and subsidiary actions. To the soldiers who did the fighting, the distinction was usually academic; a soldier fighting at Beaumont Hamel on November 13, 1916, was probably unaware he was taking part in what the committee named the Battle of the Ancre. Many combats are too small to be battles; terms such as "action", "affair", "skirmish", "firefight", "raid", or "offensive patrol" are used to describe small military encounters. These combats often take place within the time and space of a battle and while they may have an objective, they are not necessarily "decisive". Sometimes the soldiers are unable to immediately gauge the significance of the combat; in the aftermath of the Battle of Waterloo, some British officers were in doubt as to whether the day's events merited the title of "battle" or would be called an "action".
Battles affect the individuals who take part, as well as the political actors. Personal effects of battle range from mild psychological issues to permanent and crippling injuries. Some battle-survivors have nightmares about the conditions they encountered or abnormal reactions to certain sights or sounds and some experience flashbacks. Physical effects of battle can include scars, amputations, lesions, loss of bodily functions, blindness, paralysis and death. Battles affect politics; a decisive battle can cause the losing side to surrender, while a Pyrrhic victory such as the Battle of Asculum can cause the winning side to reconsider its goals. Battles in civil wars have often decided the fate of monarchs or political factions. Famous examples include the Wars of the Roses, as well as the Jacobite risings. Battles affect the commitment of one side or the other to the continuance of a war, for example the Battle of Inchon and the Battle of Huế during the Tet Offensive. | [
{
"paragraph_id": 0,
"text": "A battle is an occurrence of combat in warfare between opposing military units of any number or size. A war usually consists of multiple battles. In general, a battle is a military engagement that is well defined in duration, area, and force commitment. An engagement with only limited commitment between the forces and without decisive results is sometimes called a skirmish.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The word \"battle\" can also be used infrequently to refer to an entire operational campaign, although this usage greatly diverges from its conventional or customary meaning. Generally, the word \"battle\" is used for such campaigns if referring to a protracted combat encounter in which either one or both of the combatants had the same methods, resources, and strategic objectives throughout the encounter. Some prominent examples of this would be the Battle of the Atlantic, Battle of Britain, and Battle of Stalingrad, all in World War II.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Wars and military campaigns are guided by military strategy, whereas battles take place on a level of planning and execution known as operational mobility. German strategist Carl von Clausewitz stated that \"the employment of battles ... to achieve the object of war\" was the essence of strategy.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Battle is a loanword from the Old French bataille, first attested in 1297, from Late Latin battualia, meaning \"exercise of soldiers and gladiators in fighting and fencing\", from Late Latin (taken from Germanic) battuere \"beat\", from which the English word battery is also derived via Middle English batri.",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "The defining characteristic of the fight as a concept in military science has changed with the variations in the organisation, employment and technology of military forces. The English military historian John Keegan suggested an ideal definition of battle as \"something which happens between two armies leading to the moral then physical disintegration of one or the other of them\" but the origins and outcomes of battles can rarely be summarized so neatly. Battle in the 20th and 21st centuries is defined as the combat between large components of the forces in a military campaign, used to achieve military objectives. Where the duration of the battle is longer than a week, it is often for reasons of planning called an operation. Battles can be planned, encountered or forced by one side when the other is unable to withdraw from combat.",
"title": "Characteristics"
},
{
"paragraph_id": 5,
"text": "A battle always has as its purpose the reaching of a mission goal by use of military force. A victory in the battle is achieved when one of the opposing sides forces the other to abandon its mission and surrender its forces, routs the other (i.e., forces it to retreat or renders it militarily ineffective for further combat operations) or annihilates the latter, resulting in their deaths or capture. A battle may end in a Pyrrhic victory, which ultimately favors the defeated party. If no resolution is reached in a battle, it can result in a stalemate. A conflict in which one side is unwilling to reach a decision by a direct battle using conventional warfare often becomes an insurgency.",
"title": "Characteristics"
},
{
"paragraph_id": 6,
"text": "Until the 19th century the majority of battles were of short duration, many lasting a part of a day. (The Battle of Preston (1648), the Battle of Nations (1813) and the Battle of Gettysburg (1863) were exceptional in lasting three days.) This was mainly due to the difficulty of supplying armies in the field or conducting night operations. The means of prolonging a battle was typically with siege warfare. Improvements in transport and the sudden evolving of trench warfare, with its siege-like nature during the First World War in the 20th century, lengthened the duration of battles to days and weeks. This created the requirement for unit rotation to prevent combat fatigue, with troops preferably not remaining in a combat area of operations for more than a month.",
"title": "Characteristics"
},
{
"paragraph_id": 7,
"text": "The use of the term \"battle\" in military history has led to its misuse when referring to almost any scale of combat, notably by strategic forces involving hundreds of thousands of troops that may be engaged in either one battle at a time (Battle of Leipzig) or operations (Battle of Kursk). The space a battle occupies depends on the range of the weapons of the combatants. A \"battle\" in this broader sense may be of long duration and take place over a large area, as in the case of the Battle of Britain or the Battle of the Atlantic. Until the advent of artillery and aircraft, battles were fought with the two sides within sight, if not reach, of each other. The depth of the battlefield has also increased in modern warfare with inclusion of the supporting units in the rear areas; supply, artillery, medical personnel etc. often outnumber the front-line combat troops.",
"title": "Characteristics"
},
{
"paragraph_id": 8,
"text": "Battles are made up of a multitude of individual combats, skirmishes and small engagements and the combatants will usually only experience a small part of the battle. To the infantryman, there may be little to distinguish between combat as part of a minor raid or a big offensive, nor is it likely that he anticipates the future course of the battle; few of the British infantry who went over the top on the first day on the Somme, 1 July 1916, would have anticipated that the battle would last five months. Some of the Allied infantry who had just dealt a crushing defeat to the French at the Battle of Waterloo fully expected to have to fight again the next day (at the Battle of Wavre).",
"title": "Characteristics"
},
{
"paragraph_id": 9,
"text": "Battlespace is a unified strategic concept to integrate and combine armed forces for the military theatre of operations, including air, information, land, sea and space. It includes the environment, factors and conditions that must be understood to apply combat power, protect the force or complete the mission, comprising enemy and friendly armed forces; facilities; weather; terrain; and the electromagnetic spectrum.",
"title": "Battlespace"
},
{
"paragraph_id": 10,
"text": "Battles are decided by various factors, the number and quality of combatants and equipment, the skill of commanders and terrain are among the most prominent. Weapons and armour can be decisive; on many occasions armies have achieved victory through more advanced weapons than those of their opponents. An extreme example was in the Battle of Omdurman, in which a large army of Sudanese Mahdists armed in a traditional manner were destroyed by an Anglo-Egyptian force equipped with Maxim machine guns and artillery.",
"title": "Factors"
},
{
"paragraph_id": 11,
"text": "On some occasions, simple weapons employed in an unorthodox fashion have proven advantageous; Swiss pikemen gained many victories through their ability to transform a traditionally defensive weapon into an offensive one. Zulus in the early 19th century were victorious in battles against their rivals in part because they adopted a new kind of spear, the iklwa. Forces with inferior weapons have still emerged victorious at times, for example in the Wars of Scottish Independence. Disciplined troops are often of greater importance; at the Battle of Alesia, the Romans were greatly outnumbered but won because of superior training.",
"title": "Factors"
},
{
"paragraph_id": 12,
"text": "Battles can also be determined by terrain. Capturing high ground has been the main tactic in innumerable battles. An army that holds the high ground forces the enemy to climb and thus wear themselves down. Areas of jungle and forest, with dense vegetation act as force-multipliers, of benefit to inferior armies. Terrain may have lost importance in modern warfare, due to the advent of aircraft, though the terrain is still vital for camouflage, especially for guerrilla warfare.",
"title": "Factors"
},
{
"paragraph_id": 13,
"text": "Generals and commanders also play an important role, Hannibal, Julius Caesar, Khalid ibn Walid, Subutai and Napoleon Bonaparte were all skilled generals and their armies were extremely successful at times. An army that can trust the commands of their leaders with conviction in its success invariably has a higher morale than an army that doubts its every move. The British in the naval Battle of Trafalgar owed its success to the reputation of Admiral Lord Nelson.",
"title": "Factors"
},
{
"paragraph_id": 14,
"text": "Battles can be fought on land, at sea, and in the air. Naval battles have occurred since before the 5th century BC. Air battles have been far less common, due to their late conception, the most prominent being the Battle of Britain in 1940. Since the Second World War, land or sea battles have come to rely on air support. During the Battle of Midway, five aircraft carriers were sunk without either fleet coming into direct contact.",
"title": "Types"
},
{
"paragraph_id": 15,
"text": "Battles are usually hybrids of different types listed above.",
"title": "Types"
},
{
"paragraph_id": 16,
"text": "A decisive battle is one with political effects, determining the course of the war such as the Battle of Smolensk or bringing hostilities to an end, such as the Battle of Hastings or the Battle of Hattin. A decisive battle can change the balance of power or boundaries between countries. The concept of the decisive battle became popular with the publication in 1851 of Edward Creasy's The Fifteen Decisive Battles of the World. British military historians J.F.C. Fuller (The Decisive Battles of the Western World) and B.H. Liddell Hart (Decisive Wars of History), among many others, have written books in the style of Creasy's work.",
"title": "Types"
},
{
"paragraph_id": 17,
"text": "There is an obvious difference in the way battles have been fought. Early battles were probably fought between rival hunting bands as unorganized crowds. During the Battle of Megiddo, the first reliably documented battle in the fifteenth century BC, both armies were organised and disciplined; during the many wars of the Roman Empire, barbarians continued to use mob tactics.",
"title": "Types"
},
{
"paragraph_id": 18,
"text": "As the Age of Enlightenment dawned, armies began to fight in highly disciplined lines. Each would follow the orders from their officers and fight as a unit instead of individuals. Armies were divided into regiments, battalions, companies and platoons. These armies would march, line up and fire in divisions.",
"title": "Types"
},
{
"paragraph_id": 19,
"text": "Native Americans, on the other hand, did not fight in lines, using guerrilla tactics. American colonists and European forces continued using disciplined lines into the American Civil War.",
"title": "Types"
},
{
"paragraph_id": 20,
"text": "A new style arose from the 1850s to the First World War, known as trench warfare, which also led to tactical radio. Chemical warfare also began in 1915.",
"title": "Types"
},
{
"paragraph_id": 21,
"text": "By the Second World War, the use of the smaller divisions, platoons and companies became much more important as precise operations became vital. Instead of the trench stalemate of 1915–1917, in the Second World War, battles developed where small groups encountered other platoons. As a result, elite squads became much more recognized and distinguishable. Maneuver warfare also returned with an astonishing pace with the advent of the tank, replacing the cannon of the Enlightenment Age. Artillery has since gradually replaced the use of frontal troops. Modern battles resemble those of the Second World War, along with indirect combat through the use of aircraft and missiles which has come to constitute a large portion of wars in place of battles, where battles are now mostly reserved for capturing cities.",
"title": "Types"
},
{
"paragraph_id": 22,
"text": "One significant difference of modern naval battles, as opposed to earlier forms of combat is the use of marines, which introduced amphibious warfare. Today, a marine is actually an infantry regiment that sometimes fights solely on land and is no longer tied to the navy. A good example of an old naval battle is the Battle of Salamis. Most ancient naval battles were fought by fast ships using the battering ram to sink opposing fleets or steer close enough for boarding in hand-to-hand combat. Troops were often used to storm enemy ships as used by Romans and pirates. This tactic was usually used by civilizations that could not beat the enemy with ranged weaponry. Another invention in the late Middle Ages was the use of Greek fire by the Byzantines, which was used to set enemy fleets on fire. Empty demolition ships utilized the tactic to crash into opposing ships and set it afire with an explosion. After the invention of cannons, naval warfare became useful as support units for land warfare. During the 19th century, the development of mines led to a new type of naval warfare. The ironclad, first used in the American Civil War, resistant to cannons, soon made the wooden ship obsolete. The invention of military submarines, during World War I, brought naval warfare to both above and below the surface. With the development of military aircraft during World War II, battles were fought in the sky as well as below the ocean. Aircraft carriers have since become the central unit in naval warfare, acting as a mobile base for lethal aircraft.",
"title": "Types"
},
{
"paragraph_id": 23,
"text": "Although the use of aircraft has for the most part always been used as a supplement to land or naval engagements, since their first major military use in World War I aircraft have increasingly taken on larger roles in warfare. During World War I, the primary use was for reconnaissance, and small-scale bombardment. Aircraft began becoming much more prominent in the Spanish Civil War and especially World War II. Aircraft design began specializing, primarily into two types: bombers, which carried explosive payloads to bomb land targets or ships; and fighter-interceptors, which were used to either intercept incoming aircraft or to escort and protect bombers (engagements between fighter aircraft were known as dog fights). Some of the more notable aerial battles in this period include the Battle of Britain and the Battle of Midway. Another important use of aircraft came with the development of the helicopter, which first became heavily used during the Vietnam War, and still continues to be widely used today to transport and augment ground forces. Today, direct engagements between aircraft are rare – the most modern fighter-interceptors carry much more extensive bombing payloads, and are used to bomb precision land targets, rather than to fight other aircraft. Anti-aircraft batteries are used much more extensively to defend against incoming aircraft than interceptors. Despite this, aircraft today are much more extensively used as the primary tools for both army and navy, as evidenced by the prominent use of helicopters to transport and support troops, the use of aerial bombardment as the \"first strike\" in many engagements, and the replacement of the battleship with the aircraft carrier as the center of most modern navies.",
"title": "Types"
},
{
"paragraph_id": 24,
"text": "Battles are usually named after some feature of the battlefield geography, such as a town, forest or river, commonly prefixed \"Battle of...\". Occasionally battles are named after the date on which they took place, such as The Glorious First of June. In the Middle Ages it was considered important to settle on a suitable name for a battle which could be used by the chroniclers. After Henry V of England defeated a French army on October 25, 1415, he met with the senior French herald and they agreed to name the battle after the nearby castle and so it was called the Battle of Agincourt. In other cases, the sides adopted different names for the same battle, such as the Battle of Gallipoli which is known in Turkey as the Battle of Çanakkale. During the American Civil War, the Union tended to name the battles after the nearest watercourse, such as the Battle of Wilsons Creek and the Battle of Stones River, whereas the Confederates favoured the nearby towns, as in the Battles of Chancellorsville and Murfreesboro. Occasionally both names for the same battle entered the popular culture, such as the First Battle of Bull Run and the Second Battle of Bull Run, which are also referred to as the First and Second Battles of Manassas.",
"title": "Naming"
},
{
"paragraph_id": 25,
"text": "Sometimes in desert warfare, there is no nearby town name to use; map coordinates gave the name to the Battle of 73 Easting in the First Gulf War. Some place names have become synonymous with battles, such as the Passchendaele, Pearl Harbor, the Alamo, Thermopylae and Waterloo. Military operations, many of which result in battle, are given codenames, which are not necessarily meaningful or indicative of the type or the location of the battle. Operation Market Garden and Operation Rolling Thunder are examples of battles known by their military codenames. When a battleground is the site of more than one battle in the same conflict, the instances are distinguished by ordinal number, such as the First and Second Battles of Bull Run. An extreme case are the twelve Battles of the Isonzo—First to Twelfth—between Italy and Austria-Hungary during the First World War.",
"title": "Naming"
},
{
"paragraph_id": 26,
"text": "Some battles are named for the convenience of military historians so that periods of combat can be neatly distinguished from one another. Following the First World War, the British Battles Nomenclature Committee was formed to decide on standard names for all battles and subsidiary actions. To the soldiers who did the fighting, the distinction was usually academic; a soldier fighting at Beaumont Hamel on November 13, 1916, was probably unaware he was taking part in what the committee named the Battle of the Ancre. Many combats are too small to be battles; terms such as \"action\", \"affair\", \"skirmish\", \"firefight\", \"raid\", or \"offensive patrol\" are used to describe small military encounters. These combats often take place within the time and space of a battle and while they may have an objective, they are not necessarily \"decisive\". Sometimes the soldiers are unable to immediately gauge the significance of the combat; in the aftermath of the Battle of Waterloo, some British officers were in doubt as to whether the day's events merited the title of \"battle\" or would be called an \"action\".",
"title": "Naming"
},
{
"paragraph_id": 27,
"text": "Battles affect the individuals who take part, as well as the political actors. Personal effects of battle range from mild psychological issues to permanent and crippling injuries. Some battle-survivors have nightmares about the conditions they encountered or abnormal reactions to certain sights or sounds and some experience flashbacks. Physical effects of battle can include scars, amputations, lesions, loss of bodily functions, blindness, paralysis and death. Battles affect politics; a decisive battle can cause the losing side to surrender, while a Pyrrhic victory such as the Battle of Asculum can cause the winning side to reconsider its goals. Battles in civil wars have often decided the fate of monarchs or political factions. Famous examples include the Wars of the Roses, as well as the Jacobite risings. Battles affect the commitment of one side or the other to the continuance of a war, for example the Battle of Inchon and the Battle of Huế during the Tet Offensive.",
"title": "Effects"
},
{
"paragraph_id": 28,
"text": "",
"title": "External links"
}
] | A battle is an occurrence of combat in warfare between opposing military units of any number or size. A war usually consists of multiple battles. In general, a battle is a military engagement that is well defined in duration, area, and force commitment. An engagement with only limited commitment between the forces and without decisive results is sometimes called a skirmish. The word "battle" can also be used infrequently to refer to an entire operational campaign, although this usage greatly diverges from its conventional or customary meaning. Generally, the word "battle" is used for such campaigns if referring to a protracted combat encounter in which either one or both of the combatants had the same methods, resources, and strategic objectives throughout the encounter. Some prominent examples of this would be the Battle of the Atlantic, Battle of Britain, and Battle of Stalingrad, all in World War II. Wars and military campaigns are guided by military strategy, whereas battles take place on a level of planning and execution known as operational mobility. German strategist Carl von Clausewitz stated that "the employment of battles ... to achieve the object of war" was the essence of strategy. | 2001-09-14T17:17:57Z | 2023-10-12T17:24:41Z | [
"Template:Short description",
"Template:More citations needed",
"Template:Reflist",
"Template:Interlanguage link",
"Template:Cite book",
"Template:Refbegin",
"Template:About",
"Template:Citation needed",
"Template:Refend",
"Template:Authority control",
"Template:Military and war",
"Template:Lang",
"Template:History of war",
"Template:Main",
"Template:Commons category"
] | https://en.wikipedia.org/wiki/Battle |
4,182 | Berry Berenson | Berinthia "Berry" Berenson-Perkins (née Berenson; April 14, 1948 – September 11, 2001) was an American actress, model and photographer. She was the widow of actor Anthony Perkins. She died in the September 11 attacks as a passenger on American Airlines Flight 11.
Berry Berenson was born in Murray Hill, Manhattan, New York City. Her mother was born Maria-Luisa Yvonne Radha de Wendt de Kerlor, better known as Gogo Schiaparelli, a socialite of Italian, Swiss, French, and Egyptian ancestry. Her father, Robert Lawrence Berenson, was an American career diplomat turned shipping executive; he was of Lithuanian-Jewish descent, and his family's original surname was "Valvrojenski".
Berenson's maternal grandmother was the Italian-born fashion designer Elsa Schiaparelli, and her maternal grandfather was Wilhelm de Wendt de Kerlor, a Theosophist and psychic medium. Her elder sister, Marisa Berenson, became a well-known model and actress. She also was a great-grandniece of Giovanni Schiaparelli, an Italian astronomer who believed he had discovered the supposed canals of Mars, and a second cousin, once removed, of art expert Bernard Berenson (1865–1959) and his sister Senda Berenson (1868–1954), an athlete and educator who was one of the first two women elected to the Basketball Hall of Fame.
Following a brief modeling career in the late 1960s, Berenson became a freelance photographer. By 1973, her photographs had been published in Life, Glamour, Vogue and Newsweek.
Berenson studied acting at New York's The American Place Theatre with Wynn Handman along with Richard Gere, Philip Anglim, Penelope Milford, Robert Ozn, Ingrid Boulting and her sister Marisa.
Berenson also appeared in several motion pictures. She starred opposite Anthony Perkins in the 1978 Alan Rudolph film Remember My Name, and appeared with Jeff Bridges in the 1979 film Winter Kills and Malcolm McDowell in Cat People (1982).
On August 9, 1973, on Cape Cod, Massachusetts, Berenson, three months pregnant, married her future Remember My Name co-star Anthony Perkins. The couple raised two sons: actor-director Oz Perkins and folk/rock singer-songwriter Elvis Perkins. They remained married until Perkins died from AIDS-related complications on September 12, 1992.
Berenson died on September 11, 2001, as she was returning home to Los Angeles following a holiday on Cape Cod. She and the rest of the passengers and crew aboard American Airlines Flight 11 died when it was hijacked and crashed into the World Trade Center during the September 11 attacks in Manhattan.
At the National September 11 Memorial & Museum, Berenson is memorialized at the North Pool, on Panel N-76.
Media related to Berry Berenson at Wikimedia Commons | [
{
"paragraph_id": 0,
"text": "Berinthia \"Berry\" Berenson-Perkins (née Berenson; April 14, 1948 – September 11, 2001) was an American actress, model and photographer. She was the widow of actor Anthony Perkins. She died in the September 11 attacks as a passenger on American Airlines Flight 11.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Berry Berenson was born in Murray Hill, Manhattan, New York City. Her mother was born Maria-Luisa Yvonne Radha de Wendt de Kerlor, better known as Gogo Schiaparelli, a socialite of Italian, Swiss, French, and Egyptian ancestry. Her father, Robert Lawrence Berenson, was an American career diplomat turned shipping executive; he was of Lithuanian-Jewish descent, and his family's original surname was \"Valvrojenski\".",
"title": "Early life"
},
{
"paragraph_id": 2,
"text": "Berenson's maternal grandmother was the Italian-born fashion designer Elsa Schiaparelli, and her maternal grandfather was Wilhelm de Wendt de Kerlor, a Theosophist and psychic medium. Her elder sister, Marisa Berenson, became a well-known model and actress. She also was a great-grandniece of Giovanni Schiaparelli, an Italian astronomer who believed he had discovered the supposed canals of Mars, and a second cousin, once removed, of art expert Bernard Berenson (1865–1959) and his sister Senda Berenson (1868–1954), an athlete and educator who was one of the first two women elected to the Basketball Hall of Fame.",
"title": "Early life"
},
{
"paragraph_id": 3,
"text": "Following a brief modeling career in the late 1960s, Berenson became a freelance photographer. By 1973, her photographs had been published in Life, Glamour, Vogue and Newsweek.",
"title": "Career"
},
{
"paragraph_id": 4,
"text": "Berenson studied acting at New York's The American Place Theatre with Wynn Handman along with Richard Gere, Philip Anglim, Penelope Milford, Robert Ozn, Ingrid Boulting and her sister Marisa.",
"title": "Career"
},
{
"paragraph_id": 5,
"text": "Berenson also appeared in several motion pictures. She starred opposite Anthony Perkins in the 1978 Alan Rudolph film Remember My Name, and appeared with Jeff Bridges in the 1979 film Winter Kills and Malcolm McDowell in Cat People (1982).",
"title": "Career"
},
{
"paragraph_id": 6,
"text": "On August 9, 1973, in Cape Cod, Massachusetts, Berenson, three months pregnant, married her future Remember My Name co-star Anthony Perkins. The couple raised two sons: actor-director Oz Perkins and folk/rock singer-songwriter Elvis Perkins. They remained married until Perkins died from AIDS-related complications on September 12, 1992.",
"title": "Personal life and death"
},
{
"paragraph_id": 7,
"text": "Berenson died on September 11, 2001, as she was returning home to Los Angeles following a holiday on Cape Cod. She and the rest of the passengers and crew aboard American Airlines Flight 11 died when it was hijacked and crashed into the World Trade Center during the September 11 attacks in Manhattan.",
"title": "Personal life and death"
},
{
"paragraph_id": 8,
"text": "At the National September 11 Memorial & Museum, Berenson is memorialized at the North Pool, on Panel N-76.",
"title": "Personal life and death"
},
{
"paragraph_id": 9,
"text": "Media related to Berry Berenson at Wikimedia Commons",
"title": "External links"
}
] | Berinthia "Berry" Berenson-Perkins was an American actress, model and photographer. She was the widow of actor Anthony Perkins. She died in the September 11 attacks as a passenger on American Airlines Flight 11. | 2001-09-15T04:49:53Z | 2023-12-16T16:44:18Z | [
"Template:Use American English",
"Template:IMDb name",
"Template:Short description",
"Template:Use mdy dates",
"Template:Infobox person",
"Template:Cite magazine",
"Template:Webarchive",
"Template:Reflist",
"Template:Cite web",
"Template:Cite encyclopedia",
"Template:Portal",
"Template:Allmovie name",
"Template:Née",
"Template:Commons category-inline",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Berry_Berenson |
4,183 | Botany | Botany, also called plant science (or plant sciences), plant biology or phytology, is the science of plant life and a branch of biology. A botanist, plant scientist or phytologist is a scientist who specialises in this field. The term "botany" comes from the Ancient Greek word βοτάνη (botanē) meaning "pasture", "herbs", "grass", or "fodder"; βοτάνη is in turn derived from βόσκειν (boskein), "to feed" or "to graze". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International Botanical Congress. Nowadays, botanists (in the strict sense) study approximately 410,000 species of land plants, of which some 391,000 species are vascular plants (including approximately 369,000 species of flowering plants), and approximately 20,000 are bryophytes.
Botany originated in prehistory as herbalism with the efforts of early humans to identify – and later cultivate – plants that were edible, poisonous, and possibly medicinal, making it one of the first endeavours of human investigation. Medieval physic gardens, often attached to monasteries, contained plants possibly having medicinal benefit. They were forerunners of the first botanical gardens attached to universities, founded from the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy, and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species.
In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th century, botanists exploited the techniques of molecular genetic analysis, including genomics and proteomics and DNA sequences to classify plants more accurately.
Modern botany is a broad, multidisciplinary subject with contributions and insights from most other areas of science and technology. Research topics include the study of plant structure, growth and differentiation, reproduction, biochemistry and primary metabolism, chemical products, development, diseases, evolutionary relationships, systematics, and plant taxonomy. Dominant themes in 21st century plant science are molecular genetics and epigenetics, which study the mechanisms and control of gene expression during differentiation of plant cells and tissues. Botanical research has diverse applications in providing staple foods, materials such as timber, oil, rubber, fibre and drugs, in modern horticulture, agriculture and forestry, plant propagation, breeding and genetic modification, in the synthesis of chemicals and raw materials for construction and energy production, in environmental management, and the maintenance of biodiversity.
Botany originated as herbalism, the study and use of plants for their possible medicinal properties. The early recorded history of botany includes many ancient writings and plant classifications. Examples of early botanical works have been found in ancient texts from India dating back to before 1100 BCE, Ancient Egypt, in archaic Ancient Iranic Avestan writings, and in works from China purportedly from before 221 BCE.
Modern botany traces its roots back to Ancient Greece, specifically to Theophrastus (c. 371–287 BCE), a student of Aristotle who invented and described many of its principles and is widely regarded in the scientific community as the "Father of Botany". His major works, Enquiry into Plants and On the Causes of Plants, constitute the most important contributions to botanical science until the Middle Ages, almost seventeen centuries later.
Another work from Ancient Greece that made an early impact on botany is De materia medica, a five-volume encyclopedia about herbal medicine written in the middle of the first century by Greek physician and pharmacologist Pedanius Dioscorides. De materia medica was widely read for more than 1,500 years. Important contributions from the medieval Muslim world include Ibn Wahshiyya's Nabatean Agriculture, Abū Ḥanīfa Dīnawarī's (828–896) Book of Plants, and Ibn Bassal's The Classification of Soils. In the early 13th century, Abu al-Abbas al-Nabati and Ibn al-Baitar (d. 1248) wrote on botany in a systematic and scientific manner.
In the mid-16th century, botanical gardens were founded in a number of Italian universities. The Padua botanical garden, founded in 1545, is usually considered to be the first that is still in its original location. These gardens continued the practical value of earlier "physic gardens", often associated with monasteries, in which plants were cultivated for suspected medicinal uses. They supported the growth of botany as an academic subject. Lectures were given about the plants grown in the gardens. Botanical gardens came much later to northern Europe; the first in England was the University of Oxford Botanic Garden in 1621.
German physician Leonhart Fuchs (1501–1566) was one of "the three German fathers of botany", along with theologian Otto Brunfels (1489–1534) and physician Hieronymus Bock (1498–1554) (also called Hieronymus Tragus). Fuchs and Brunfels broke away from the tradition of copying earlier works to make original observations of their own. Bock created his own system of plant classification.
Physician Valerius Cordus (1515–1544) authored a botanically and pharmacologically important herbal Historia Plantarum in 1544 and a pharmacopoeia of lasting importance, the Dispensatorium in 1546. Naturalist Conrad von Gesner (1516–1565) and herbalist John Gerard (1545–c. 1611) published herbals covering the supposed medicinal uses of plants. Naturalist Ulisse Aldrovandi (1522–1605) was considered the father of natural history, which included the study of plants. In 1665, using an early microscope, polymath Robert Hooke discovered cells, a term he coined, in cork, and a short time later in living plant tissue.
During the 18th century, systems of plant identification were developed comparable to dichotomous keys, where unidentified plants are placed into taxonomic groups (e.g. family, genus and species) by making a series of choices between pairs of characters. The choice and sequence of the characters may be artificial in keys designed purely for identification (diagnostic keys) or more closely related to the natural or phyletic order of the taxa in synoptic keys. By the 18th century, new plants for study were arriving in Europe in increasing numbers from newly discovered countries and the European colonies worldwide. In 1753, Carl Linnaeus published his Species Plantarum, a hierarchical classification of plant species that remains the reference point for modern botanical nomenclature. This established a standardised binomial or two-part naming scheme where the first name represented the genus and the second identified the species within the genus. For the purposes of identification, Linnaeus's Systema Sexuale classified plants into 24 groups according to the number of their male sexual organs. The 24th group, Cryptogamia, included all plants with concealed reproductive parts: mosses, liverworts, ferns, algae and fungi.
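A dichotomous key is essentially a small decision tree, and its logic is easy to sketch in code. The sketch below is purely illustrative: the questions and groups are simplified assumptions, not taken from any published key.

```python
# A minimal sketch of a dichotomous (diagnostic) key as a decision tree.
# Each internal node poses a two-way choice between contrasting characters;
# each leaf names a taxonomic group. The characters and groups here are
# simplified examples for illustration, not from a published key.

key = ("Does the plant produce seeds?", {
    "yes": ("Are the seeds enclosed in an ovary?", {
        "yes": "angiosperms (flowering plants)",
        "no": "gymnosperms",
    }),
    "no": ("Does the plant have vascular tissue?", {
        "yes": "ferns and allies",
        "no": "bryophytes (mosses, liverworts, hornworts)",
    }),
})

def identify(node):
    """Walk the key interactively until a leaf (a named group) is reached."""
    while isinstance(node, tuple):
        question, branches = node
        answer = input(f"{question} (yes/no): ").strip().lower()
        node = branches.get(answer, node)  # re-ask on unrecognised input
    return node

if __name__ == "__main__":
    print("Identified as:", identify(key))
```

Running the script walks the user through the choices one pair of characters at a time, mirroring how a diagnostic key is used in practice.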
Increasing knowledge of plant anatomy, morphology and life cycles led to the realisation that there were more natural affinities between plants than the artificial sexual system of Linnaeus. Adanson (1763), de Jussieu (1789), and Candolle (1819) all proposed various alternative natural systems of classification that grouped plants using a wider range of shared characters and were widely followed. The Candollean system reflected his ideas of the progression of morphological complexity and the later Bentham & Hooker system, which was influential until the mid-20th century, was influenced by Candolle's approach. Darwin's publication of the Origin of Species in 1859 and his concept of common descent required modifications to the Candollean system to reflect evolutionary relationships as distinct from mere morphological similarity.
Botany was greatly stimulated by the appearance of the first "modern" textbook, Matthias Schleiden's Grundzüge der Wissenschaftlichen Botanik, published in English in 1849 as Principles of Scientific Botany. Schleiden was a microscopist and an early plant anatomist who co-founded the cell theory with Theodor Schwann and Rudolf Virchow and was among the first to grasp the significance of the cell nucleus that had been described by Robert Brown in 1831. In 1855, Adolf Fick formulated Fick's laws that enabled the calculation of the rates of molecular diffusion in biological systems.
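For reference, the standard one-dimensional statement of Fick's laws, as used in such diffusion-rate calculations, is:

```latex
% Fick's first law: the diffusive flux J is proportional to the
% concentration gradient, with diffusion coefficient D.
J = -D\,\frac{\partial C}{\partial x}

% Fick's second law: how the concentration C changes over time
% as diffusion proceeds.
\frac{\partial C}{\partial t} = D\,\frac{\partial^{2} C}{\partial x^{2}}
```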
Building upon the gene-chromosome theory of heredity that originated with Gregor Mendel (1822–1884), August Weismann (1834–1914) proved that inheritance only takes place through gametes. No other cells can pass on inherited characters. The work of Katherine Esau (1898–1997) on plant anatomy is still a major foundation of modern botany. Her books Plant Anatomy and Anatomy of Seed Plants have been key plant structural biology texts for more than half a century.
The discipline of plant ecology was pioneered in the late 19th century by botanists such as Eugenius Warming, who produced the hypothesis that plants form communities, and his mentor and successor Christen C. Raunkiær whose system for describing plant life forms is still in use today. The concept that the composition of plant communities such as temperate broadleaf forest changes by a process of ecological succession was developed by Henry Chandler Cowles, Arthur Tansley and Frederic Clements. Clements is credited with the idea of climax vegetation as the most complex vegetation that an environment can support and Tansley introduced the concept of ecosystems to biology. Building on the extensive earlier work of Alphonse de Candolle, Nikolai Vavilov (1887–1943) produced accounts of the biogeography, centres of origin, and evolutionary history of economic plants.
Particularly since the mid-1960s there have been advances in understanding of the physics of plant physiological processes such as transpiration (the transport of water within plant tissues), the temperature dependence of rates of water evaporation from the leaf surface and the molecular diffusion of water vapour and carbon dioxide through stomatal apertures. These developments, coupled with new methods for measuring the size of stomatal apertures, and the rate of photosynthesis have enabled precise description of the rates of gas exchange between plants and the atmosphere. Innovations in statistical analysis by Ronald Fisher, Frank Yates and others at Rothamsted Experimental Station facilitated rational experimental design and data analysis in botanical research. The discovery and identification of the auxin plant hormones by Kenneth V. Thimann in 1948 enabled regulation of plant growth by externally applied chemicals. Frederick Campion Steward pioneered techniques of micropropagation and plant tissue culture controlled by plant hormones. The synthetic auxin 2,4-dichlorophenoxyacetic acid or 2,4-D was one of the first commercial synthetic herbicides.
20th century developments in plant biochemistry have been driven by modern techniques of organic chemical analysis, such as spectroscopy, chromatography and electrophoresis. With the rise of the related molecular-scale biological approaches of molecular biology, genomics, proteomics and metabolomics, the relationship between the plant genome and most aspects of the biochemistry, physiology, morphology and behaviour of plants can be subjected to detailed experimental analysis. The concept originally stated by Gottlieb Haberlandt in 1902 that all plant cells are totipotent and can be grown in vitro ultimately enabled the use of genetic engineering experimentally to knock out a gene or genes responsible for a specific trait, or to add genes such as GFP that report when a gene of interest is being expressed. These technologies enable the biotechnological use of whole plants or plant cell cultures grown in bioreactors to synthesise pesticides, antibiotics or other pharmaceuticals, as well as the practical application of genetically modified crops designed for traits such as improved yield.
Modern morphology recognises a continuum between the major morphological categories of root, stem (caulome), leaf (phyllome) and trichome. Furthermore, it emphasises structural dynamics. Modern systematics aims to reflect and discover phylogenetic relationships between plants. Modern molecular phylogenetics largely ignores morphological characters, relying on DNA sequences as data. Molecular analysis of DNA sequences from most families of flowering plants enabled the Angiosperm Phylogeny Group to publish in 1998 a phylogeny of flowering plants, answering many of the questions about relationships among angiosperm families and species. The theoretical possibility of a practical method for identification of plant species and commercial varieties by DNA barcoding is the subject of active current research.
The study of plants is vital because they underpin almost all animal life on Earth by generating a large proportion of the oxygen and food that, through aerobic respiration, provide humans and other organisms with the chemical energy they need to exist. Plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. As a by-product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. In addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. Plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil.
Historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. Botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. At each of these levels, a botanist may be concerned with the classification (taxonomy), phylogeny and evolution, structure (anatomy and morphology), or function (physiology) of plant life.
The strictest definition of "plant" includes only the "land plants" or embryophytes, which include seed plants (gymnosperms, including the pines, and flowering plants) and the free-sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. Embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. They have life cycles with alternating haploid and diploid phases. The sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. Other groups of organisms that were previously studied by botanists include bacteria (now studied in bacteriology), fungi (mycology) – including lichen-forming fungi (lichenology), non-chlorophyte algae (phycology), and viruses (virology). However, attention is still given to these groups by botanists, and fungi (including lichens) and photosynthetic protists are usually covered in introductory botany courses.
Palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. Cyanobacteria, the first oxygen-releasing photosynthetic organisms on Earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. The new photosynthetic plants (along with their algal relatives) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen-free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years.
Among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life's basic ingredients: energy, carbon, oxygen, nitrogen and water, and ways that our plant stewardship can help address the global environmental issues of resource management, conservation, human food security, biologically invasive organisms, carbon sequestration, climate change, and sustainability.
Virtually all staple foods come either directly from primary production by plants, or indirectly from animals that eat them. Plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. This is what ecologists call the first trophic level. The modern forms of the major staple foods, such as teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics.
Botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity's ability to feed the world and provide food security for future generations. Botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. Ethnobotany is the study of the relationships between plants and people. When applied to the investigation of historical plant–people relationships, ethnobotany may be referred to as archaeobotany or palaeoethnobotany. Some of the earliest plant–people relationships arose among the indigenous peoples of Canada, who learned to distinguish edible plants from inedible ones; this knowledge was later recorded by ethnobotanists.
Plant biochemistry is the study of the chemical processes used by plants. Some of these processes are used in their primary metabolism like the photosynthetic Calvin cycle and crassulacean acid metabolism. Others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds.
Plants and various other groups of photosynthetic eukaryotes collectively known as "algae" have unique organelles known as chloroplasts. Chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. Chloroplasts and cyanobacteria contain the blue-green pigment chlorophyll a. Chlorophyll a (as well as its plant and green algal-specific cousin chlorophyll b) absorbs light in the blue-violet and orange/red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. The energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy-rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen (O2) as a by-product.
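The overall process described above can be summarised by the familiar balanced equation for oxygenic photosynthesis:

```latex
% Net equation of oxygenic photosynthesis: carbon dioxide and water,
% driven by light energy captured by chlorophyll, yield glucose and
% molecular oxygen.
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \;\xrightarrow{\text{light}}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```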
The light energy captured by chlorophyll a is initially in the form of electrons (and later a proton gradient) that is used to make molecules of ATP and NADPH, which temporarily store and transport energy. Their energy is used in the light-independent reactions of the Calvin cycle by the enzyme rubisco to produce molecules of the 3-carbon sugar glyceraldehyde 3-phosphate (G3P). Glyceraldehyde 3-phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. Some of the glucose is converted to starch, which is stored in the chloroplast. Starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose, is used for the same purpose in the sunflower family Asteraceae. Some of the glucose is converted to sucrose (common table sugar) for export to the rest of the plant.
Unlike in animals (which lack chloroplasts), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids and most amino acids. The fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes and making the polymer cutin, which is found in the plant cuticle that protects land plants from drying out.
Plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. Vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant draws water through them under water stress. Lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. Sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants; it is responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. It is widely regarded as a marker for the start of land plant evolution during the Ordovician period. The concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the Ordovician and Silurian periods. Many monocots like maize and the pineapple and some dicots like the Asteraceae have since independently evolved pathways like Crassulacean acid metabolism and the C4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common C3 carbon fixation pathway. These biochemical strategies are unique to land plants.
Phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. Some of these compounds are toxins such as the alkaloid coniine from hemlock. Others, such as the essential oils peppermint oil and lemon oil, are useful for their aroma, as flavourings and spices (e.g., capsaicin), and in medicine as pharmaceuticals as in opium from opium poppies. Many medicinal and recreational drugs, such as tetrahydrocannabinol (active ingredient in cannabis), caffeine, morphine and nicotine come directly from plants. Others are simple derivatives of botanical natural products. For example, the painkiller aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. Popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. Most alcoholic beverages come from fermentation of carbohydrate-rich plant products such as barley (beer), rice (sake) and grapes (wine). Native Americans have used various plants to treat illness or disease for thousands of years. This knowledge of plants has been recorded by ethnobotanists and has in turn been used by pharmaceutical companies as a route to drug discovery.
Plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine; yellow weld and blue woad, used together to produce Lincoln green; indoxyl, source of the blue dye indigo traditionally used to dye denim; and the artist's pigments gamboge and rose madder.
Sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. Charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal-smelting fuel, as a filter material and adsorbent and as an artist's material and is one of the three ingredients of gunpowder. Cellulose, the world's most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. Products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. Sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. Sweetgrass was used by Native Americans to ward off insects such as mosquitoes; its repellent properties were later traced to the molecules phytol and coumarin in research reported by the American Chemical Society.
Plant ecology is the science of the functional relationships between plants and their habitats – the environments where they complete their life cycles. Plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. Some ecologists also draw on empirical data from indigenous peoples, gathered by ethnobotanists; this information can reveal a great deal about how the land was thousands of years ago and how it has changed since. The goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change.
Plants depend on certain edaphic (soil) and climatic factors in their environment but can modify these factors too. For example, they can change their environment's albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. Plants compete with other organisms in their ecosystem for resources. They interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. Regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest.
Herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. Other organisms form mutually beneficial relationships with plants. For example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food; ants are recruited by ant plants to provide protection; honey bees, bats and other animals pollinate flowers; and humans and other animals act as dispersal vectors to spread spores and seeds.
Plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. For example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. Palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. Estimates of atmospheric CO2 concentrations since the Palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. Ozone depletion can expose plants to higher levels of ultraviolet radiation-B (UV-B), resulting in lower growth rates. Moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction.
Inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. Gregor Mendel discovered the genetic laws of inheritance by studying inherited traits such as seed shape in Pisum sativum (peas). What Mendel learned from studying plants has had far-reaching benefits outside of botany. Similarly, "jumping genes" were discovered by Barbara McClintock while she was studying maize. Nevertheless, there are some distinctive genetic differences between plants and other organisms.
Species boundaries in plants may be weaker than in animals, and cross-species hybrids are often possible. A familiar example is peppermint, Mentha × piperita, a sterile hybrid between Mentha aquatica and spearmint, Mentha spicata. The many cultivated varieties of wheat are the result of multiple inter- and intra-specific crosses between wild species and their hybrids. Angiosperms with monoecious flowers often have self-incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. This is one of several methods used by plants to promote outcrossing. In many land plants the male and female gametes are produced by separate individuals. These species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes.
Unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. The formation of stem tubers in potato is one example. Particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. This is one of several types of apomixis that occur in plants. Apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent.
Most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. This can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid (endopolyploidy), or during gamete formation. An allopolyploid plant may result from a hybridisation event between two different species. Both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross-breed successfully with the parent population because there is a mismatch in chromosome numbers. These plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. Some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. Durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. The commercial banana is an example of a sterile, seedless triploid hybrid. Common dandelion is a triploid that produces viable seeds by apomictic seed.
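The chromosome arithmetic behind these ploidy levels is simple. A minimal sketch follows, using wheat's well-established base chromosome number x = 7; the function name is a hypothetical illustration, not an established API:

```python
# Chromosome arithmetic for polyploid wheats.
# Wheat has a base (monoploid) chromosome number of x = 7, so the
# somatic chromosome count is simply ploidy * x.

BASE_NUMBER = 7  # x, the monoploid chromosome number of wheat

def somatic_chromosomes(ploidy: int, x: int = BASE_NUMBER) -> int:
    """Return the somatic chromosome count for a given ploidy level."""
    return ploidy * x

# Durum wheat is tetraploid (4x), bread wheat hexaploid (6x).
print(somatic_chromosomes(4))  # 28 chromosomes in durum wheat
print(somatic_chromosomes(6))  # 42 chromosomes in bread wheat
```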
As in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non-Mendelian. Chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants.
A considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the thale cress, Arabidopsis thaliana, a weedy species in the mustard family (Brassicaceae). The genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of DNA, forming one of the smallest genomes among flowering plants. Arabidopsis was the first plant to have its genome sequenced, in 2000. The sequencing of some other relatively small genomes, of rice (Oryza sativa) and Brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally.
Model plants such as Arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. Ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. Corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in C4 plants. The single-celled green alga Chlamydomonas reinhardtii, while not an embryophyte itself, contains a green-pigmented chloroplast related to that of land plants, making it useful for study. The red alga Cyanidioschyzon merolae has also been used to study some basic chloroplast functions. Spinach, peas, soybeans and the moss Physcomitrella patens are commonly used to study plant cell biology.
Agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus-inducing Ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. Schell and Van Montagu (1977) hypothesised that the Ti plasmid could be a natural vector for introducing the Nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. Today, genetic modification of the Ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops.
Epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying DNA sequence but cause the organism's genes to behave (or "express themselves") differently. One example of epigenetic change is the marking of the genes by DNA methylation which determines whether they will be expressed or not. Gene expression can also be controlled by repressor proteins that attach to silencer regions of the DNA and prevent that region of the DNA code from being expressed. Epigenetic marks may be added or removed from the DNA during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. Epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell's life. Some epigenetic changes have been shown to be heritable, while others are reset in the germ cells.
Epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. A single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. The process results from the epigenetic activation of some genes and inhibition of others.
Unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. Exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. While plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate.
Epigenetic changes can lead to paramutations, which do not follow the rules of Mendelian inheritance. These epigenetic marks are carried from one generation to the next, with one allele inducing a change on the other.
The chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria (commonly but incorrectly known as "blue-green algae") and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident.
The algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. There are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. The algal division Charophyta, sister to the green algal division Chlorophyta, is considered to contain the ancestor of true plants. The Charophyte class Charophyceae and the land plant sub-kingdom Embryophyta together form the monophyletic group or clade Streptophytina.
Nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. They include mosses, liverworts and hornworts. Pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free-living gametophytes evolved during the Silurian period and diversified into several lineages during the late Silurian and early Devonian. Representatives of the lycopods have survived to the present day. By the end of the Devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved "megaspory" – their spores were of two distinct sizes, larger megaspores and smaller microspores. Their reduced gametophytes developed from megaspores retained within the spore-producing organs (megasporangia) of the sporophyte, a condition known as endospory. Seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers (integuments). The young sporophyte develops within the seed, which on germination splits to release it. The earliest known seed plants date from the latest Devonian Famennian stage. Following the evolution of the seed habit, seed plants diversified, giving rise to a number of now-extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. Gymnosperms produce "naked seeds" not fully enclosed in an ovary; modern representatives include conifers, cycads, Ginkgo, and Gnetales. Angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. Ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms.
Plant physiology encompasses all the internal chemical and physical activities of plants associated with life. Chemicals obtained from the air, soil and water form the basis of all plant metabolism. The energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. Photoautotrophs, including all green plants, algae and cyanobacteria, gather energy directly from sunlight by photosynthesis. Heterotrophs, including all animals, all fungi, all completely parasitic plants, and non-photosynthetic bacteria, take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. Respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis.
Molecules are moved within plants by transport processes that operate at a variety of spatial scales. Subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. Minerals and water are transported from roots to other parts of the plant in the transpiration stream. Transport can occur by diffusion, osmosis, active transport or mass flow. Examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. In vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. Most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. Sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes.
Plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. Tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of Mimosa pudica, the insect traps of Venus flytrap and bladderworts, and the pollinia of orchids.
The hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. Darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded "It is hardly an exaggeration to say that the tip of the radicle ... acts like the brain of one of the lower animals ... directing the several movements". About the same time, the role of auxins (from the Greek auxein, to grow) in control of plant growth was first outlined by the Dutch scientist Frits Went. The first known auxin, indole-3-acetic acid (IAA), which promotes cell growth, was only isolated from plants about 50 years later. This compound mediates the tropic responses of shoots and roots towards light and gravity. The finding in 1939 that plant callus could be maintained in culture containing IAA, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones, were key steps in the development of plant biotechnology and genetic modification.
Cytokinins are a class of plant hormones named for their control of cell division (especially cytokinesis). The natural cytokinin zeatin was discovered in corn, Zea mays, and is a derivative of the purine adenine. Zeatin is produced in roots and transported to shoots in the xylem where it promotes cell division, bud development, and the greening of chloroplasts. The gibberellins, such as gibberellic acid, are diterpenes synthesised from acetyl CoA via the mevalonate pathway. They are involved in the promotion of germination and dormancy-breaking in seeds, in regulation of plant height by controlling stem elongation and the control of flowering. Abscisic acid (ABA) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. It inhibits cell division, promotes seed maturation and dormancy, and promotes stomatal closure. It was so named because it was originally thought to control abscission. Ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. It is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon, which is rapidly metabolised to produce ethylene, are used on an industrial scale to promote ripening of cotton, pineapples and other climacteric crops.
Another class of phytohormones is the jasmonates, first isolated from the oil of Jasminum grandiflorum; they regulate wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack.
In addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. This can result in adaptive changes in a process known as photomorphogenesis. Phytochromes are the photoreceptors in a plant that are sensitive to light.
Plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. All plants are multicellular eukaryotes, their DNA stored in nuclei. The characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. Other plastids contain storage products such as starch (amyloplasts) or lipids (elaioplasts). Uniquely, streptophyte cells and those of the green algal order Trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division.
The bodies of vascular plants, including clubmosses, ferns and seed plants (gymnosperms and angiosperms), generally have aerial and subterranean subsystems. The shoots consist of stems bearing green photosynthesising leaves and reproductive structures. The underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. Non-vascular plants, the liverworts, hornworts and mosses, do not produce ground-penetrating vascular roots and most of the plant participates in photosynthesis. The sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts.
The root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. Cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. Stolons and tubers are examples of shoots that can grow roots. Roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. In the event that one of the systems is lost, the other can often regrow it. In fact it is possible to grow an entire plant from a single leaf, as is the case with plants in Streptocarpus sect. Saintpaulia, or even a single cell – which can dedifferentiate into a callus (a mass of unspecialised cells) that can grow into a new plant. In vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. Roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots.
Stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. Leaves gather sunlight and carry out photosynthesis. Large, flat, flexible, green leaves are called foliage leaves. Gymnosperms, such as conifers, cycads, Ginkgo, and gnetophytes are seed-producing plants with open seeds. Angiosperms are seed-producing plants that produce flowers and have enclosed seeds. Woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues: wood (secondary xylem) and bark (secondary phloem and cork). All gymnosperms and many angiosperms are woody plants. Some plants reproduce sexually, some asexually, and some via both means.
Although reference to major morphological categories such as root, stem, leaf, and trichome is useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. Furthermore, structures can be seen as processes, that is, as combinations of processes.
Systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. It involves, or is related to, biological classification, scientific taxonomy and phylogenetics. Biological classification is the method by which botanists group organisms into categories such as genera or species. Biological classification is a form of scientific taxonomy. Modern taxonomy is rooted in the work of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been revised to align better with the Darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. While scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses DNA sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. The dominant classification system is called Linnaean taxonomy. It includes ranks and binomial nomenclature. The nomenclature of botanical organisms is codified in the International Code of Nomenclature for algae, fungi, and plants (ICN) and administered by the International Botanical Congress.
Kingdom Plantae belongs to Domain Eukaryota and is broken down recursively until each species is separately classified. The order is: Kingdom; Phylum (or Division); Class; Order; Family; Genus (plural genera); Species. The scientific name of a plant represents its genus and its species within the genus, resulting in a single worldwide name for each organism. For example, the tiger lily is Lilium columbianum. Lilium is the genus, and columbianum the specific epithet. The combination is the name of the species. When writing the scientific name of an organism, it is proper to capitalise the first letter in the genus and put all of the specific epithet in lowercase. Additionally, the entire term is ordinarily italicised (or underlined when italics are not available).
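These naming conventions are mechanical enough to capture in code. The following is a hypothetical helper for formatting binomial names according to the rules above; it is an illustration, not part of any botanical software standard.

```python
# A hypothetical helper that normalises a binomial name according to the
# conventions described above: genus capitalised, specific epithet in
# lowercase, and the whole name italicised (here, for Markdown or HTML
# output; plain text is returned when italics are not available).

def format_binomial(genus: str, epithet: str, markup: str = "markdown") -> str:
    name = f"{genus.capitalize()} {epithet.lower()}"
    if markup == "markdown":
        return f"*{name}*"
    if markup == "html":
        return f"<i>{name}</i>"
    return name

print(format_binomial("lilium", "COLUMBIANUM"))  # *Lilium columbianum*
```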
The evolutionary relationships and history of a group of organisms constitute its phylogeny. Phylogenetic studies attempt to discover phylogenies. The basic approach is to use similarities based on shared inheritance to determine relationships. As an example, species of Pereskia are trees or bushes with prominent leaves. They do not obviously resemble a typical leafless cactus such as an Echinocactus. However, both Pereskia and Echinocactus have spines produced from areoles (highly specialised pad-like structures), suggesting that the two genera are indeed related.
Judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. Some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. The cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history – such as those evolved separately in different groups (homoplasies) or those left over from ancestors (plesiomorphies) – and derived characters, which have been passed down from innovations in a shared ancestor (apomorphies). Only derived characters, such as the spine-producing areoles of cacti, provide evidence for descent from a common ancestor. The results of cladistic analyses are expressed as cladograms: tree-like diagrams showing the pattern of evolutionary branching and descent.
From the 1990s onwards, the predominant approach to constructing phylogenies for living plants has been molecular phylogenetics, which uses molecular characters, particularly DNA sequences, rather than morphological characters like the presence or absence of spines and areoles. The difference is that the genetic code itself is used to decide evolutionary relationships, instead of being used indirectly via the characters it gives rise to. Clive Stace describes this as having "direct access to the genetic basis of evolution." As a simple example, prior to the use of genetic evidence, fungi were thought either to be plants or to be more closely related to plants than animals. Genetic evidence suggests that the true evolutionary relationship of multicelled organisms is as shown in the cladogram below – fungi are more closely related to animals than to plants.
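As a toy illustration of how DNA sequences can serve directly as systematic data, the sketch below scores pairwise differences between short aligned sequences. The sequences are invented, and real molecular phylogenetics uses statistical models and tree-building algorithms rather than raw counts.

```python
# Toy illustration of molecular phylogenetics: count pairwise differences
# between aligned DNA sequences. The sequences are invented for illustration.

from itertools import combinations

sequences = {
    "species_A": "ATGGCCATTA",
    "species_B": "ATGGCCGTTA",
    "species_C": "ATGACCGTCA",
}

def hamming(a: str, b: str) -> int:
    """Count differing sites between two equal-length aligned sequences."""
    return sum(x != y for x, y in zip(a, b))

for (name1, seq1), (name2, seq2) in combinations(sequences.items(), 2):
    print(f"{name1} vs {name2}: {hamming(seq1, seq2)} differences")

# Fewer differences suggest a closer evolutionary relationship;
# here species_A and species_B (1 difference) group together.
```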
In 1998, the Angiosperm Phylogeny Group published a phylogeny for flowering plants based on an analysis of DNA sequences from most families of flowering plants. As a result of this work, many questions, such as which families represent the earliest branches of angiosperms, have now been answered. Investigating how plant species are related to each other allows botanists to better understand the process of evolution in plants. Despite the study of model plants and increasing use of DNA evidence, there is ongoing work and discussion among taxonomists about how best to classify plants into various taxa. Technological developments such as computers and electron microscopes have greatly increased the level of detail studied and speed at which data can be analysed.
A few symbols are in current use in botany. A number of others are obsolete; for example, Linnaeus used planetary symbols ⟨♂⟩ (Mars) for biennial plants, ⟨♃⟩ (Jupiter) for herbaceous perennials and ⟨♄⟩ (Saturn) for woody perennials, based on the planets' orbital periods of 2, 12 and 30 years; and Willd used ⟨♄⟩ (Saturn) for neuter in addition to ⟨☿⟩ (Mercury) for hermaphroditic. The following symbols are still used: | [
{
"paragraph_id": 0,
"text": "Botany, also called plant science (or plant sciences), plant biology or phytology, is the science of plant life and a branch of biology. A botanist, plant scientist or phytologist is a scientist who specialises in this field. The term \"botany\" comes from the Ancient Greek word βοτάνη (botanē) meaning \"pasture\", \"herbs\" \"grass\", or \"fodder\"; βοτάνη is in turn derived from βόσκειν (boskein), \"to feed\" or \"to graze\". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International Botanical Congress. Nowadays, botanists (in the strict sense) study approximately 410,000 species of land plants of which some 391,000 species are vascular plants (including approximately 369,000 species of flowering plants), and approximately 20,000 are bryophytes.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Botany originated in prehistory as herbalism with the efforts of early humans to identify – and later cultivate – plants that were edible, poisonous, and possibly medicinal, making it one of the first endeavours of human investigation. Medieval physic gardens, often attached to monasteries, contained plants possibly having medicinal benefit. They were forerunners of the first botanical gardens attached to universities, founded from the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy, and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th century, botanists exploited the techniques of molecular genetic analysis, including genomics and proteomics and DNA sequences to classify plants more accurately.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Modern botany is a broad, multidisciplinary subject with contributions and insights from most other areas of science and technology. Research topics include the study of plant structure, growth and differentiation, reproduction, biochemistry and primary metabolism, chemical products, development, diseases, evolutionary relationships, systematics, and plant taxonomy. Dominant themes in 21st century plant science are molecular genetics and epigenetics, which study the mechanisms and control of gene expression during differentiation of plant cells and tissues. Botanical research has diverse applications in providing staple foods, materials such as timber, oil, rubber, fibre and drugs, in modern horticulture, agriculture and forestry, plant propagation, breeding and genetic modification, in the synthesis of chemicals and raw materials for construction and energy production, in environmental management, and the maintenance of biodiversity.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Botany originated as herbalism, the study and use of plants for their possible medicinal properties. The early recorded history of botany includes many ancient writings and plant classifications. Examples of early botanical works have been found in ancient texts from India dating back to before 1100 BCE, Ancient Egypt, in archaic Ancient Iranic Avestan writings, and in works from China purportedly from before 221 BCE.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Modern botany traces its roots back to Ancient Greece specifically to Theophrastus (c. 371–287 BCE), a student of Aristotle who invented and described many of its principles and is widely regarded in the scientific community as the \"Father of Botany\". His major works, Enquiry into Plants and On the Causes of Plants, constitute the most important contributions to botanical science until the Middle Ages, almost seventeen centuries later.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Another work from Ancient Greece that made an early impact on botany is De materia medica, a five-volume encyclopedia about preliminary herbal medicine written in the middle of the first century by Greek physician and pharmacologist Pedanius Dioscorides. De materia medica was widely read for more than 1,500 years. Important contributions from the medieval Muslim world include Ibn Wahshiyya's Nabatean Agriculture, Abū Ḥanīfa Dīnawarī's (828–896) the Book of Plants, and Ibn Bassal's The Classification of Soils. In the early 13th century, Abu al-Abbas al-Nabati, and Ibn al-Baitar (d. 1248) wrote on botany in a systematic and scientific manner.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In the mid-16th century, botanical gardens were founded in a number of Italian universities. The Padua botanical garden in 1545 is usually considered to be the first which is still in its original location. These gardens continued the practical value of earlier \"physic gardens\", often associated with monasteries, in which plants were cultivated for suspected medicinal uses. They supported the growth of botany as an academic subject. Lectures were given about the plants grown in the gardens. Botanical gardens came much later to northern Europe; the first in England was the University of Oxford Botanic Garden in 1621.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "German physician Leonhart Fuchs (1501–1566) was one of \"the three German fathers of botany\", along with theologian Otto Brunfels (1489–1534) and physician Hieronymus Bock (1498–1554) (also called Hieronymus Tragus). Fuchs and Brunfels broke away from the tradition of copying earlier works to make original observations of their own. Bock created his own system of plant classification.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Physician Valerius Cordus (1515–1544) authored a botanically and pharmacologically important herbal Historia Plantarum in 1544 and a pharmacopoeia of lasting importance, the Dispensatorium in 1546. Naturalist Conrad von Gesner (1516–1565) and herbalist John Gerard (1545–c. 1611) published herbals covering the supposed medicinal uses of plants. Naturalist Ulisse Aldrovandi (1522–1605) was considered the father of natural history, which included the study of plants. In 1665, using an early microscope, Polymath Robert Hooke discovered cells, a term he coined, in cork, and a short time later in living plant tissue.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "During the 18th century, systems of plant identification were developed comparable to dichotomous keys, where unidentified plants are placed into taxonomic groups (e.g. family, genus and species) by making a series of choices between pairs of characters. The choice and sequence of the characters may be artificial in keys designed purely for identification (diagnostic keys) or more closely related to the natural or phyletic order of the taxa in synoptic keys. By the 18th century, new plants for study were arriving in Europe in increasing numbers from newly discovered countries and the European colonies worldwide. In 1753, Carl Linnaeus published his Species Plantarum, a hierarchical classification of plant species that remains the reference point for modern botanical nomenclature. This established a standardised binomial or two-part naming scheme where the first name represented the genus and the second identified the species within the genus. For the purposes of identification, Linnaeus's Systema Sexuale classified plants into 24 groups according to the number of their male sexual organs. The 24th group, Cryptogamia, included all plants with concealed reproductive parts, mosses, liverworts, ferns, algae and fungi.",
"title": "History"
},
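In effect, a dichotomous key is a binary decision tree walked by answering paired questions. The following minimal Python sketch uses invented characters and taxa purely for illustration; it is not a real identification key:

# A dichotomous key rendered as a binary decision tree; the characters
# and taxa here are illustrative only, not a real key.
key = ("Produces flowers?",
       ("Leaf veins parallel?", "monocot", "eudicot"),
       ("Bears seeds in cones?", "conifer", "fern or ally"))

def identify(node, answers):
    # Follow a sequence of yes/no answers until a name is reached.
    for answer in answers:
        question, if_yes, if_no = node
        node = if_yes if answer else if_no
    return node

print(identify(key, [True, False]))  # flowers, veins not parallel -> 'eudicot'

A synoptic key would order the same questions by natural affinity rather than diagnostic convenience, but the traversal is identical.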
{
"paragraph_id": 11,
"text": "Increasing knowledge of plant anatomy, morphology and life cycles led to the realisation that there were more natural affinities between plants than the artificial sexual system of Linnaeus. Adanson (1763), de Jussieu (1789), and Candolle (1819) all proposed various alternative natural systems of classification that grouped plants using a wider range of shared characters and were widely followed. The Candollean system reflected his ideas of the progression of morphological complexity and the later Bentham & Hooker system, which was influential until the mid-19th century, was influenced by Candolle's approach. Darwin's publication of the Origin of Species in 1859 and his concept of common descent required modifications to the Candollean system to reflect evolutionary relationships as distinct from mere morphological similarity.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Botany was greatly stimulated by the appearance of the first \"modern\" textbook, Matthias Schleiden's Grundzüge der Wissenschaftlichen Botanik, published in English in 1849 as Principles of Scientific Botany. Schleiden was a microscopist and an early plant anatomist who co-founded the cell theory with Theodor Schwann and Rudolf Virchow and was among the first to grasp the significance of the cell nucleus that had been described by Robert Brown in 1831. In 1855, Adolf Fick formulated Fick's laws that enabled the calculation of the rates of molecular diffusion in biological systems.",
"title": "History"
},
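For reference, Fick's laws mentioned here take the following one-dimensional form (a standard textbook statement, not taken from this article), where J is the diffusive flux, C the concentration and D the diffusion coefficient:

J = -D \frac{\partial C}{\partial x}, \qquad \frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2}

The first law gives the steady-state flux down a concentration gradient; the second describes how a concentration profile evolves in time.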
{
"paragraph_id": 13,
"text": "Building upon the gene-chromosome theory of heredity that originated with Gregor Mendel (1822–1884), August Weismann (1834–1914) proved that inheritance only takes place through gametes. No other cells can pass on inherited characters. The work of Katherine Esau (1898–1997) on plant anatomy is still a major foundation of modern botany. Her books Plant Anatomy and Anatomy of Seed Plants have been key plant structural biology texts for more than half a century.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The discipline of plant ecology was pioneered in the late 19th century by botanists such as Eugenius Warming, who produced the hypothesis that plants form communities, and his mentor and successor Christen C. Raunkiær whose system for describing plant life forms is still in use today. The concept that the composition of plant communities such as temperate broadleaf forest changes by a process of ecological succession was developed by Henry Chandler Cowles, Arthur Tansley and Frederic Clements. Clements is credited with the idea of climax vegetation as the most complex vegetation that an environment can support and Tansley introduced the concept of ecosystems to biology. Building on the extensive earlier work of Alphonse de Candolle, Nikolai Vavilov (1887–1943) produced accounts of the biogeography, centres of origin, and evolutionary history of economic plants.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Particularly since the mid-1960s there have been advances in understanding of the physics of plant physiological processes such as transpiration (the transport of water within plant tissues), the temperature dependence of rates of water evaporation from the leaf surface and the molecular diffusion of water vapour and carbon dioxide through stomatal apertures. These developments, coupled with new methods for measuring the size of stomatal apertures, and the rate of photosynthesis have enabled precise description of the rates of gas exchange between plants and the atmosphere. Innovations in statistical analysis by Ronald Fisher, Frank Yates and others at Rothamsted Experimental Station facilitated rational experimental design and data analysis in botanical research. The discovery and identification of the auxin plant hormones by Kenneth V. Thimann in 1948 enabled regulation of plant growth by externally applied chemicals. Frederick Campion Steward pioneered techniques of micropropagation and plant tissue culture controlled by plant hormones. The synthetic auxin 2,4-dichlorophenoxyacetic acid or 2,4-D was one of the first commercial synthetic herbicides.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "20th century developments in plant biochemistry have been driven by modern techniques of organic chemical analysis, such as spectroscopy, chromatography and electrophoresis. With the rise of the related molecular-scale biological approaches of molecular biology, genomics, proteomics and metabolomics, the relationship between the plant genome and most aspects of the biochemistry, physiology, morphology and behaviour of plants can be subjected to detailed experimental analysis. The concept originally stated by Gottlieb Haberlandt in 1902 that all plant cells are totipotent and can be grown in vitro ultimately enabled the use of genetic engineering experimentally to knock out a gene or genes responsible for a specific trait, or to add genes such as GFP that report when a gene of interest is being expressed. These technologies enable the biotechnological use of whole plants or plant cell cultures grown in bioreactors to synthesise pesticides, antibiotics or other pharmaceuticals, as well as the practical application of genetically modified crops designed for traits such as improved yield.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Modern morphology recognises a continuum between the major morphological categories of root, stem (caulome), leaf (phyllome) and trichome. Furthermore, it emphasises structural dynamics. Modern systematics aims to reflect and discover phylogenetic relationships between plants. Modern Molecular phylogenetics largely ignores morphological characters, relying on DNA sequences as data. Molecular analysis of DNA sequences from most families of flowering plants enabled the Angiosperm Phylogeny Group to publish in 1998 a phylogeny of flowering plants, answering many of the questions about relationships among angiosperm families and species. The theoretical possibility of a practical method for identification of plant species and commercial varieties by DNA barcoding is the subject of active current research.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The study of plants is vital because they underpin almost all animal life on Earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. Plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. As a by-product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. In addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. Plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil.",
"title": "Scope and importance"
},
{
"paragraph_id": 19,
"text": "Historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. Botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. At each of these levels, a botanist may be concerned with the classification (taxonomy), phylogeny and evolution, structure (anatomy and morphology), or function (physiology) of plant life.",
"title": "Scope and importance"
},
{
"paragraph_id": 20,
"text": "The strictest definition of \"plant\" includes only the \"land plants\" or embryophytes, which include seed plants (gymnosperms, including the pines, and flowering plants) and the free-sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. Embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. They have life cycles with alternating haploid and diploid phases. The sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. Other groups of organisms that were previously studied by botanists include bacteria (now studied in bacteriology), fungi (mycology) – including lichen-forming fungi (lichenology), non-chlorophyte algae (phycology), and viruses (virology). However, attention is still given to these groups by botanists, and fungi (including lichens) and photosynthetic protists are usually covered in introductory botany courses.",
"title": "Scope and importance"
},
{
"paragraph_id": 21,
"text": "Palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. Cyanobacteria, the first oxygen-releasing photosynthetic organisms on Earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. The new photosynthetic plants (along with their algal relatives) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen-free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years.",
"title": "Scope and importance"
},
{
"paragraph_id": 22,
"text": "Among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life's basic ingredients: energy, carbon, oxygen, nitrogen and water, and ways that our plant stewardship can help address the global environmental issues of resource management, conservation, human food security, biologically invasive organisms, carbon sequestration, climate change, and sustainability.",
"title": "Scope and importance"
},
{
"paragraph_id": 23,
"text": "Virtually all staple foods come either directly from primary production by plants, or indirectly from animals that eat them. Plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. This is what ecologists call the first trophic level. The modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics.",
"title": "Scope and importance"
},
{
"paragraph_id": 24,
"text": "Botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity's ability to feed the world and provide food security for future generations. Botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. Ethnobotany is the study of the relationships between plants and people. When applied to the investigation of historical plant–people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. Some of the earliest plant-people relationships arose between the indigenous people of Canada in identifying edible plants from inedible plants. This relationship the indigenous people had with plants was recorded by ethnobotanists.",
"title": "Scope and importance"
},
{
"paragraph_id": 25,
"text": "Plant biochemistry is the study of the chemical processes used by plants. Some of these processes are used in their primary metabolism like the photosynthetic Calvin cycle and crassulacean acid metabolism. Others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds.",
"title": "Plant biochemistry"
},
{
"paragraph_id": 26,
"text": "Plants and various other groups of photosynthetic eukaryotes collectively known as \"algae\" have unique organelles known as chloroplasts. Chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. Chloroplasts and cyanobacteria contain the blue-green pigment chlorophyll a. Chlorophyll a (as well as its plant and green algal-specific cousin chlorophyll b) absorbs light in the blue-violet and orange/red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. The energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy-rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen (O2) as a by-product.",
"title": "Plant biochemistry"
},
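The overall process described here is conventionally summarised by the net equation of oxygenic photosynthesis (the familiar textbook form, stated here for reference):

6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}

Respiration, discussed later in the article, is essentially this reaction run in reverse.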
{
"paragraph_id": 27,
"text": "The light energy captured by chlorophyll a is initially in the form of electrons (and later a proton gradient) that's used to make molecules of ATP and NADPH which temporarily store and transport energy. Their energy is used in the light-independent reactions of the Calvin cycle by the enzyme rubisco to produce molecules of the 3-carbon sugar glyceraldehyde 3-phosphate (G3P). Glyceraldehyde 3-phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. Some of the glucose is converted to starch which is stored in the chloroplast. Starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family Asteraceae. Some of the glucose is converted to sucrose (common table sugar) for export to the rest of the plant.",
"title": "Plant biochemistry"
},
{
"paragraph_id": 28,
"text": "Unlike in animals (which lack chloroplasts), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. The fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out.",
"title": "Plant biochemistry"
},
{
"paragraph_id": 29,
"text": "Plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. Vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. Lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. Sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. It is widely regarded as a marker for the start of land plant evolution during the Ordovician period. The concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the Ordovician and Silurian periods. Many monocots like maize and the pineapple and some dicots like the Asteraceae have since independently evolved pathways like Crassulacean acid metabolism and the C4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common C3 carbon fixation pathway. These biochemical strategies are unique to land plants.",
"title": "Plant biochemistry"
},
{
"paragraph_id": 30,
"text": "Phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. Some of these compounds are toxins such as the alkaloid coniine from hemlock. Others, such as the essential oils peppermint oil and lemon oil are useful for their aroma, as flavourings and spices (e.g., capsaicin), and in medicine as pharmaceuticals as in opium from opium poppies. Many medicinal and recreational drugs, such as tetrahydrocannabinol (active ingredient in cannabis), caffeine, morphine and nicotine come directly from plants. Others are simple derivatives of botanical natural products. For example, the pain killer aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. Popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. Most alcoholic beverages come from fermentation of carbohydrate-rich plant products such as barley (beer), rice (sake) and grapes (wine). Native Americans have used various plants as ways of treating illness or disease for thousands of years. This knowledge Native Americans have on plants has been recorded by enthnobotanists and then in turn has been used by pharmaceutical companies as a way of drug discovery.",
"title": "Plant biochemistry"
},
{
"paragraph_id": 31,
"text": "Plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce Lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim and the artist's pigments gamboge and rose madder.",
"title": "Plant biochemistry"
},
{
"paragraph_id": 32,
"text": "Sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. Charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal-smelting fuel, as a filter material and adsorbent and as an artist's material and is one of the three ingredients of gunpowder. Cellulose, the world's most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. Products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. Sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. Sweetgrass was used by Native Americans to ward off bugs like mosquitoes. These bug repelling properties of sweetgrass were later found by the American Chemical Society in the molecules phytol and coumarin.",
"title": "Plant biochemistry"
},
{
"paragraph_id": 33,
"text": "Plant ecology is the science of the functional relationships between plants and their habitats – the environments where they complete their life cycles. Plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. Some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. This information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. The goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change.",
"title": "Plant ecology"
},
{
"paragraph_id": 34,
"text": "Plants depend on certain edaphic (soil) and climatic factors in their environment but can modify these factors too. For example, they can change their environment's albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. Plants compete with other organisms in their ecosystem for resources. They interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. Regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest.",
"title": "Plant ecology"
},
{
"paragraph_id": 35,
"text": "Herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. Other organisms form mutually beneficial relationships with plants. For example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds.",
"title": "Plant ecology"
},
{
"paragraph_id": 36,
"text": "Plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. For example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. Palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. Estimates of atmospheric CO2 concentrations since the Palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. Ozone depletion can expose plants to higher levels of ultraviolet radiation-B (UV-B), resulting in lower growth rates. Moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction.",
"title": "Plant ecology"
},
{
"paragraph_id": 37,
"text": "Inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. Gregor Mendel discovered the genetic laws of inheritance by studying inherited traits such as shape in Pisum sativum (peas). What Mendel learned from studying plants has had far-reaching benefits outside of botany. Similarly, \"jumping genes\" were discovered by Barbara McClintock while she was studying maize. Nevertheless, there are some distinctive genetic differences between plants and other organisms.",
"title": "Genetics"
},
{
"paragraph_id": 38,
"text": "Species boundaries in plants may be weaker than in animals, and cross species hybrids are often possible. A familiar example is peppermint, Mentha × piperita, a sterile hybrid between Mentha aquatica and spearmint, Mentha spicata. The many cultivated varieties of wheat are the result of multiple inter- and intra-specific crosses between wild species and their hybrids. Angiosperms with monoecious flowers often have self-incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. This is one of several methods used by plants to promote outcrossing. In many land plants the male and female gametes are produced by separate individuals. These species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes.",
"title": "Genetics"
},
{
"paragraph_id": 39,
"text": "Unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. The formation of stem tubers in potato is one example. Particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. This is one of several types of apomixis that occur in plants. Apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent.",
"title": "Genetics"
},
{
"paragraph_id": 40,
"text": "Most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. This can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid (endopolyploidy), or during gamete formation. An allopolyploid plant may result from a hybridisation event between two different species. Both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross-breed successfully with the parent population because there is a mismatch in chromosome numbers. These plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. Some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. Durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. The commercial banana is an example of a sterile, seedless triploid hybrid. Common dandelion is a triploid that produces viable seeds by apomictic seed.",
"title": "Genetics"
},
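The ploidy levels named here are simple multiples of a base chromosome number x. For the wheats, x = 7, so tetraploid durum wheat has 2n = 4x = 28 chromosomes and hexaploid bread wheat has 2n = 6x = 42; the sterile triploid cultivated banana, with x = 11, has 2n = 3x = 33.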
{
"paragraph_id": 41,
"text": "As in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non-Mendelian. Chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants.",
"title": "Genetics"
},
{
"paragraph_id": 42,
"text": "A considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the Thale cress, Arabidopsis thaliana, a weedy species in the mustard family (Brassicaceae). The genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of DNA, forming one of the smallest genomes among flowering plants. Arabidopsis was the first plant to have its genome sequenced, in 2000. The sequencing of some other relatively small genomes, of rice (Oryza sativa) and Brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally.",
"title": "Genetics"
},
{
"paragraph_id": 43,
"text": "Model plants such as Arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. Ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. Corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in C4 plants. The single celled green alga Chlamydomonas reinhardtii, while not an embryophyte itself, contains a green-pigmented chloroplast related to that of land plants, making it useful for study. A red alga Cyanidioschyzon merolae has also been used to study some basic chloroplast functions. Spinach, peas, soybeans and a moss Physcomitrella patens are commonly used to study plant cell biology.",
"title": "Genetics"
},
{
"paragraph_id": 44,
"text": "Agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus-inducing Ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. Schell and Van Montagu (1977) hypothesised that the Ti plasmid could be a natural vector for introducing the Nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. Today, genetic modification of the Ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops.",
"title": "Genetics"
},
{
"paragraph_id": 45,
"text": "Epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying DNA sequence but cause the organism's genes to behave (or \"express themselves\") differently. One example of epigenetic change is the marking of the genes by DNA methylation which determines whether they will be expressed or not. Gene expression can also be controlled by repressor proteins that attach to silencer regions of the DNA and prevent that region of the DNA code from being expressed. Epigenetic marks may be added or removed from the DNA during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. Epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell's life. Some epigenetic changes have been shown to be heritable, while others are reset in the germ cells.",
"title": "Genetics"
},
{
"paragraph_id": 46,
"text": "Epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. A single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. The process results from the epigenetic activation of some genes and inhibition of others.",
"title": "Genetics"
},
{
"paragraph_id": 47,
"text": "Unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. Exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. While plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate.",
"title": "Genetics"
},
{
"paragraph_id": 48,
"text": "Epigenetic changes can lead to paramutations, which do not follow the Mendelian heritage rules. These epigenetic marks are carried from one generation to the next, with one allele inducing a change on the other.",
"title": "Genetics"
},
{
"paragraph_id": 49,
"text": "The chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria, (commonly but incorrectly known as \"blue-green algae\") and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident.",
"title": "Plant evolution"
},
{
"paragraph_id": 50,
"text": "The algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. There are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. The algal division Charophyta, sister to the green algal division Chlorophyta, is considered to contain the ancestor of true plants. The Charophyte class Charophyceae and the land plant sub-kingdom Embryophyta together form the monophyletic group or clade Streptophytina.",
"title": "Plant evolution"
},
{
"paragraph_id": 51,
"text": "Nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. They include mosses, liverworts and hornworts. Pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free-living gametophytes evolved during the Silurian period and diversified into several lineages during the late Silurian and early Devonian. Representatives of the lycopods have survived to the present day. By the end of the Devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved \"megaspory\" – their spores were of two distinct sizes, larger megaspores and smaller microspores. Their reduced gametophytes developed from megaspores retained within the spore-producing organs (megasporangia) of the sporophyte, a condition known as endospory. Seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers (integuments). The young sporophyte develops within the seed, which on germination splits to release it. The earliest known seed plants date from the latest Devonian Famennian stage. Following the evolution of the seed habit, seed plants diversified, giving rise to a number of now-extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. Gymnosperms produce \"naked seeds\" not fully enclosed in an ovary; modern representatives include conifers, cycads, Ginkgo, and Gnetales. Angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. Ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms.",
"title": "Plant evolution"
},
{
"paragraph_id": 52,
"text": "Plant physiology encompasses all the internal chemical and physical activities of plants associated with life. Chemicals obtained from the air, soil and water form the basis of all plant metabolism. The energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. Photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. Heterotrophs including all animals, all fungi, all completely parasitic plants, and non-photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. Respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis.",
"title": "Plant physiology"
},
{
"paragraph_id": 53,
"text": "Molecules are moved within plants by transport processes that operate at a variety of spatial scales. Subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. Minerals and water are transported from roots to other parts of the plant in the transpiration stream. Diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. Examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. In vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. Most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. Sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes.",
"title": "Plant physiology"
},
{
"paragraph_id": 54,
"text": "Plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. Tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of Mimosa pudica, the insect traps of Venus flytrap and bladderworts, and the pollinia of orchids.",
"title": "Plant physiology"
},
{
"paragraph_id": 55,
"text": "The hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. Darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded \"It is hardly an exaggeration to say that the tip of the radicle . . acts like the brain of one of the lower animals . . directing the several movements\". About the same time, the role of auxins (from the Greek auxein, to grow) in control of plant growth was first outlined by the Dutch scientist Frits Went. The first known auxin, indole-3-acetic acid (IAA), which promotes cell growth, was only isolated from plants about 50 years later. This compound mediates the tropic responses of shoots and roots towards light and gravity. The finding in 1939 that plant callus could be maintained in culture containing IAA, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones were key steps in the development of plant biotechnology and genetic modification.",
"title": "Plant physiology"
},
{
"paragraph_id": 56,
"text": "Cytokinins are a class of plant hormones named for their control of cell division (especially cytokinesis). The natural cytokinin zeatin was discovered in corn, Zea mays, and is a derivative of the purine adenine. Zeatin is produced in roots and transported to shoots in the xylem where it promotes cell division, bud development, and the greening of chloroplasts. The gibberelins, such as gibberelic acid are diterpenes synthesised from acetyl CoA via the mevalonate pathway. They are involved in the promotion of germination and dormancy-breaking in seeds, in regulation of plant height by controlling stem elongation and the control of flowering. Abscisic acid (ABA) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. It inhibits cell division, promotes seed maturation, and dormancy, and promotes stomatal closure. It was so named because it was originally thought to control abscission. Ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. It is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon which is rapidly metabolised to produce ethylene, are used on industrial scale to promote ripening of cotton, pineapples and other climacteric crops.",
"title": "Plant physiology"
},
{
"paragraph_id": 57,
"text": "Another class of phytohormones is the jasmonates, first isolated from the oil of Jasminum grandiflorum which regulates wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack.",
"title": "Plant physiology"
},
{
"paragraph_id": 58,
"text": "In addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. This can result in adaptive changes in a process known as photomorphogenesis. Phytochromes are the photoreceptors in a plant that are sensitive to light.",
"title": "Plant physiology"
},
{
"paragraph_id": 59,
"text": "Plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. All plants are multicellular eukaryotes, their DNA stored in nuclei. The characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. Other plastids contain storage products such as starch (amyloplasts) or lipids (elaioplasts). Uniquely, streptophyte cells and those of the green algal order Trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division.",
"title": "Plant anatomy and morphology"
},
{
"paragraph_id": 60,
"text": "The bodies of vascular plants including clubmosses, ferns and seed plants (gymnosperms and angiosperms) generally have aerial and subterranean subsystems. The shoots consist of stems bearing green photosynthesising leaves and reproductive structures. The underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. Non-vascular plants, the liverworts, hornworts and mosses do not produce ground-penetrating vascular roots and most of the plant participates in photosynthesis. The sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts.",
"title": "Plant anatomy and morphology"
},
{
"paragraph_id": 61,
"text": "The root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. Cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. Stolons and tubers are examples of shoots that can grow roots. Roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. In the event that one of the systems is lost, the other can often regrow it. In fact it is possible to grow an entire plant from a single leaf, as is the case with plants in Streptocarpus sect. Saintpaulia, or even a single cell – which can dedifferentiate into a callus (a mass of unspecialised cells) that can grow into a new plant. In vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. Roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots.",
"title": "Plant anatomy and morphology"
},
{
"paragraph_id": 62,
"text": "Stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. Leaves gather sunlight and carry out photosynthesis. Large, flat, flexible, green leaves are called foliage leaves. Gymnosperms, such as conifers, cycads, Ginkgo, and gnetophytes are seed-producing plants with open seeds. Angiosperms are seed-producing plants that produce flowers and have enclosed seeds. Woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues: wood (secondary xylem) and bark (secondary phloem and cork). All gymnosperms and many angiosperms are woody plants. Some plants reproduce sexually, some asexually, and some via both means.",
"title": "Plant anatomy and morphology"
},
{
"paragraph_id": 63,
"text": "Although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. Furthermore, structures can be seen as processes, that is, process combinations.",
"title": "Plant anatomy and morphology"
},
{
"paragraph_id": 64,
"text": "Systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. It involves, or is related to, biological classification, scientific taxonomy and phylogenetics. Biological classification is the method by which botanists group organisms into categories such as genera or species. Biological classification is a form of scientific taxonomy. Modern taxonomy is rooted in the work of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been revised to align better with the Darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. While scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses DNA sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. The dominant classification system is called Linnaean taxonomy. It includes ranks and binomial nomenclature. The nomenclature of botanical organisms is codified in the International Code of Nomenclature for algae, fungi, and plants (ICN) and administered by the International Botanical Congress.",
"title": "Systematic botany"
},
{
"paragraph_id": 65,
"text": "Kingdom Plantae belongs to Domain Eukaryota and is broken down recursively until each species is separately classified. The order is: Kingdom; Phylum (or Division); Class; Order; Family; Genus (plural genera); Species. The scientific name of a plant represents its genus and its species within the genus, resulting in a single worldwide name for each organism. For example, the tiger lily is Lilium columbianum. Lilium is the genus, and columbianum the specific epithet. The combination is the name of the species. When writing the scientific name of an organism, it is proper to capitalise the first letter in the genus and put all of the specific epithet in lowercase. Additionally, the entire term is ordinarily italicised (or underlined when italics are not available).",
"title": "Systematic botany"
},
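A minimal sketch of these capitalisation and italicisation rules in Python; the function name and the HTML italic markup are illustrative choices, not anything prescribed by the nomenclature codes:

def format_binomial(genus, epithet, italic=True):
    # Capitalise the genus, lower-case the specific epithet, and
    # italicise the whole binomial (HTML tags stand in for italics).
    name = genus.capitalize() + " " + epithet.lower()
    return "<i>" + name + "</i>" if italic else name

print(format_binomial("lilium", "COLUMBIANUM"))  # <i>Lilium columbianum</i>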
{
"paragraph_id": 66,
"text": "The evolutionary relationships and heredity of a group of organisms is called its phylogeny. Phylogenetic studies attempt to discover phylogenies. The basic approach is to use similarities based on shared inheritance to determine relationships. As an example, species of Pereskia are trees or bushes with prominent leaves. They do not obviously resemble a typical leafless cactus such as an Echinocactus. However, both Pereskia and Echinocactus have spines produced from areoles (highly specialised pad-like structures) suggesting that the two genera are indeed related.",
"title": "Systematic botany"
},
{
"paragraph_id": 67,
"text": "Judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. Some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. The cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history – such as those evolved separately in different groups (homoplasies) or those left over from ancestors (plesiomorphies) – and derived characters, which have been passed down from innovations in a shared ancestor (apomorphies). Only derived characters, such as the spine-producing areoles of cacti, provide evidence for descent from a common ancestor. The results of cladistic analyses are expressed as cladograms: tree-like diagrams showing the pattern of evolutionary branching and descent.",
"title": "Systematic botany"
},
{
"paragraph_id": 68,
"text": "From the 1990s onwards, the predominant approach to constructing phylogenies for living plants has been molecular phylogenetics, which uses molecular characters, particularly DNA sequences, rather than morphological characters like the presence or absence of spines and areoles. The difference is that the genetic code itself is used to decide evolutionary relationships, instead of being used indirectly via the characters it gives rise to. Clive Stace describes this as having \"direct access to the genetic basis of evolution.\" As a simple example, prior to the use of genetic evidence, fungi were thought either to be plants or to be more closely related to plants than animals. Genetic evidence suggests that the true evolutionary relationship of multicelled organisms is as shown in the cladogram below – fungi are more closely related to animals than to plants.",
"title": "Systematic botany"
},
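The relationship described in this paragraph is often written in Newick notation, the parenthesised text format commonly used for cladograms, as ((Animals,Fungi),Plants);. The tiny Python sketch below (nested tuples and an indentation printer, purely illustrative) renders the same branching pattern:

# The cladogram of multicellular life described above: fungi group
# with animals, not with plants.
tree = (("Animals", "Fungi"), "Plants")

def show(node, depth=0):
    # Print a nested-tuple tree, indenting one level per split.
    if isinstance(node, tuple):
        print("  " * depth + "+")
        for child in node:
            show(child, depth + 1)
    else:
        print("  " * depth + node)

show(tree)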
{
"paragraph_id": 69,
"text": "In 1998, the Angiosperm Phylogeny Group published a phylogeny for flowering plants based on an analysis of DNA sequences from most families of flowering plants. As a result of this work, many questions, such as which families represent the earliest branches of angiosperms, have now been answered. Investigating how plant species are related to each other allows botanists to better understand the process of evolution in plants. Despite the study of model plants and increasing use of DNA evidence, there is ongoing work and discussion among taxonomists about how best to classify plants into various taxa. Technological developments such as computers and electron microscopes have greatly increased the level of detail studied and speed at which data can be analysed.",
"title": "Systematic botany"
},
{
"paragraph_id": 70,
"text": "A few symbols are in current use in botany. A number of others are obsolete; for example, Linnaeus used planetary symbols ⟨♂⟩ (Mars) for biennial plants, ⟨♃⟩ (Jupiter) for herbaceous perennials and ⟨♄⟩ (Saturn) for woody perennials, based on the planets' orbital periods of 2, 12 and 30 years; and Willd used ⟨♄⟩ (Saturn) for neuter in addition to ⟨☿⟩ (Mercury) for hermaphroditic. The following symbols are still used:",
"title": "Symbols"
}
] | Botany, also called plant science, plant biology or phytology, is the science of plant life and a branch of biology. A botanist, plant scientist or phytologist is a scientist who specialises in this field. The term "botany" comes from the Ancient Greek word βοτάνη (botanē) meaning "pasture", "herbs" "grass", or "fodder"; βοτάνη is in turn derived from βόσκειν (boskein), "to feed" or "to graze". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International Botanical Congress. Nowadays, botanists study approximately 410,000 species of land plants of which some 391,000 species are vascular plants, and approximately 20,000 are bryophytes. Botany originated in prehistory as herbalism with the efforts of early humans to identify – and later cultivate – plants that were edible, poisonous, and possibly medicinal, making it one of the first endeavours of human investigation. Medieval physic gardens, often attached to monasteries, contained plants possibly having medicinal benefit. They were forerunners of the first botanical gardens attached to universities, founded from the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy, and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species. In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th century, botanists exploited the techniques of molecular genetic analysis, including genomics and proteomics and DNA sequences to classify plants more accurately. Modern botany is a broad, multidisciplinary subject with contributions and insights from most other areas of science and technology. Research topics include the study of plant structure, growth and differentiation, reproduction, biochemistry and primary metabolism, chemical products, development, diseases, evolutionary relationships, systematics, and plant taxonomy. Dominant themes in 21st century plant science are molecular genetics and epigenetics, which study the mechanisms and control of gene expression during differentiation of plant cells and tissues. Botanical research has diverse applications in providing staple foods, materials such as timber, oil, rubber, fibre and drugs, in modern horticulture, agriculture and forestry, plant propagation, breeding and genetic modification, in the synthesis of chemicals and raw materials for construction and energy production, in environmental management, and the maintenance of biodiversity. | 2001-09-24T08:25:32Z | 2023-12-09T10:34:57Z | [
"Template:Circa",
"Template:Further",
"Template:Cite journal",
"Template:Wikiquote",
"Template:Redirect-several",
"Template:Sfn",
"Template:Clade",
"Template:Cite book",
"Template:Commonscatinline",
"Template:Main",
"Template:CO2",
"Template:Authority control",
"Template:-",
"Template:Reflist",
"Template:Multiple image",
"Template:Div col end",
"Template:Cbignore",
"Template:Biology nav",
"Template:TopicTOC-Biology",
"Template:C3",
"Template:Efn",
"Template:Angbr",
"Template:Notelist",
"Template:Citation",
"Template:Refbegin",
"Template:Branches of biology",
"Template:Use British English",
"Template:Transl",
"Template:Plant classification",
"Template:C4",
"Template:Cite web",
"Template:Cite dictionary",
"Template:Good article",
"Template:Lang",
"Template:Calvin cycle",
"Template:Transliteration",
"Template:Div col",
"Template:Portal",
"Template:Cite news",
"Template:Webarchive",
"Template:Short description",
"Template:Plain image",
"Template:History of botany",
"Template:Refend",
"Template:Botany"
] | https://en.wikipedia.org/wiki/Botany |
4,184 | Bacillus thuringiensis | Bacillus thuringiensis (or Bt) is a gram-positive, soil-dwelling bacterium, the most commonly used biological pesticide worldwide. B. thuringiensis also occurs naturally in the gut of caterpillars of various types of moths and butterflies, as well as on leaf surfaces, aquatic environments, animal feces, insect-rich environments, and flour mills and grain-storage facilities. It has also been observed to parasitize other moths such as Cadra calidella—in laboratory experiments working with C. calidella, many of the moths were diseased due to this parasite.
During sporulation, many Bt strains produce crystal proteins (proteinaceous inclusions), called delta endotoxins, that have insecticidal action. This has led to their use as insecticides, and more recently to genetically modified crops using Bt genes, such as Bt corn. Many crystal-producing Bt strains, though, do not have insecticidal properties. The subspecies israelensis is commonly used for control of mosquitoes and of fungus gnats.
As a toxic mechanism, cry proteins bind to specific receptors on the membranes of mid-gut (epithelial) cells of the targeted pests, resulting in their rupture. Other organisms (including humans, other animals and non-targeted insects) that lack the appropriate receptors in their gut cannot be affected by the cry protein, and therefore are not affected by Bt.
In 1902, B. thuringiensis was first discovered in silkworms by Japanese sericultural engineer Ishiwatari Shigetane (石渡 繁胤). He named it B. sotto, using the Japanese word sottō (卒倒, 'collapse'), here referring to bacillary paralysis. In 1911, German microbiologist Ernst Berliner rediscovered it when he isolated it as the cause of a disease called Schlaffsucht in flour moth caterpillars in Thuringia (hence the specific name thuringiensis, "Thuringian"). B. sotto would later be reassigned as B. thuringiensis var. sotto.
In 1976, Robert A. Zakharyan reported the presence of a plasmid in a strain of B. thuringiensis and suggested the plasmid's involvement in endospore and crystal formation. B. thuringiensis is closely related to B. cereus, a soil bacterium, and B. anthracis, the cause of anthrax; the three organisms differ mainly in their plasmids. Like other members of the genus, all three are capable of producing endospores.
B. thuringiensis is placed in the Bacillus cereus group, which is variously defined as seven closely related species: B. cereus sensu stricto (B. cereus), B. anthracis, B. thuringiensis, B. mycoides, B. pseudomycoides, B. weihenstephanensis, and B. cytotoxicus; or as six species in a Bacillus cereus sensu lato: B. weihenstephanensis, B. mycoides, B. pseudomycoides, B. cereus, B. thuringiensis, and B. anthracis. Within this grouping B.t. is more closely related to B.ce. It is more distantly related to B.w., B.m., B.p., and B.cy.
There are several dozen recognized subspecies of B. thuringiensis. Subspecies commonly used as insecticides include B. thuringiensis subspecies kurstaki (Btk), subspecies israelensis (Bti) and subspecies aizawai (Bta). Some Bti lineages are clonal.
Some strains are known to carry the same genes that produce enterotoxins in B. cereus, and so it is possible that the entire B. cereus sensu lato group may have the potential to be enteropathogens.
The proteins that B. thuringiensis is best known for are encoded by cry genes. In most strains of B. thuringiensis, these genes are located on a plasmid (in other words, cry is not a chromosomal gene in most strains). If these plasmids are lost, the bacterium becomes indistinguishable from B. cereus, as B. thuringiensis has no other distinguishing species characteristics. Plasmid exchange has been observed both naturally and experimentally, both within B.t. and between B.t. and two congeners, B. cereus and B. mycoides.
plcR is an indispensable transcription regulator of most virulence factors, its absence greatly reducing virulence and toxicity. Some strains do naturally complete their life cycle with an inactivated plcR. It is half of a two-gene operon along with the heptapeptide papR. papR is part of quorum sensing in B. thuringiensis.
Various strains, including Btk ATCC 33679, carry plasmids belonging to the wider pXO1-like family. (The pXO1 family is a family common in B. cereus, with members ≈330kb in length; its members differ from pXO1 by replacement of the pXO1 pathogenicity island.) The insect parasite Btk HD73 carries a pXO2-like plasmid (pBT9727) lacking the 35kb pathogenicity island of pXO2 itself, and in fact having no identifiable virulence factors. (The pXO2 family does not have replacement of the pathogenicity island, instead simply lacking that part of pXO2.)
The genomes of the B. cereus group may contain two types of introns, dubbed group I and group II. B.t. strains variously carry zero to five group I introns and zero to thirteen group II introns.
There is still insufficient information to determine whether chromosome-plasmid coevolution to enable adaptation to particular environmental niches has occurred or is even possible.
Shared with B. cereus but so far not found elsewhere - including in other members of the species group - are the efflux pump BC3663, the N-acyl-L-amino-acid amidohydrolase BC3664, and the methyl-accepting chemotaxis protein BC5034.
B. thuringiensis has proteome diversity similar to that of its close relative B. cereus.
The insecticidal protein expressed in Bt cotton is a crystal (Cry) protein.
Upon sporulation, B. thuringiensis forms crystals of proteinaceous insecticidal delta endotoxins (δ-endotoxins) of two types: crystal proteins (Cry proteins), which are encoded by cry genes, and cytolytic (Cyt) proteins.
Cry toxins have specific activities against insect species of the orders Lepidoptera (moths and butterflies), Diptera (flies and mosquitoes), Coleoptera (beetles) and Hymenoptera (wasps, bees, ants and sawflies), as well as against nematodes. Thus, B. thuringiensis serves as an important reservoir of Cry toxins for production of biological insecticides and insect-resistant genetically modified crops. When insects ingest toxin crystals, their alkaline digestive tracts denature the insoluble crystals, making them soluble and thus amenable to being cut with proteases found in the insect gut, which liberate the toxin from the crystal. The Cry toxin is then inserted into the insect gut cell membrane, paralyzing the digestive tract and forming a pore. The insect stops eating and starves to death; live Bt bacteria may also colonize the insect, which can contribute to death. Death occurs within a few hours or weeks. The midgut bacteria of susceptible larvae may be required for B. thuringiensis insecticidal activity.
A B. thuringiensis small RNA called BtsR1 can silence Cry5Ba toxin expression when the bacterium is outside the host by binding to the RBS site of the Cry5Ba toxin transcript, thereby avoiding nematode behavioral defenses. The silencing results in increased ingestion of the bacteria by C. elegans. The expression of BtsR1 is then reduced after ingestion, resulting in Cry5Ba toxin production and host death.
In 1996 another class of insecticidal proteins in Bt was discovered: the vegetative insecticidal proteins (Vip; InterPro: IPR022180). Vip proteins do not share sequence homology with Cry proteins, in general do not compete for the same receptors, and some kill different insects than do Cry proteins.
In 2000, a novel subgroup of Cry protein, designated parasporin, was discovered from non-insecticidal B. thuringiensis isolates. The proteins of the parasporin group are defined as B. thuringiensis and related bacterial parasporal proteins that are not hemolytic, but capable of preferentially killing cancer cells. As of January 2013, parasporins comprise six subfamilies: PS1 to PS6.
Spores and crystalline insecticidal proteins produced by B. thuringiensis have been used to control insect pests since the 1920s and are often applied as liquid sprays. They are now used as specific insecticides under trade names such as DiPel and Thuricide. Because of their specificity, these pesticides are regarded as environmentally friendly, with little or no effect on humans, wildlife, pollinators, and most other beneficial insects, and are used in organic farming; however, the manuals for these products do contain many environmental and human health warnings, and a 2012 European regulatory peer review of five approved strains found that, while data exist to support some claims of low toxicity to humans and the environment, the data are insufficient to justify many of these claims.
New strains of Bt are developed and introduced over time as insects develop resistance to Bt, or in order to force mutations that modify organism characteristics, to use homologous recombination-based genetic engineering to improve crystal size and increase pesticidal activity, or to broaden the host range of Bt and obtain more effective formulations. Each new strain is given a unique number and registered with the U.S. EPA, and allowances may be given for genetic modification depending on "its parental strains, the proposed pesticide use pattern, and the manner and extent to which the organism has been genetically modified". Formulations of Bt that are approved for organic farming in the US are listed at the website of the Organic Materials Review Institute (OMRI), and several university extension websites offer advice on how to use Bt spore or protein preparations in organic farming.
The Belgian company Plant Genetic Systems (now part of Bayer CropScience) was the first company (in 1985) to develop genetically modified crops (tobacco) with insect tolerance by expressing cry genes from B. thuringiensis; the resulting crops contain delta endotoxin. The Bt tobacco was never commercialized; tobacco plants are used to test genetic modifications since they are easy to manipulate genetically and are not part of the food supply.
In 1995, potato plants producing CRY 3A Bt toxin were approved safe by the Environmental Protection Agency, making it the first human-modified pesticide-producing crop to be approved in the US, though many plants produce pesticides naturally, including tobacco, coffee plants, cocoa, cotton and black walnut. This was the 'New Leaf' potato, and it was removed from the market in 2001 due to lack of interest.
In 1996, genetically modified maize producing Bt Cry protein was approved, which killed the European corn borer and related species; subsequent Bt genes were introduced that killed corn rootworm larvae.
The Bt genes engineered into crops and approved for release include, singly and stacked: Cry1A.105, CryIAb, CryIF, Cry2Ab, Cry3Bb1, Cry34Ab1, Cry35Ab1, mCry3A, and VIP, and the engineered crops include corn and cotton.
Corn genetically modified to produce VIP was first approved in the US in 2010.
In India, by 2014, more than seven million cotton farmers, occupying twenty-six million acres, had adopted Bt cotton.
Monsanto developed a soybean expressing Cry1Ac and the glyphosate-resistance gene for the Brazilian market, which completed the Brazilian regulatory process in 2010.
Bt aspen - specifically Populus hybrids - have been developed, and they suffer less leaf damage from insect herbivory. The results have not been entirely positive, however: the intended result - better timber yield - was not achieved, with no growth advantage despite the reduction in herbivore damage; one of their major pests still preys upon the transgenic trees; and their leaf litter decomposes differently due to the transgenic toxins, resulting in alterations to nearby aquatic insect populations.
The use of Bt toxins as plant-incorporated protectants prompted the need for extensive evaluation of their safety for use in foods and potential unintended impacts on the environment.
Concerns over the safety of consumption of genetically modified plant materials that contain Cry proteins have been addressed in extensive dietary risk assessment studies. As a toxic mechanism, cry proteins bind to specific receptors on the membranes of mid-gut (epithelial) cells of the targeted pests, resulting in their rupture. While the target pests are exposed to the toxins primarily through leaf and stalk material, Cry proteins are also expressed in other parts of the plant, including trace amounts in maize kernels which are ultimately consumed by both humans and animals. However, other organisms (including humans, other animals and non-targeted insects) that lack the appropriate receptors in their gut cannot be affected by the cry protein, and therefore are not affected by Bt.
Animal models have been used to assess human health risk from consumption of products containing Cry proteins. The United States Environmental Protection Agency recognizes mouse acute oral feeding studies where doses as high as 5,000 mg/kg body weight resulted in no observed adverse effects. Research on other known toxic proteins suggests that toxicity occurs at much lower doses, further suggesting that Bt toxins are not toxic to mammals. The results of toxicology studies are further strengthened by the lack of observed toxicity from decades of use of B. thuringiensis and its crystalline proteins as an insecticidal spray.
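To put the quoted NOAEL in perspective, here is a minimal arithmetic sketch in Python. The 25 g mouse and 70 kg human body weights are illustrative assumptions, not figures from the studies above, and real risk assessments use allometric scaling and safety factors rather than carrying mg/kg doses across species unchanged:

```python
# Illustrative dose arithmetic for the 5,000 mg/kg NOAEL quoted above.
# Assumed body weights (not from the cited studies): 25 g mouse, 70 kg human.
DOSE_MG_PER_KG = 5_000
mouse_kg, human_kg = 0.025, 70.0

print(f"dose per mouse: {DOSE_MG_PER_KG * mouse_kg:.0f} mg")                     # 125 mg
print(f"naive same-mg/kg human dose: {DOSE_MG_PER_KG * human_kg / 1000:.0f} g")  # 350 g
```

The point of the sketch is only scale: the tested dose corresponds, naively, to hundreds of grams of the protein for an adult human, yet produced no observed adverse effects in mice.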
Introduction of a new protein raised concerns regarding the potential for allergic responses in sensitive individuals. Bioinformatic analysis of known allergens has indicated there is no concern of allergic reactions as a result of consumption of Bt toxins. Additionally, skin prick testing using purified Bt protein resulted in no detectable production of toxin-specific IgE antibodies, even in atopic patients.
Studies have been conducted to evaluate the fate of Bt toxins that are ingested in foods. Bt toxin proteins have been shown to digest within minutes of exposure to simulated gastric fluids. The instability of the proteins in digestive fluids is an additional indication that Cry proteins are unlikely to be allergenic, since most known food allergens resist degradation and are ultimately absorbed in the small intestine.
Ecological risk assessment aims to ensure there is no unintended impact on non-target organisms and no contamination of natural resources as a result of the use of a new substance, such as the use of Bt in genetically modified crops. The impact of Bt toxins on the environments where transgenic plants are grown has been evaluated to ensure no adverse effects outside of targeted crop pests.
Concerns over possible environmental impact from accumulation of Bt toxins from plant tissues, pollen dispersal, and direct secretion from roots have been investigated. Bt toxins may persist in soil for over 200 days, with half-lives between 1.6 and 22 days. Much of the toxin is initially degraded rapidly by microorganisms in the environment, while some is adsorbed by organic matter and persists longer. Some studies, in contrast, claim that the toxins do not persist in the soil. Bt toxins are less likely to accumulate in bodies of water, but pollen shed or soil runoff may deposit them in an aquatic ecosystem. Fish are not susceptible to Bt toxins, even if exposed.
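The half-life figures above can be turned into a quick worked example. This is generic first-order decay arithmetic, not data from the cited studies; it illustrates why persistence beyond 200 days must come from a protected (e.g. adsorbed) fraction rather than from the bulk toxin:

```python
# First-order decay: fraction left after t days, given a half-life.
def fraction_remaining(days: float, half_life_days: float) -> float:
    return 0.5 ** (days / half_life_days)

for t_half in (1.6, 22.0):  # the half-life range quoted above, in days
    print(f"half-life {t_half:>4} d: {fraction_remaining(200, t_half):.1e} remaining after 200 days")
```

Even at the slow end of the range (22 days), less than 0.2% of freely degrading toxin would remain after 200 days, consistent with the observation that most of the toxin degrades rapidly while an adsorbed remainder persists.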
The toxic nature of Bt proteins has an adverse impact on many major crop pests, but ecological risk assessments have been conducted to ensure safety of beneficial non-target organisms that may come into contact with the toxins. Widespread concerns over toxicity in non-target lepidopterans, such as the monarch butterfly, have been disproved through proper exposure characterization, where it was determined that non-target organisms are not exposed to high enough amounts of the Bt toxins to have an adverse effect on the population. Soil-dwelling organisms, potentially exposed to Bt toxins through root exudates, are not impacted by the growth of Bt crops.
Multiple insects have developed resistance to B. thuringiensis. In November 2009, Monsanto scientists found the pink bollworm had become resistant to the first-generation Bt cotton in parts of Gujarat, India - that generation expresses one Bt gene, Cry1Ac. This was the first instance of Bt resistance confirmed by Monsanto anywhere in the world. Monsanto responded by introducing a second-generation cotton with multiple Bt proteins, which was rapidly adopted. Bollworm resistance to first-generation Bt cotton was also identified in Australia, China, Spain, and the United States. Additionally, resistance to Bt was documented in field populations of the diamondback moth in Hawaii, the continental US, and Asia. Studies in the cabbage looper have suggested that a mutation in the membrane transporter ABCC2 can confer resistance to Bt Cry1Ac.
Several studies have documented surges in "sucking pests" (which are not affected by Bt toxins) within a few years of adoption of Bt cotton. In China, the main problem has been with mirids, which have in some cases "completely eroded all benefits from Bt cotton cultivation". The increase in sucking pests depended on local temperature and rainfall conditions and increased in half the villages studied. The increase in insecticide use for the control of these secondary insects was far smaller than the reduction in total insecticide use due to Bt cotton adoption. Another study in five provinces in China found the reduction in pesticide use in Bt cotton cultivars is significantly lower than that reported in research elsewhere, consistent with the hypothesis suggested by recent studies that more pesticide sprayings are needed over time to control emerging secondary pests, such as aphids, spider mites, and lygus bugs.
Similar problems have been reported in India, with both mealy bugs and aphids, although a survey of small Indian farms between 2002 and 2008 concluded Bt cotton adoption has led to higher yields and lower pesticide use, decreasing over time.
The controversies surrounding Bt use are part of the wider set of genetically modified food controversies.
The most publicised problem associated with Bt crops is the claim, first made in a 1999 paper, that pollen from Bt maize could kill the monarch butterfly. The paper produced a public uproar and demonstrations against Bt maize; however, by 2001 several follow-up studies coordinated by the USDA had asserted that "the most common types of Bt maize pollen are not toxic to monarch larvae in concentrations the insects would encounter in the fields." Similarly, B. thuringiensis has been widely used for controlling Spodoptera littoralis larvae growth due to their detrimental pest activities in Africa and Southern Europe. However, S. littoralis showed resistance to many strains of B. thuringiensis and was only effectively controlled by a few strains.
A study published in Nature in 2001 reported that Bt-containing maize genes were found in maize in its center of origin, Oaxaca, Mexico. Another Nature paper published in 2002 claimed that the previous paper's conclusion was the result of an artifact caused by an inverse polymerase chain reaction and that "the evidence available is not sufficient to justify the publication of the original paper." A significant controversy arose over the paper and Nature's unprecedented notice.
A subsequent large-scale study in 2005 failed to find any evidence of genetic mixing in Oaxaca. A 2007 study found the "transgenic proteins expressed in maize were found in two (0.96%) of 208 samples from farmers' fields, located in two (8%) of 25 sampled communities." Mexico imports a substantial amount of maize from the U.S., and due to formal and informal seed networks among rural farmers, many potential routes are available for transgenic maize to enter into food and feed webs. One study found small-scale (about 1%) introduction of transgenic sequences in sampled fields in Mexico; it did not find evidence for or against this introduced genetic material being inherited by the next generation of plants. That study was immediately criticized, with the reviewer writing, "Genetically, any given plant should be either non-transgenic or transgenic, therefore for leaf tissue of a single transgenic plant, a GMO level close to 100% is expected. In their study, the authors chose to classify leaf samples as transgenic despite GMO levels of about 0.1%. We contend that results such as these are incorrectly interpreted as positive and are more likely to be indicative of contamination in the laboratory."
In 2007, a new phenomenon called colony collapse disorder (CCD) began affecting bee hives all over North America. Initial speculation on possible causes included new parasites, pesticide use, and the use of Bt transgenic crops. The Mid-Atlantic Apiculture Research and Extension Consortium found no evidence that pollen from Bt crops is adversely affecting bees. According to the USDA, "Genetically modified (GM) crops, most commonly Bt corn, have been offered up as the cause of CCD. But there is no correlation between where GM crops are planted and the pattern of CCD incidents. Also, GM crops have been widely planted since the late 1990s, but CCD did not appear until 2006. In addition, CCD has been reported in countries that do not allow GM crops to be planted, such as Switzerland. German researchers have noted in one study a possible correlation between exposure to Bt pollen and compromised immunity to Nosema." The actual cause of CCD was unknown in 2007, and scientists believed it might have multiple exacerbating causes.
Some isolates of B. thuringiensis produce a class of insecticidal small molecules called beta-exotoxin, the common name for which is thuringiensin. A consensus document produced by the OECD says: "Beta-exotoxins are known to be toxic to humans and almost all other forms of life and its presence is prohibited in B. thuringiensis microbial products". Thuringiensins are nucleoside analogues. They inhibit RNA polymerase activity, a process common to all forms of life, in rats and bacteria alike.
B. thuringiensis is also an opportunistic pathogen of animals other than insects, causing necrosis, pulmonary infection, and/or food poisoning. How common this is remains unknown, because such infections are routinely taken to be B. cereus infections and are rarely tested for the Cry and Cyt proteins that are the only factor distinguishing B. thuringiensis from B. cereus.
Bacillus thuringiensis is no longer the sole source of pesticidal proteins. The Bacterial Pesticidal Protein Resource Center (BPPRC) provides information on the rapidly expanding field of pesticidal proteins for academics, regulators, and research and development personnel. | [
{
"paragraph_id": 0,
"text": "Bacillus thuringiensis (or Bt) is a gram-positive, soil-dwelling bacterium, the most commonly used biological pesticide worldwide. B. thuringiensis also occurs naturally in the gut of caterpillars of various types of moths and butterflies, as well on leaf surfaces, aquatic environments, animal feces, insect-rich environments, and flour mills and grain-storage facilities. It has also been observed to parasitize other moths such as Cadra calidella—in laboratory experiments working with C. calidella, many of the moths were diseased due to this parasite.",
"title": ""
},
{
"paragraph_id": 1,
"text": "During sporulation, many Bt strains produce crystal proteins (proteinaceous inclusions), called delta endotoxins, that have insecticidal action. This has led to their use as insecticides, and more recently to genetically modified crops using Bt genes, such as Bt corn. Many crystal-producing Bt strains, though, do not have insecticidal properties. The subspecies israelensis is commonly used for control of mosquitoes and of fungus gnats.",
"title": ""
},
{
"paragraph_id": 2,
"text": "As a toxic mechanism, cry proteins bind to specific receptors on the membranes of mid-gut (epithelial) cells of the targeted pests, resulting in their rupture. Other organisms (including humans, other animals and non-targeted insects) that lack the appropriate receptors in their gut cannot be affected by the cry protein, and therefore are not affected by Bt.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In 1902, B. thuringiensis was first discovered in silkworms by Japanese sericultural engineer Ishiwatari Shigetane (石渡 繁胤). He named it B. sotto, using the Japanese word sottō (卒倒, 'collapse'), here referring to bacillary paralysis. In 1911, German microbiologist Ernst Berliner rediscovered it when he isolated it as the cause of a disease called Schlaffsucht in flour moth caterpillars in Thuringia (hence the specific name thuringiensis, \"Thuringian\"). B. sotto would later be reassigned as B. thuringiensis var. sotto.",
"title": "Taxonomy and discovery"
},
{
"paragraph_id": 4,
"text": "In 1976, Robert A. Zakharyan reported the presence of a plasmid in a strain of B. thuringiensis and suggested the plasmid's involvement in endospore and crystal formation. B. thuringiensis is closely related to B. cereus, a soil bacterium, and B. anthracis, the cause of anthrax; the three organisms differ mainly in their plasmids. Like other members of the genus, all three are capable of producing endospores.",
"title": "Taxonomy and discovery"
},
{
"paragraph_id": 5,
"text": "B. thuringiensis is placed in the Bacillus cereus group which is variously defined as: seven closely related species: B. cereus sensu stricto (B. cereus), B. anthracis, B. thuringiensis, B. mycoides, B. pseudomycoides, and B. cytotoxicus; or as six species in a Bacillus cereus sensu lato: B. weihenstephanensis, B. mycoides, B. pseudomycoides, B. cereus, B. thuringiensis, and B. anthracis. Within this grouping B.t. is more closely related to B.ce. It is more distantly related to B.w., B.m., B.p., and B.cy.",
"title": "Taxonomy and discovery"
},
{
"paragraph_id": 6,
"text": "There are several dozen recognized subspecies of B. thuringiensis. Subspecies commonly used as insecticides include B. thuringiensis subspecies kurstaki (Btk), subspecies israelensis (Bti) and subspecies aizawai (Bta). Some Bti lineages are clonal.",
"title": "Taxonomy and discovery"
},
{
"paragraph_id": 7,
"text": "Some strains are known to carry the same genes that produce enterotoxins in B. cereus, and so it is possible that the entire B. cereus sensu lato group may have the potential to be enteropathogens.",
"title": "Genetics"
},
{
"paragraph_id": 8,
"text": "The proteins that B. thuringiensis is most known for are encoded by cry genes. In most strains of B. thuringiensis, these genes are located on a plasmid (in other words cry is not a chromosomal gene in most strains). If these plasmids are lost it becomes indistinguishable from B. cereus as B. thuringiensis has no other species characteristics. Plasmid exchange has been observed both naturally and experimentally both within B.t. and between B.t. and two congeners, B. cereus and B. mycoides.",
"title": "Genetics"
},
{
"paragraph_id": 9,
"text": "plcR is an indispensable transcription regulator of most virulence factors, its absence greatly reducing virulence and toxicity. Some strains do naturally complete their life cycle with an inactivated plcR. It is half of a two-gene operon along with the heptapeptide papR. papR is part of quorum sensing in B. thuringiensis.",
"title": "Genetics"
},
{
"paragraph_id": 10,
"text": "Various strains including Btk ATCC 33679 carry plasmids belonging to the wider pXO1-like family. (The pXO1 family being a B. cereus-common family with members of ≈330kb length. They differ from pXO1 by replacement of the pXO1 pathogenicity island.) The insect parasite Btk HD73 carries a pXO2-like plasmid (pBT9727) lacking the 35kb pathogenicity island of pXO2 itself, and in fact having no identifiable virulence factors. (The pXO2 family does not have replacement of the pathogenicity island, instead simply lacking that part of pXO2.)",
"title": "Genetics"
},
{
"paragraph_id": 11,
"text": "The genomes of the B. cereus group may contain two types of introns, dubbed group I and group II. B.t strains have variously 0-5 group Is and 0-13 group IIs.",
"title": "Genetics"
},
{
"paragraph_id": 12,
"text": "There is still insufficient information to determine whether chromosome-plasmid coevolution to enable adaptation to particular environmental niches has occurred or is even possible.",
"title": "Genetics"
},
{
"paragraph_id": 13,
"text": "Common with B. cereus but so far not found elsewhere - including in other members of the species group - are the efflux pump BC3663, the N-acyl-L-amino-acid amidohydrolase BC3664, and the methyl-accepting chemotaxis protein BC5034.",
"title": "Genetics"
},
{
"paragraph_id": 14,
"text": "Has similar proteome diversity to close relative B. cereus.",
"title": "Proteome"
},
{
"paragraph_id": 15,
"text": "Into the BT Cotton protein is 'Crystal protein'",
"title": "Proteome"
},
{
"paragraph_id": 16,
"text": "Upon sporulation, B. thuringiensis forms crystals of two types of proteinaceous insecticidal delta endotoxins (δ-endotoxins) called crystal proteins or Cry proteins, which are encoded by cry genes, and Cyt proteins.",
"title": "Mechanism of insecticidal action"
},
{
"paragraph_id": 17,
"text": "Cry toxins have specific activities against insect species of the orders Lepidoptera (moths and butterflies), Diptera (flies and mosquitoes), Coleoptera (beetles) and Hymenoptera (wasps, bees, ants and sawflies), as well as against nematodes. Thus, B. thuringiensis serves as an important reservoir of Cry toxins for production of biological insecticides and insect-resistant genetically modified crops. When insects ingest toxin crystals, their alkaline digestive tracts denature the insoluble crystals, making them soluble and thus amenable to being cut with proteases found in the insect gut, which liberate the toxin from the crystal. The Cry toxin is then inserted into the insect gut cell membrane, paralyzing the digestive tract and forming a pore. The insect stops eating and starves to death; live Bt bacteria may also colonize the insect, which can contribute to death. Death occurs within a few hours or weeks. The midgut bacteria of susceptible larvae may be required for B. thuringiensis insecticidal activity.",
"title": "Mechanism of insecticidal action"
},
{
"paragraph_id": 18,
"text": "A B. thuringiensis small RNA called BtsR1 can silence the Cry5Ba toxin expression when outside the host by binding to the RBS site of the Cry5Ba toxin transcript to avoid nematode behavioral defenses. The silencing results in an increase of the bacteria ingestion by C. elegans. The expression of BtsR1 is then reduced after ingestion, resulting in Cry5Ba toxin production and host death.",
"title": "Mechanism of insecticidal action"
},
{
"paragraph_id": 19,
"text": "In 1996 another class of insecticidal proteins in Bt was discovered: the vegetative insecticidal proteins (Vip; InterPro: IPR022180). Vip proteins do not share sequence homology with Cry proteins, in general do not compete for the same receptors, and some kill different insects than do Cry proteins.",
"title": "Mechanism of insecticidal action"
},
{
"paragraph_id": 20,
"text": "In 2000, a novel subgroup of Cry protein, designated parasporin, was discovered from non-insecticidal B. thuringiensis isolates. The proteins of parasporin group are defined as B. thuringiensis and related bacterial parasporal proteins that are not hemolytic, but capable of preferentially killing cancer cells. As of January 2013, parasporins comprise six subfamilies: PS1 to PS6.",
"title": "Mechanism of insecticidal action"
},
{
"paragraph_id": 21,
"text": "Spores and crystalline insecticidal proteins produced by B. thuringiensis have been used to control insect pests since the 1920s and are often applied as liquid sprays. They are now used as specific insecticides under trade names such as DiPel and Thuricide. Because of their specificity, these pesticides are regarded as environmentally friendly, with little or no effect on humans, wildlife, pollinators, and most other beneficial insects, and are used in organic farming; however, the manuals for these products do contain many environmental and human health warnings, and a 2012 European regulatory peer review of five approved strains found, while data exist to support some claims of low toxicity to humans and the environment, the data are insufficient to justify many of these claims.",
"title": "Use of spores and proteins in pest control"
},
{
"paragraph_id": 22,
"text": "New strains of Bt are developed and introduced over time as insects develop resistance to Bt, or the desire occurs to force mutations to modify organism characteristics, or to use homologous recombinant genetic engineering to improve crystal size and increase pesticidal activity, or broaden the host range of Bt and obtain more effective formulations. Each new strain is given a unique number and registered with the U.S. EPA and allowances may be given for genetic modification depending on \"its parental strains, the proposed pesticide use pattern, and the manner and extent to which the organism has been genetically modified\". Formulations of Bt that are approved for organic farming in the US are listed at the website of the Organic Materials Review Institute (OMRI) and several university extension websites offer advice on how to use Bt spore or protein preparations in organic farming.",
"title": "Use of spores and proteins in pest control"
},
{
"paragraph_id": 23,
"text": "The Belgian company Plant Genetic Systems (now part of Bayer CropScience) was the first company (in 1985) to develop genetically modified crops (tobacco) with insect tolerance by expressing cry genes from B. thuringiensis; the resulting crops contain delta endotoxin. The Bt tobacco was never commercialized; tobacco plants are used to test genetic modifications since they are easy to manipulate genetically and are not part of the food supply.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 24,
"text": "In 1985, potato plants producing CRY 3A Bt toxin were approved safe by the Environmental Protection Agency, making it the first human-modified pesticide-producing crop to be approved in the US, though many plants produce pesticides naturally, including tobacco, coffee plants, cocoa, cotton and black walnut. This was the 'New Leaf' potato, and it was removed from the market in 2001 due to lack of interest.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 25,
"text": "In 1996, genetically modified maize producing Bt Cry protein was approved, which killed the European corn borer and related species; subsequent Bt genes were introduced that killed corn rootworm larvae.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 26,
"text": "The Bt genes engineered into crops and approved for release include, singly and stacked: Cry1A.105, CryIAb, CryIF, Cry2Ab, Cry3Bb1, Cry34Ab1, Cry35Ab1, mCry3A, and VIP, and the engineered crops include corn and cotton.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 27,
"text": "Corn genetically modified to produce VIP was first approved in the US in 2010.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 28,
"text": "In India, by 2014, more than seven million cotton farmers, occupying twenty-six million acres, had adopted Bt cotton.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 29,
"text": "Monsanto developed a soybean expressing Cry1Ac and the glyphosate-resistance gene for the Brazilian market, which completed the Brazilian regulatory process in 2010.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 30,
"text": "Bt aspen - specifically Populus hybrids - have been developed. They do suffer lesser leaf damage from insect herbivory. The results have not been entirely positive however: The intended result - better timber yield - was not achieved, with no growth advantage despite that reduction in herbivore damage; one of their major pests still preys upon the transgenic trees; and besides that, their leaf litter decomposes differently due to the transgenic toxins, resulting in alterations to the aquatic insect populations nearby.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 31,
"text": "The use of Bt toxins as plant-incorporated protectants prompted the need for extensive evaluation of their safety for use in foods and potential unintended impacts on the environment.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 32,
"text": "Concerns over the safety of consumption of genetically modified plant materials that contain Cry proteins have been addressed in extensive dietary risk assessment studies. As a toxic mechanism, cry proteins bind to specific receptors on the membranes of mid-gut (epithelial) cells of the targeted pests, resulting in their rupture. While the target pests are exposed to the toxins primarily through leaf and stalk material, Cry proteins are also expressed in other parts of the plant, including trace amounts in maize kernels which are ultimately consumed by both humans and animals. However, other organisms (including humans, other animals and non-targeted insects) that lack the appropriate receptors in their gut cannot be affected by the cry protein, and therefore are not affected by Bt.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 33,
"text": "Animal models have been used to assess human health risk from consumption of products containing Cry proteins. The United States Environmental Protection Agency recognizes mouse acute oral feeding studies where doses as high as 5,000 mg/kg body weight resulted in no observed adverse effects. Research on other known toxic proteins suggests that toxicity occurs at much lower doses, further suggesting that Bt toxins are not toxic to mammals. The results of toxicology studies are further strengthened by the lack of observed toxicity from decades of use of B. thuringiensis and its crystalline proteins as an insecticidal spray.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 34,
"text": "Introduction of a new protein raised concerns regarding the potential for allergic responses in sensitive individuals. Bioinformatic analysis of known allergens has indicated there is no concern of allergic reactions as a result of consumption of Bt toxins. Additionally, skin prick testing using purified Bt protein resulted in no detectable production of toxin-specific IgE antibodies, even in atopic patients.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 35,
"text": "Studies have been conducted to evaluate the fate of Bt toxins that are ingested in foods. Bt toxin proteins have been shown to digest within minutes of exposure to simulated gastric fluids. The instability of the proteins in digestive fluids is an additional indication that Cry proteins are unlikely to be allergenic, since most known food allergens resist degradation and are ultimately absorbed in the small intestine.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 36,
"text": "Ecological risk assessment aims to ensure there is no unintended impact on non-target organisms and no contamination of natural resources as a result of the use of a new substance, such as the use of Bt in genetically modified crops. The impact of Bt toxins on the environments where transgenic plants are grown has been evaluated to ensure no adverse effects outside of targeted crop pests.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 37,
"text": "Concerns over possible environmental impact from accumulation of Bt toxins from plant tissues, pollen dispersal, and direct secretion from roots have been investigated. Bt toxins may persist in soil for over 200 days, with half-lives between 1.6 and 22 days. Much of the toxin is initially degraded rapidly by microorganisms in the environment, while some is adsorbed by organic matter and persists longer. Some studies, in contrast, claim that the toxins do not persist in the soil. Bt toxins are less likely to accumulate in bodies of water, but pollen shed or soil runoff may deposit them in an aquatic ecosystem. Fish species are not susceptible to Bt toxins if exposed.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 38,
"text": "The toxic nature of Bt proteins has an adverse impact on many major crop pests, but ecological risk assessments have been conducted to ensure safety of beneficial non-target organisms that may come into contact with the toxins. Widespread concerns over toxicity in non-target lepidopterans, such as the monarch butterfly, have been disproved through proper exposure characterization, where it was determined that non-target organisms are not exposed to high enough amounts of the Bt toxins to have an adverse effect on the population. Soil-dwelling organisms, potentially exposed to Bt toxins through root exudates, are not impacted by the growth of Bt crops.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 39,
"text": "Multiple insects have developed a resistance to B. thuringiensis. In November 2009, Monsanto scientists found the pink bollworm had become resistant to the first-generation Bt cotton in parts of Gujarat, India - that generation expresses one Bt gene, Cry1Ac. This was the first instance of Bt resistance confirmed by Monsanto anywhere in the world. Monsanto responded by introducing a second-generation cotton with multiple Bt proteins, which was rapidly adopted. Bollworm resistance to first-generation Bt cotton was also identified in Australia, China, Spain, and the United States. Additionally, resistance to Bt was documented in field population of diamondback moth in Hawaii, the continental US, and Asia. Studies in the cabbage looper have suggested that a mutation in the membrane transporter ABCC2 can confer resistance to Bt Cry1Ac.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 40,
"text": "Several studies have documented surges in \"sucking pests\" (which are not affected by Bt toxins) within a few years of adoption of Bt cotton. In China, the main problem has been with mirids, which have in some cases \"completely eroded all benefits from Bt cotton cultivation\". The increase in sucking pests depended on local temperature and rainfall conditions and increased in half the villages studied. The increase in insecticide use for the control of these secondary insects was far smaller than the reduction in total insecticide use due to Bt cotton adoption. Another study in five provinces in China found the reduction in pesticide use in Bt cotton cultivars is significantly lower than that reported in research elsewhere, consistent with the hypothesis suggested by recent studies that more pesticide sprayings are needed over time to control emerging secondary pests, such as aphids, spider mites, and lygus bugs.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 41,
"text": "Similar problems have been reported in India, with both mealy bugs and aphids although a survey of small Indian farms between 2002 and 2008 concluded Bt cotton adoption has led to higher yields and lower pesticide use, decreasing over time.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 42,
"text": "The controversies surrounding Bt use are among the many genetically modified food controversies more widely.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 43,
"text": "The most publicised problem associated with Bt crops is the claim that pollen from Bt maize could kill the monarch butterfly. The paper produced a public uproar and demonstrations against Bt maize; however by 2001 several follow-up studies coordinated by the USDA had asserted that \"the most common types of Bt maize pollen are not toxic to monarch larvae in concentrations the insects would encounter in the fields.\" Similarly, B. thuringiensis has been widely used for controlling Spodoptera littoralis larvae growth due to their detrimental pest activities in Africa and Southern Europe. However, S. littoralis showed resistance to many strains of B. thuriginesis and were only effectively controlled by a few strains.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 44,
"text": "A study published in Nature in 2001 reported Bt-containing maize genes were found in maize in its center of origin, Oaxaca, Mexico. Another Nature paper published in 2002 claimed that the previous paper's conclusion was the result of an artifact caused by an inverse polymerase chain reaction and that \"the evidence available is not sufficient to justify the publication of the original paper.\" A significant controversy happened over the paper and Nature's unprecedented notice.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 45,
"text": "A subsequent large-scale study in 2005 failed to find any evidence of genetic mixing in Oaxaca. A 2007 study found the \"transgenic proteins expressed in maize were found in two (0.96%) of 208 samples from farmers' fields, located in two (8%) of 25 sampled communities.\" Mexico imports a substantial amount of maize from the U.S., and due to formal and informal seed networks among rural farmers, many potential routes are available for transgenic maize to enter into food and feed webs. One study found small-scale (about 1%) introduction of transgenic sequences in sampled fields in Mexico; it did not find evidence for or against this introduced genetic material being inherited by the next generation of plants. That study was immediately criticized, with the reviewer writing, \"Genetically, any given plant should be either non-transgenic or transgenic, therefore for leaf tissue of a single transgenic plant, a GMO level close to 100% is expected. In their study, the authors chose to classify leaf samples as transgenic despite GMO levels of about 0.1%. We contend that results such as these are incorrectly interpreted as positive and are more likely to be indicative of contamination in the laboratory.\"",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 46,
"text": "As of 2007, a new phenomenon called colony collapse disorder (CCD) began affecting bee hives all over North America. Initial speculation on possible causes included new parasites, pesticide use, and the use of Bt transgenic crops. The Mid-Atlantic Apiculture Research and Extension Consortium found no evidence that pollen from Bt crops is adversely affecting bees. According to the USDA, \"Genetically modified (GM) crops, most commonly Bt corn, have been offered up as the cause of CCD. But there is no correlation between where GM crops are planted and the pattern of CCD incidents. Also, GM crops have been widely planted since the late 1990s, but CCD did not appear until 2006. In addition, CCD has been reported in countries that do not allow GM crops to be planted, such as Switzerland. German researchers have noted in one study a possible correlation between exposure to Bt pollen and compromised immunity to Nosema.\" The actual cause of CCD was unknown in 2007, and scientists believe it may have multiple exacerbating causes.",
"title": "Use of Bt genes in genetic engineering of plants for pest control"
},
{
"paragraph_id": 47,
"text": "Some isolates of B. thuringiensis produce a class of insecticidal small molecules called beta-exotoxin, the common name for which is thuringiensin. A consensus document produced by the OECD says: \"Beta-exotoxins are known to be toxic to humans and almost all other forms of life and its presence is prohibited in B. thuringiensis microbial products\". Thuringiensins are nucleoside analogues. They inhibit RNA polymerase activity, a process common to all forms of life, in rats and bacteria alike.",
"title": "Beta-exotoxins"
},
{
"paragraph_id": 48,
"text": "Opportunistic pathogen of animals other than insects, causing necrosis, pulmonary infection, and/or food poisoning. How common this is, is unknown, because these are always taken to be B. cereus infections and are rarely tested for the Cry and Cyt proteins that are the only factor distinguishing B. thuringiensis from B. cereus.",
"title": "Other hosts"
},
{
"paragraph_id": 49,
"text": "Bacillus thuringiensis is no longer the sole source of pesticidal proteins. The Bacterial Pesticidal Protein Resource Center (BPPRC) provides information on the rapidly expanding field of pesticidal proteins for academics, regulators, and research and development personnel",
"title": "New nomenclature for pesticidal proteins (Bt toxins)"
}
] | Bacillus thuringiensis is a gram-positive, soil-dwelling bacterium, the most commonly used biological pesticide worldwide. B. thuringiensis also occurs naturally in the gut of caterpillars of various types of moths and butterflies, as well as on leaf surfaces, aquatic environments, animal feces, insect-rich environments, and flour mills and grain-storage facilities. It has also been observed to parasitize other moths such as Cadra calidella—in laboratory experiments working with C. calidella, many of the moths were diseased due to this parasite. During sporulation, many Bt strains produce crystal proteins, called delta endotoxins, that have insecticidal action. This has led to their use as insecticides, and more recently to genetically modified crops using Bt genes, such as Bt corn. Many crystal-producing Bt strains, though, do not have insecticidal properties. The subspecies israelensis is commonly used for control of mosquitoes and of fungus gnats. As a toxic mechanism, cry proteins bind to specific receptors on the membranes of mid-gut (epithelial) cells of the targeted pests, resulting in their rupture. Other organisms that lack the appropriate receptors in their gut cannot be affected by the cry protein, and therefore are not affected by Bt. | 2002-02-25T15:51:15Z | 2023-12-12T10:59:16Z | [
"Template:Cite news",
"Template:Refbegin",
"Template:Nihongo",
"Template:Clear",
"Template:Cite book",
"Template:Cite magazine",
"Template:Cite conference",
"Template:Lang",
"Template:Clarify",
"Template:Authority control",
"Template:Small",
"Template:Short description",
"Template:Rp",
"Template:Refend",
"Template:Cite journal",
"Template:Reflist",
"Template:Speciesbox",
"Template:Toclimit",
"Template:Citation needed",
"Template:InterPro",
"Template:'",
"Template:Page needed",
"Template:Cite thesis",
"Template:Visible anchor",
"Template:Taxonbar",
"Template:Cite web",
"Template:Cite patent"
] | https://en.wikipedia.org/wiki/Bacillus_thuringiensis |
4,185 | Bacteriophage | A bacteriophage (/bækˈtɪərioʊfeɪdʒ/), also known informally as a phage (/ˈfeɪdʒ/), is a virus that infects and replicates within bacteria and archaea. The term was derived from "bacteria" and the Greek φαγεῖν (phagein), meaning "to devour". Bacteriophages are composed of proteins that encapsulate a DNA or RNA genome, and may have structures that are either simple or elaborate. Their genomes may encode as few as four genes (e.g. MS2) and as many as hundreds of genes. Phages replicate within the bacterium following the injection of their genome into its cytoplasm.
Bacteriophages are among the most common and diverse entities in the biosphere. Bacteriophages are ubiquitous viruses, found wherever bacteria exist. It is estimated there are more than 10^31 bacteriophages on the planet, more than every other organism on Earth, including bacteria, combined. Viruses are the most abundant biological entity in the water column of the world's oceans, and the second largest component of biomass after prokaryotes, where up to 9×10^8 virions per millilitre have been found in microbial mats at the surface, and up to 70% of marine bacteria may be infected by phages.
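The global estimate quoted above can be sanity-checked with back-of-envelope arithmetic. Both inputs below are assumptions typical of the marine-virology literature rather than figures from this article (roughly 1.3×10^21 litres of seawater on Earth, and on the order of 10^7 virions per millilitre of ordinary surface seawater, far below the microbial-mat peak):

```python
# Back-of-envelope check of the >10^31 global phage estimate.
ocean_volume_mL = 1.3e21 * 1_000  # ~1.3e21 L of seawater, in mL (assumed)
virions_per_mL = 1e7              # typical open-ocean abundance (assumed)

print(f"~{ocean_volume_mL * virions_per_mL:.1e} virions in the oceans alone")  # ~1.3e+31
```

The oceans alone account for roughly 10^31 virions, so with soils and other habitats added, an estimate of more than 10^31 bacteriophages in total is plausible.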
Phages have been used since the late 20th century as an alternative to antibiotics in the former Soviet Union and Central Europe, as well as in France. They are seen as a possible therapy against multi-drug-resistant strains of many bacteria (see phage therapy).
Phages are known to interact with the immune system both indirectly via bacterial expression of phage-encoded proteins and directly by influencing innate immunity and bacterial clearance. Phage–host interactions are becoming increasingly important areas of research.
Bacteriophages occur abundantly in the biosphere, with different genomes and lifestyles. Phages are classified by the International Committee on Taxonomy of Viruses (ICTV) according to morphology and nucleic acid.
It has been suggested that members of Picobirnaviridae infect bacteria, but not mammals.
There are also many unassigned genera of the class Leviviricetes: Chimpavirus, Hohglivirus, Mahrahvirus, Meihzavirus, Nicedsevirus, Sculuvirus, Skrubnovirus, Tetipavirus and Winunavirus, all containing linear ssRNA genomes, as well as the unassigned genus Lilyvirus of the order Caudovirales, which contains a linear dsDNA genome.
In 1896, Ernest Hanbury Hankin reported that something in the waters of the Ganges and Yamuna rivers in India had a marked antibacterial action against cholera and that it could pass through a very fine porcelain filter. In 1915, British bacteriologist Frederick Twort, superintendent of the Brown Institution of London, discovered a small agent that infected and killed bacteria. He believed the agent must be one of the following: a stage in the life cycle of the bacteria, an enzyme produced by the bacteria themselves, or a virus that grew on and destroyed the bacteria.
Twort's research was interrupted by the onset of World War I, as well as a shortage of funding and the discovery of antibiotics.
Independently, French-Canadian microbiologist Félix d'Hérelle, working at the Pasteur Institute in Paris, announced on 3 September 1917 that he had discovered "an invisible, antagonistic microbe of the dysentery bacillus". For d'Hérelle, there was no question as to the nature of his discovery: "In a flash I had understood: what caused my clear spots was in fact an invisible microbe... a virus parasitic on bacteria." D'Hérelle called the virus a bacteriophage, a bacteria-eater (from the Greek phagein, meaning "to devour"). He also recorded a dramatic account of a man suffering from dysentery who was restored to good health by the bacteriophages. It was d'Hérelle who conducted much research into bacteriophages and introduced the concept of phage therapy. In 1919, in Paris, France, d'Hérelle conducted the first clinical application of a bacteriophage, with the first reported use in the United States being in 1922.
In 1969, Max Delbrück, Alfred Hershey, and Salvador Luria were awarded the Nobel Prize in Physiology or Medicine for their discoveries of the replication of viruses and their genetic structure. Specifically, the work of Hershey, as contributor to the Hershey–Chase experiment in 1952, provided convincing evidence that DNA, not protein, was the genetic material of life. Delbrück and Luria carried out the Luria–Delbrück experiment, which demonstrated statistically that mutations in bacteria occur randomly and thus follow Darwinian rather than Lamarckian principles.
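The statistical logic of the Luria–Delbrück experiment lends itself to a short simulation. The sketch below is a toy Monte Carlo, with all parameters (500 cultures, 25 generations, a mutation rate of 10^-7 per cell division) chosen for illustration rather than taken from the 1943 experiment. If mutations arise randomly before selection, a few "jackpot" cultures inherit early mutants, so the variance of resistant counts across cultures greatly exceeds the mean; mutations induced by the selective agent itself would instead give Poisson-like counts with variance roughly equal to the mean:

```python
import numpy as np

# Toy Luria-Delbruck fluctuation test (illustrative parameters, see above).
rng = np.random.default_rng(seed=0)
CULTURES, GENERATIONS, MU = 500, 25, 1e-7

resistant = np.zeros(CULTURES, dtype=np.int64)
normal = np.ones(CULTURES, dtype=np.int64)
for _ in range(GENERATIONS):
    new_mutants = rng.binomial(normal, MU)   # mutations in this round of divisions
    resistant = 2 * resistant + new_mutants  # earlier mutants keep doubling
    normal = 2 * normal - new_mutants

print(f"mean resistant cells per culture: {resistant.mean():.1f}")
print(f"variance: {resistant.var():.1f}  (far above the mean: the jackpot signature)")
```

Running this shows a variance orders of magnitude above the mean, which is the fluctuation-test signature that supported random (Darwinian) mutation.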
Phages were discovered to be antibacterial agents and were used in the former Soviet Republic of Georgia (pioneered there by Giorgi Eliava with help from the co-discoverer of bacteriophages, Félix d'Hérelle) during the 1920s and 1930s for treating bacterial infections. They had widespread use, including treatment of soldiers in the Red Army. However, they were abandoned for general use in the West for several reasons, chief among them the advent and wide marketing of antibiotics.
The use of phages has continued since the end of the Cold War in Russia, Georgia, and elsewhere in Central and Eastern Europe. The first regulated, randomized, double-blind clinical trial was reported in the Journal of Wound Care in June 2009, which evaluated the safety and efficacy of a bacteriophage cocktail to treat infected venous ulcers of the leg in human patients. The FDA approved the study as a Phase I clinical trial. The study's results demonstrated the safety of therapeutic application of bacteriophages, but did not show efficacy. The authors explained that the use of certain chemicals that are part of standard wound care (e.g. lactoferrin or silver) may have interfered with bacteriophage viability. Shortly after that, another controlled clinical trial in Western Europe (treatment of ear infections caused by Pseudomonas aeruginosa) was reported in the journal Clinical Otolaryngology in August 2009. The study concludes that bacteriophage preparations were safe and effective for treatment of chronic ear infections in humans. Additionally, there have been numerous animal and other experimental clinical trials evaluating the efficacy of bacteriophages for various diseases, such as infected burns and wounds, and cystic fibrosis-associated lung infections, among others. On the other hand, phages of Inoviridae have been shown to complicate biofilms involved in pneumonia and cystic fibrosis and to shelter the bacteria from drugs meant to eradicate disease, thus promoting persistent infection.
Meanwhile, bacteriophage researchers have been developing engineered viruses to overcome antibiotic resistance, including engineering the phage genes responsible for encoding enzymes that degrade the biofilm matrix, phage structural proteins, and the enzymes responsible for lysis of the bacterial cell wall. Results have shown that small, short-tailed T4 phages can be helpful in detecting E. coli in the human body.
Therapeutic efficacy of a phage cocktail was evaluated in a mouse model with nasal infection of multidrug-resistant (MDR) A. baumannii. Mice treated with the phage cocktail showed a 2.3-fold higher survival rate than untreated mice at seven days post-infection. In 2017, a patient with a pancreas compromised by MDR A. baumannii was put on several antibiotics; despite this, the patient's health continued to deteriorate during a four-month period. Without effective antibiotics, the patient was subjected to phage therapy using a phage cocktail containing nine different phages that had been demonstrated to be effective against MDR A. baumannii. Once on this therapy, the patient's downward clinical trajectory reversed, and the patient returned to health.
D'Herelle "quickly learned that bacteriophages are found wherever bacteria thrive: in sewers, in rivers that catch waste runoff from pipes, and in the stools of convalescent patients." This includes rivers traditionally thought to have healing powers, including India's Ganges River.
Food industry – Phages have increasingly been used to make food products safer and to forestall spoilage bacteria. Since 2006, the United States Food and Drug Administration (FDA) and United States Department of Agriculture (USDA) have approved several bacteriophage products. LMP-102 (Intralytix) was approved for treating ready-to-eat (RTE) poultry and meat products. In that same year, the FDA approved LISTEX (developed and produced by Micreos), which uses bacteriophages on cheese to kill Listeria monocytogenes bacteria, giving them generally recognized as safe (GRAS) status. In July 2007, the same bacteriophages were approved for use on all food products. In 2011, the USDA confirmed that LISTEX is a clean-label processing aid. Research in the field of food safety is continuing to see whether lytic phages are a viable option to control other food-borne pathogens in various food products.
Diagnostics – In 2011, the FDA cleared the first bacteriophage-based product for in vitro diagnostic use. The KeyPath MRSA/MSSA Blood Culture Test uses a cocktail of bacteriophage to detect Staphylococcus aureus in positive blood cultures and determine methicillin resistance or susceptibility. The test returns results in about five hours, compared to two to three days for standard microbial identification and susceptibility test methods. It was the first accelerated antibiotic-susceptibility test approved by the FDA.
Counteracting bioweapons and toxins – Government agencies in the West have for several years been looking to Georgia and the former Soviet Union for help with exploiting phages for counteracting bioweapons and toxins, such as anthrax and botulism. Developments are continuing among research groups in the U.S. Other uses include spray application in horticulture for protecting plants and vegetable produce from decay and the spread of bacterial disease. Other applications for bacteriophages are as biocides for environmental surfaces, e.g., in hospitals, and as preventative treatments for catheters and medical devices before use in clinical settings. The technology for phages to be applied to dry surfaces, e.g., uniforms, curtains, or even sutures for surgery, now exists. Clinical trials reported in Clinical Otolaryngology show success in the veterinary treatment of pet dogs with otitis.
The SEPTIC bacterium sensing and identification method uses ion emission and its dynamics during phage infection, offering high specificity and speed of detection.
Phage display is a different use of phages involving a library of phages with a variable peptide linked to a surface protein. Each phage genome encodes the variant of the protein displayed on its surface (hence the name), providing a link between the peptide variant and its encoding gene. Variant phages from the library may be selected through their binding affinity to an immobilized molecule (e.g., botulism toxin) to neutralize it. The bound, selected phages can be multiplied by reinfecting a susceptible bacterial strain, allowing researchers to retrieve the peptides encoded in them for further study.
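The selection loop behind this technique, often called biopanning, can be sketched in a few lines. The Python toy model below is an illustration under invented assumptions (the library size, affinities, and retention rule are all made up, and real protocols involve washing stringency, elution, and amplification biases): each round retains a phage with probability equal to its binding affinity and then reamplifies the survivors.

import random

random.seed(1)
# Toy library: each "phage" is reduced to a single binding affinity in [0, 1].
library = [random.random() for _ in range(10_000)]

def biopanning_round(phages, amplified_size=1000):
    # Selection: a phage stays bound to the immobilized target with
    # probability equal to its affinity; the rest are washed away.
    bound = [p for p in phages if random.random() < p]
    # Amplification: reinfect a susceptible strain and regrow the survivors.
    return random.choices(bound, k=amplified_size)

pool = library
for rnd in range(1, 4):
    pool = biopanning_round(pool)
    print(f"round {rnd}: mean affinity = {sum(pool) / len(pool):.2f}")

After three rounds the mean affinity of the pool climbs from about 0.5 toward 1, showing how repeated selection and amplification enrich the strongest binders.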
Antimicrobial drug discovery – Phage proteins often have antimicrobial activity and may serve as leads for peptidomimetics, i.e. drugs that mimic peptides. Phage-ligand technology makes use of phage proteins for various applications, such as binding of bacteria and bacterial components (e.g. endotoxin) and lysis of bacteria.
Basic research – Bacteriophages are important model organisms for studying principles of evolution and ecology.
Bacteriophages present in the environment can cause cheese fermentation to fail. In order to avoid this, mixed-strain starter cultures and culture-rotation regimes can be used. Genetic engineering of culture microbes – especially Lactococcus lactis and Streptococcus thermophilus – has been studied for genetic analysis and modification to improve phage resistance. This has especially focused on plasmid and recombinant chromosomal modifications.
Some research has focused on the potential of bacteriophages as antimicrobials against foodborne pathogens and biofilm formation within the dairy industry. As the spread of antibiotic resistance is a main concern within the dairy industry, phages can serve as a promising alternative.
The life cycle of bacteriophages tends to be either a lytic cycle or a lysogenic cycle. In addition, some phages display pseudolysogenic behaviors.
With lytic phages such as the T4 phage, bacterial cells are broken open (lysed) and destroyed after immediate replication of the virion. As soon as the cell is destroyed, the phage progeny can find new hosts to infect. Lytic phages are more suitable for phage therapy. Some lytic phages undergo a phenomenon known as lysis inhibition, where completed phage progeny will not immediately lyse out of the cell if extracellular phage concentrations are high. This mechanism is not identical to that of the temperate phage going dormant and usually is temporary.
In contrast, the lysogenic cycle does not result in immediate lysing of the host cell. Those phages able to undergo lysogeny are known as temperate phages. Their viral genome will integrate with host DNA and replicate along with it, relatively harmlessly, or may even become established as a plasmid. The virus remains dormant until host conditions deteriorate, perhaps due to depletion of nutrients; then the endogenous phages (known as prophages) become active. At this point they initiate the reproductive cycle, resulting in lysis of the host cell. As the lysogenic cycle allows the host cell to continue to survive and reproduce, the virus is replicated in all offspring of the cell. An example of a bacteriophage known to follow both the lysogenic cycle and the lytic cycle is the phage lambda of E. coli.
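The choice between the two cycles can be summarized as a small state machine. The Python sketch below is a deliberately crude model, and the lysogeny probability, burst size, and nutrient threshold are invented for illustration rather than measured values for lambda or any other phage.

import random

BURST_SIZE = 100  # illustrative number of progeny virions released per lysis

def infect(cell, p_lysogeny=0.3):
    # A temperate phage either lyses the host at once or integrates.
    if random.random() < p_lysogeny:
        cell["prophage"] = True   # lysogeny: dormant, copied along with the host genome
        return 0                  # no progeny released yet
    return BURST_SIZE             # lytic cycle: host destroyed immediately

def starvation_check(cell, nutrients):
    # Induction: a prophage switches to the lytic cycle when conditions worsen.
    if cell.get("prophage") and nutrients < 0.1:
        cell["prophage"] = False
        return BURST_SIZE
    return 0

cell = {"prophage": False}
print("progeny on infection:", infect(cell))
print("progeny after starvation:", starvation_check(cell, nutrients=0.05))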
Sometimes prophages may provide benefits to the host bacterium while they are dormant by adding new functions to the bacterial genome, in a phenomenon called lysogenic conversion. Examples are the conversion of harmless strains of Corynebacterium diphtheriae or Vibrio cholerae by bacteriophages to highly virulent ones that cause diphtheria or cholera, respectively. Strategies to combat certain bacterial infections by targeting these toxin-encoding prophages have been proposed.
Bacterial cells are protected by a cell wall and often by a capsular layer of polysaccharides; these capsular polysaccharides are important virulence factors that protect bacterial cells against both host immune defenses and antibiotics. To enter a host cell, bacteriophages bind to specific receptors on the surface of bacteria, including lipopolysaccharides, teichoic acids, proteins, or even flagella. This specificity means a bacteriophage can infect only bacteria bearing receptors to which it can bind, which in turn determines the phage's host range. Polysaccharide-degrading enzymes are virion-associated proteins that enzymatically degrade the capsular outer layer of their hosts at the initial step of a tightly programmed phage infection process. Host growth conditions also influence the ability of the phage to attach to and invade its host. As phage virions do not move independently, they must rely on random encounters with the correct receptors when in solution, such as blood, lymphatic circulation, irrigation, soil water, etc.
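Because attachment depends on such chance collisions, adsorption is commonly modeled as a mass-action process, dP/dt = -k·B·P, where P is the free-phage concentration, B the bacterial concentration, and k an adsorption rate constant. The short worked example below uses illustrative round numbers; k ≈ 10^-9 ml/min is a frequently quoted order of magnitude, not a measured value for any particular phage-host pair.

import math

k = 1e-9   # adsorption rate constant, ml/min (illustrative order of magnitude)
B = 1e8    # bacterial concentration, cells/ml (assumed constant and in excess)

def fraction_free(t_minutes):
    # With B constant, dP/dt = -k*B*P integrates to P(t) = P(0) * exp(-k*B*t).
    return math.exp(-k * B * t_minutes)

for t in (1, 5, 10, 30):
    print(f"after {t:2d} min, {fraction_free(t):.1%} of phages remain unadsorbed")

At these densities roughly half of the free phage is adsorbed within about seven minutes, which is why adsorption assays are read out on a timescale of minutes.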
Myovirus bacteriophages use a hypodermic syringe-like motion to inject their genetic material into the cell. After contacting the appropriate receptor, the tail fibers flex to bring the base plate closer to the surface of the cell. This is known as reversible binding. Once attached completely, irreversible binding is initiated and the tail contracts, possibly with the help of ATP present in the tail, injecting genetic material through the bacterial membrane. The injection is accomplished through a bending motion in the shaft: the tail swings to the side, contracts closer to the cell, and pushes back up. Podoviruses lack an elongated tail sheath like that of a myovirus, so instead they use their small, tooth-like tail fibers to enzymatically degrade a portion of the cell membrane before inserting their genetic material.
Within minutes, bacterial ribosomes start translating viral mRNA into protein. For RNA-based phages, RNA replicase is synthesized early in the process. Proteins modify the bacterial RNA polymerase so it preferentially transcribes viral mRNA. The host's normal synthesis of proteins and nucleic acids is disrupted, and it is forced to manufacture viral products instead. These products go on to become part of new virions within the cell, helper proteins that contribute to the assemblage of new virions, or proteins involved in cell lysis. In 1972, Walter Fiers (University of Ghent, Belgium) was the first to establish the complete nucleotide sequence of a gene and in 1976, of the viral genome of bacteriophage MS2. Some dsDNA bacteriophages encode ribosomal proteins, which are thought to modulate protein translation during phage infection.
In the case of the T4 phage, the construction of new virus particles involves the assistance of helper proteins that act catalytically during phage morphogenesis. The base plates are assembled first, with the tails being built upon them afterward. The head capsids, constructed separately, will spontaneously assemble with the tails. During assembly of the phage T4 virion, the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis. The DNA is packed efficiently within the heads. The whole process takes about 15 minutes.
Phages may be released via cell lysis, by extrusion, or, in a few cases, by budding. Lysis by tailed phages is achieved by an enzyme called endolysin, which attacks and breaks down the cell wall peptidoglycan. An altogether different phage type, the filamentous phage, makes the host cell continually secrete new virus particles. Released virions are described as free and, unless defective, are capable of infecting a new bacterium. Budding is associated with certain Mycoplasma phages. In contrast to virion release, phages displaying a lysogenic cycle do not kill the host and instead become long-term residents as prophages.
Research in 2017 revealed that the bacteriophage Φ3T makes a short viral protein that signals other bacteriophages to lie dormant instead of killing the host bacterium. Arbitrium is the name given to this protein by the researchers who discovered it.
Given the millions of different phages in the environment, phage genomes come in a variety of forms and sizes. RNA phages such as MS2 have the smallest genomes, with only a few kilobases. However, some DNA phages such as T4 may have large genomes with hundreds of genes; the size and shape of the capsid varies along with the size of the genome. The largest bacteriophage genomes reach a size of 735 kb.
Bacteriophage genomes can be highly mosaic, i.e. the genomes of many phage species appear to be composed of numerous individual modules. These modules may be found in other phage species in different arrangements. Mycobacteriophages, bacteriophages with mycobacterial hosts, have provided excellent examples of this mosaicism. In these mycobacteriophages, genetic assortment may be the result of repeated instances of site-specific recombination and illegitimate recombination (the result of phage genome acquisition of bacterial host genetic sequences). Evolutionary mechanisms shaping the genomes of bacterial viruses vary between different families and depend upon the type of the nucleic acid, characteristics of the virion structure, as well as the mode of the viral life cycle.
Some marine roseobacter phages contain deoxyuridine (dU) instead of deoxythymidine (dT) in their genomic DNA. There is some evidence that this unusual component is a mechanism to evade bacterial defense mechanisms such as restriction endonucleases and CRISPR/Cas systems which evolved to recognize and cleave sequences within invading phages, thereby inactivating them. Other phages have long been known to use unusual nucleotides. In 1963, Takahashi and Marmur identified a Bacillus phage that has dU substituting dT in its genome, and in 1977, Kirnos et al. identified a cyanophage containing 2-aminoadenine (Z) instead of adenine (A).
The field of systems biology investigates the complex networks of interactions within an organism, usually using computational tools and modeling. For example, a phage genome that enters into a bacterial host cell may express hundreds of phage proteins which will affect the expression of numerous host genes or the host's metabolism. All of these complex interactions can be described and simulated in computer models.
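As a minimal example of such a model, the following sketch integrates a predator-prey-style system for a lytic phage and its host. All parameters are illustrative assumptions, and published models add latent periods, nutrient limitation, and resistant subpopulations: here bacteria grow logistically, infection follows mass action, and each lysis releases a burst of new virions.

# Minimal lytic phage-bacteria model, integrated with forward Euler (demo only).
r, K = 0.7, 1e9         # bacterial growth rate (1/h) and carrying capacity (/ml)
k = 1e-9                # adsorption rate constant (ml/h)
burst, decay = 50, 0.1  # burst size; free-phage decay rate (1/h)

def step(B, P, dt=0.005):
    infections = k * B * P                            # mass-action infection rate
    dB = r * B * (1 - B / K) - infections             # logistic growth minus lysis
    dP = burst * infections - infections - decay * P  # bursts minus adsorbed and decayed phage
    return B + dB * dt, P + dP * dt

B, P = 1e6, 1e4         # starting densities per ml
for hour in range(25):
    if hour % 6 == 0:
        print(f"t={hour:2d} h  bacteria={B:.2e}/ml  phage={P:.2e}/ml")
    for _ in range(200):                              # 200 Euler steps of 0.005 h each
        B, P = step(B, P)

Even this toy system reproduces the characteristic boom-and-bust dynamics: the bacterial population grows until phage amplification overtakes it, then crashes as free virions accumulate.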
For instance, infection of Pseudomonas aeruginosa by the temperate phage PaP3 changed the expression of 38% (2160/5633) of its host's genes. Many of these effects are probably indirect, hence the challenge becomes to identify the direct interactions among bacteria and phage.
Several attempts have been made to map protein–protein interactions among phage and their host. For instance, bacteriophage lambda was found to interact with its host, E. coli, through dozens of interactions. Again, the significance of many of these interactions remains unclear, but these studies suggest that there most likely are several key interactions and many indirect interactions whose role remains uncharacterized.
Bacteriophages are a major threat to bacteria, and prokaryotes have evolved numerous mechanisms to block infection or to block the replication of bacteriophages within host cells. The CRISPR system is one such mechanism, as are retrons and the anti-toxin systems they encode. The Thoeris defense system is known to deploy a unique strategy for bacterial antiphage resistance via NAD+ degradation.
Temperate phages are bacteriophages that integrate their genetic material into the host as extrachromosomal episomes or as a prophage during a lysogenic cycle. Some temperate phages can confer fitness advantages to their host in numerous ways, including giving antibiotic resistance through the transfer or introduction of antibiotic resistance genes (ARGs), protecting hosts from phagocytosis, protecting hosts from secondary infection through superinfection exclusion, enhancing host pathogenicity, or enhancing bacterial metabolism or growth. Bacteriophage–host symbiosis may benefit bacteria by providing selective advantages while passively replicating the phage genome.
Metagenomics has allowed the detection of bacteriophages in water, which was not possible previously.
Also, bacteriophages have been used in hydrological tracing and modelling in river systems, especially where surface water and groundwater interactions occur. The use of phages is preferred to the more conventional dye marker because they are significantly less absorbed when passing through groundwater and they are readily detected at very low concentrations. Non-polluted water may contain approximately 2×10^8 bacteriophages per ml.
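Detection at such low concentrations typically relies on the plaque assay, in which each infective virion gives rise to one visible plaque on a bacterial lawn, so the original titer follows from the plaque count, the dilution plated, and the plated volume. A minimal worked example, with counts invented for illustration:

def titer_pfu_per_ml(plaque_count, dilution, volume_ml):
    # titer = plaque count / (dilution factor x volume plated)
    return plaque_count / (dilution * volume_ml)

# Hypothetical assay: 180 plaques from plating 0.1 ml of a 10^-6 dilution
print(f"{titer_pfu_per_ml(180, 1e-6, 0.1):.1e} PFU/ml")   # prints 1.8e+09 PFU/ml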
Bacteriophages are thought to contribute extensively to horizontal gene transfer in natural environments, principally via transduction, but also via transformation. Metagenomics-based studies also have revealed that viromes from a variety of environments harbor antibiotic-resistance genes, including those that could confer multidrug resistance.
Although phages do not infect humans, there are countless phage particles in the human body, given our extensive microbiome. Our phage population has been called the human phageome, including the "healthy gut phageome" (HGP) and the "diseased human phageome" (DHP). The active phageome of a healthy human (i.e., actively replicating as opposed to nonreplicating, integrated prophage) has been estimated to comprise dozens to thousands of different viruses. There is evidence that bacteriophages and bacteria interact in the human gut microbiome both antagonistically and beneficially.
Preliminary studies have indicated that common bacteriophages are found in 62% of healthy individuals on average, while their prevalence was reduced by 42% and 54% on average in patients with ulcerative colitis (UC) and Crohn's disease (CD). Abundance of phages may also decline in the elderly.
The most common phages in the human intestine, found worldwide, are crAssphages. CrAssphages are transmitted from mother to child soon after birth, and there is some evidence suggesting that they may be transmitted locally. Each person develops their own unique crAssphage clusters. CrAss-like phages also may be present in primates besides humans.
Among the countless phages, only a few have been studied in detail, including some historically important phages that were discovered in the early days of microbial genetics. These, especially the T-phages, helped researchers discover important principles of gene structure and function. | [
] | A bacteriophage, also known informally as a phage, is a virus that infects and replicates within bacteria and archaea. The term was derived from "bacteria" and the Greek φαγεῖν (phagein), meaning "to devour". Bacteriophages are composed of proteins that encapsulate a DNA or RNA genome, and may have structures that are either simple or elaborate. Their genomes may encode as few as four genes (e.g. MS2) and as many as hundreds of genes. Phages replicate within the bacterium following the injection of their genome into its cytoplasm. Bacteriophages are among the most common and diverse entities in the biosphere. Bacteriophages are ubiquitous viruses, found wherever bacteria exist. It is estimated there are more than 10^31 bacteriophages on the planet, more than every other organism on Earth, including bacteria, combined. Viruses are the most abundant biological entity in the water column of the world's oceans, and the second largest component of biomass after prokaryotes, where up to 9×10^8 virions per millilitre have been found in microbial mats at the surface, and up to 70% of marine bacteria may be infected by phages. Phages have been used since the late 20th century as an alternative to antibiotics in the former Soviet Union and Central Europe, as well as in France. They are seen as a possible therapy against multi-drug-resistant strains of many bacteria (see phage therapy). Phages are known to interact with the immune system both indirectly via bacterial expression of phage-encoded proteins and directly by influencing innate immunity and bacterial clearance. Phage–host interactions are becoming increasingly important areas of research. | 2001-09-14T20:48:51Z | 2023-12-31T13:47:25Z | [
"Template:Refend",
"Template:Commons category",
"Template:Short description",
"Template:IPAc-en",
"Template:Main",
"Template:Portal",
"Template:Cite book",
"Template:Refbegin",
"Template:YouTube",
"Template:Redirect",
"Template:Citation needed",
"Template:Div col end",
"Template:Cite news",
"Template:Citation",
"Template:Cite journal",
"Template:Cite web",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Lang",
"Template:Citation needed span",
"Template:Div col",
"Template:Reflist",
"Template:Cite magazine",
"Template:Wikiquote",
"Template:Modelling ecosystems",
"Template:Virus topics"
] | https://en.wikipedia.org/wiki/Bacteriophage |
4,187 | Bactericide | A bactericide or bacteriocide, sometimes abbreviated Bcidal, is a substance which kills bacteria. Bactericides are disinfectants, antiseptics, or antibiotics. However, material surfaces can also have bactericidal properties based solely on their physical surface structure, as for example biomaterials like insect wings.
The most commonly used disinfectants are those applying
As antiseptics (i.e., germicide agents that can be used on the human or animal body, skin, mucous membranes, wounds and the like), few of the above-mentioned disinfectants can be used, under proper conditions (mainly concentration, pH, temperature and toxicity toward humans and animals). Among them, some important ones are
Others are generally not applicable as safe antiseptics, either because of their corrosive or toxic nature.
Bactericidal antibiotics kill bacteria; bacteriostatic antibiotics slow their growth or reproduction.
Bactericidal antibiotics that inhibit cell wall synthesis include the beta-lactam antibiotics (penicillin derivatives (penams), cephalosporins (cephems), monobactams, and carbapenems) and vancomycin.
Also bactericidal are daptomycin, fluoroquinolones, metronidazole, nitrofurantoin, co-trimoxazole, and telithromycin.
Aminoglycosidic antibiotics are usually considered bactericidal, although they may be bacteriostatic with some organisms.
As of 2004, the distinction between bactericidal and bacteriostatic agents appeared to be clear according to the basic/clinical definition, but this only applies under strict laboratory conditions, and it is important to distinguish microbiological and clinical definitions. The distinction is more arbitrary when agents are categorized in clinical situations. The supposed superiority of bactericidal agents over bacteriostatic agents is of little relevance when treating the vast majority of infections with gram-positive bacteria, particularly in patients with uncomplicated infections and noncompromised immune systems. Bacteriostatic agents have been used effectively for treatments that are considered to require bactericidal activity. Furthermore, some broad classes of antibacterial agents considered bacteriostatic can exhibit bactericidal activity against some bacteria on the basis of in vitro determination of MBC/MIC values. At high concentrations, bacteriostatic agents are often bactericidal against some susceptible organisms. The ultimate guide to treatment of any infection must be clinical outcome.
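The in vitro call reduces to a ratio of two measured concentrations: the minimum inhibitory concentration (MIC), the lowest concentration preventing visible growth, and the minimum bactericidal concentration (MBC), the lowest concentration killing at least 99.9% of the inoculum. An agent is conventionally labelled bactericidal for a given organism when the MBC lies close to the MIC, with MBC ≤ 4 × MIC a commonly used laboratory cutoff. A minimal sketch, with drug names and concentrations invented for illustration:

def classify(mic, mbc, cutoff=4):
    # Conventional in vitro call: bactericidal when MBC/MIC <= cutoff.
    return "bactericidal" if mbc / mic <= cutoff else "bacteriostatic"

# Hypothetical measured values, in ug/ml, against a single organism
for drug, mic, mbc in [("drug A", 0.5, 1.0), ("drug B", 1.0, 32.0)]:
    print(f"{drug}: MBC/MIC = {mbc / mic:g} -> {classify(mic, mbc)}")

As the paragraph above notes, this classification is condition-dependent: the same agent can fall within the cutoff for one organism or concentration range and outside it for another.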
Material surfaces can exhibit bactericidal properties because of their crystallographic surface structure.
In the mid-2000s, it was shown that metallic nanoparticles can kill bacteria. The effect of a silver nanoparticle, for example, depends on its size, with a preferred diameter of about 1–10 nm for interacting with bacteria.
In 2013, cicada wings were found to have a selective bactericidal effect against gram-negative bacteria based on their physical surface structure. Mechanical deformation by the more or less rigid nanopillars found on the wing ruptures bacteria that settle on the surface, killing them within minutes; this is called a mechano-bactericidal effect.
In 2020, researchers combined cationic polymer adsorption and femtosecond laser surface structuring to generate a bactericidal effect against both gram-positive Staphylococcus aureus and gram-negative Escherichia coli bacteria on borosilicate glass surfaces, providing a practical platform for the study of the bacteria-surface interaction. | [
] | A bactericide or bacteriocide, sometimes abbreviated Bcidal, is a substance which kills bacteria. Bactericides are disinfectants, antiseptics, or antibiotics.
However, material surfaces can also have bactericidal properties based solely on their physical surface structure, as for example biomaterials like insect wings. | 2002-02-25T15:43:11Z | 2023-10-26T00:40:03Z | [
"Template:Authority control",
"Template:Short description",
"Template:Reflist",
"Template:Cite journal",
"Template:Wiktionary",
"Template:Pharmacology"
] | https://en.wikipedia.org/wiki/Bactericide |
4,188 | Brion Gysin | Brion Gysin (19 January 1916 – 13 July 1986) was a British-Canadian painter, writer, sound poet, performance artist and inventor of experimental devices.
He is best known for his use of the cut-up technique, alongside his close friend, the novelist William S. Burroughs. With the engineer Ian Sommerville he also invented the Dreamachine, a flicker device designed as an art object to be viewed with the eyes closed. It was in painting and drawing, however, that Gysin devoted his greatest efforts, creating calligraphic works inspired by cursive Japanese "grass" script and Arabic script. Burroughs later stated that "Brion Gysin was the only man I ever respected."
John Clifford Brian Gysin was born at the Canadian military hospital in Taplow, Buckinghamshire, England. His mother, Stella Margaret Martin, was a Canadian from Deseronto, Ontario. His father, Leonard Gysin, a captain with the Canadian Expeditionary Force, was killed in action eight months after his son's birth. Stella returned to Canada and settled in Edmonton, Alberta where her son became "the only Catholic day-boy at an Anglican boarding school". Graduating at fifteen, Gysin was sent to Downside School in Stratton-on-the-Fosse, near Bath, Somerset in England, a prestigious college run by the Benedictines and known as "the Eton of Catholic public schools". Despite, or because of, attending a Catholic school, Gysin became an atheist.
In 1934, he moved to Paris to study La Civilisation Française, an open course given at the Sorbonne where he made literary and artistic contacts through Marie Berthe Aurenche, Max Ernst's second wife. He joined the Surrealist Group and began associating with Valentine Hugo, Leonor Fini, Salvador Dalí, Picasso and Dora Maar. A year later, he had his first exhibition at the Galérie Quatre Chemins in Paris with Ernst, Picasso, Hans Arp, Hans Bellmer, Victor Brauner, Giorgio de Chirico, Dalí, Marcel Duchamp, René Magritte, Man Ray and Yves Tanguy. On the day of the preview, however, he was expelled from the Surrealist Group by André Breton, who ordered the poet Paul Éluard to take down his pictures. Gysin was 19 years old. His biographer, John Geiger, suggests the arbitrary expulsion "had the effect of a curse. Years later, he blamed other failures on the Breton incident. It gave rise to conspiracy theories about the powerful interests who seek control of the art world. He gave various explanations for the expulsion, the more elaborate involving 'insubordination' or lèse majesté towards Breton".
After serving in the U.S. Army during World War II, Gysin published a biography of Josiah "Uncle Tom" Henson titled To Master, a Long Goodnight: The History of Slavery in Canada (1946). A gifted draughtsman, he took an 18-month course learning the Japanese language (including calligraphy) that would greatly influence his artwork. In 1949, he was among the first Fulbright Fellows. His goal was to research, at the University of Bordeaux and in the Archivo de Indias in Seville, Spain, the history of slavery, a project that he later abandoned. He moved to Tangier, Morocco, after visiting the city with novelist and composer Paul Bowles in 1950. In 1952/3 he met the travel writer and sexual adventurer Anne Cumming, and they remained friends until his death.
In 1954 in Tangier, Gysin opened a restaurant called The 1001 Nights, with his friend Mohamed Hamri, who was the cook. Gysin hired the Master Musicians of Jajouka from the village of Jajouka to perform alongside entertainment that included acrobats, a dancing boy and fire eaters. The musicians performed there for an international clientele that included William S. Burroughs. Gysin lost the business in 1958, and the restaurant closed permanently. That same year, Gysin returned to Paris, taking lodgings in a flophouse located at 9 rue Gît-le-Cœur that would become famous as the Beat Hotel. Working on a drawing, he discovered a Dada technique by accident:
William Burroughs and I first went into techniques of writing, together, back in room No. 15 of the Beat Hotel during the cold Paris spring of 1958... Burroughs was more intent on Scotch-taping his photos together into one great continuum on the wall, where scenes faded and slipped into one another, than occupied with editing the monster manuscript... Naked Lunch appeared and Burroughs disappeared. He kicked his habit with Apomorphine and flew off to London to see Dr Dent, who had first turned him on to the cure. While cutting a mount for a drawing in room No. 15, I sliced through a pile of newspapers with my Stanley blade and thought of what I had said to Burroughs some six months earlier about the necessity for turning painters' techniques directly into writing. I picked up the raw words and began to piece together texts that later appeared as "First Cut-Ups" in Minutes to Go (Two Cities, Paris 1960).
When Burroughs returned from London in September 1959, Gysin shared with his friend not only his discovery but also the new techniques he had developed for it. Burroughs then put the techniques to use while completing Naked Lunch, and the experiment dramatically changed the landscape of American literature. Gysin helped Burroughs with the editing of several of his novels, including Interzone, and wrote a script for a film version of Naked Lunch, which was never produced. The pair collaborated on a large manuscript for Grove Press titled The Third Mind, but it was determined that it would be impractical to publish it as originally envisioned. The book later published under that title incorporates little of this material. Interviewed for The Guardian in 1997, Burroughs explained that Gysin was "the only man that I've ever respected in my life. I've admired people, I've liked them, but he's the only man I've ever respected." In 1969, Gysin completed his finest novel, The Process, a work judged by critic Robert Palmer as "a classic of 20th century modernism".
A consummate innovator, Gysin altered the cut-up technique to produce what he called permutation poems in which a single phrase was repeated several times with the words rearranged in a different order with each reiteration. An example of this is "I don't dig work, man/Man, work I don't dig." Many of these permutations were derived using a random sequence generator in an early computer program written by Ian Sommerville. Commissioned by the BBC in 1960 to produce material for broadcast, Gysin's results included "Pistol Poem", which was created by recording a gun firing at different distances and then splicing the sounds. That year, the piece was subsequently used as a theme for the Paris performance of Le Domaine Poetique, a showcase for experimental works by people like Gysin, François Dufrêne, Bernard Heidsieck, and Henri Chopin.
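The word-permutation scheme described above is easy to illustrate in code. The following is a minimal sketch of the idea only, not a reconstruction of Sommerville's program; the seeded shuffle merely stands in for his random sequence generator.

```python
import itertools
import random

def permutation_poem(phrase: str, lines: int = 5, seed: int = 0) -> list[str]:
    """Return a few rearrangements of the words of a single phrase,
    in the spirit of Gysin's permutation poems (illustrative only)."""
    words = phrase.split()
    perms = list(itertools.permutations(words))  # every possible word ordering
    random.Random(seed).shuffle(perms)           # pick orderings "at random"
    return [" ".join(p) for p in perms[:lines]]

for line in permutation_poem("I don't dig work, man"):
    print(line)
```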
With Sommerville, he built the Dreamachine in 1961. Described as "the first art object to be seen with the eyes closed", the flicker device uses alpha waves in the 8–16 Hz range to produce a change of consciousness in receptive viewers.
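In the usual description of the device, the flicker comes from slits in a rotating cylinder passing in front of a central light source, so the flash rate is simply the slit count times the rotation rate. A small sketch of that arithmetic, assuming the often-described setup of a cylinder spinning on a 78 rpm record turntable (the slit counts below are illustrative):

```python
def flicker_hz(slits_per_revolution: int, rpm: float) -> float:
    """Flash frequency of a slotted cylinder on a turntable: each slit
    passing the lamp produces one flash, so Hz = slits * revolutions/s."""
    return slits_per_revolution * rpm / 60.0

# Slit counts that land the flicker in the 8-16 Hz alpha range at 78 rpm:
for slits in (7, 9, 12):
    print(f"{slits} slits at 78 rpm -> {flicker_hz(slits, 78):.1f} Hz")
```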
In April 1974, while at a social engagement, Gysin experienced very noticeable rectal bleeding. In May he wrote to Burroughs complaining that he was not feeling well. A short time later he was diagnosed with colon cancer and began to receive cobalt treatment. Between December 1974 and April 1975, Gysin had to undergo several surgeries, among them a very traumatic colostomy, which drove him to extreme depression and a suicide attempt. Later, in Fire: Words by Day – Images by Night (1975), a crudely lucid text, he described the horrendous ordeal he went through.
In 1985, Gysin was made an American Commander of the French Ordre des Arts et des Lettres. He had begun to work extensively with noted jazz soprano saxophonist Steve Lacy. They recorded an album in 1986 with French musician Ramuntcho Matta, featuring Gysin singing/rapping his own texts, with performances by Lacy, Don Cherry, Elli Medeiros, Lizzy Mercier Descloux and others. The album was reissued on CD in 1993 by Crammed Discs, under the title Self-Portrait Jumping.
On 13 July 1986 Brion Gysin died of lung cancer. Anne Cumming arranged his funeral and for his ashes to be scattered at the Caves of Hercules in Morocco. An obituary by Robert Palmer published in The New York Times described him as a man who "threw off the sort of ideas that ordinary artists would parlay into a lifetime career, great clumps of ideas, as casually as a locomotive throws off sparks". Later that year a heavily edited version of his novel, The Last Museum, was published posthumously by Faber & Faber (London) and by Grove Press (New York).
As a joke, Gysin had contributed a recipe for marijuana fudge to a cookbook by Alice B. Toklas; it was included for publication, becoming famous under the name Alice B. Toklas brownies.
In a 1966 interview by Conrad Knickerbocker for The Paris Review, William S. Burroughs explained that Brion Gysin was, to his knowledge, "the first to create cut-ups":
A friend, Brion Gysin, an American poet and painter, who has lived in Europe for thirty years, was, as far as I know, the first to create cut-ups. His cut-up poem, Minutes to Go, was broadcast by the BBC and later published in a pamphlet. I was in Paris in the summer of 1960; this was after the publication there of Naked Lunch. I became interested in the possibilities of this technique, and I began experimenting myself. Of course, when you think of it, The Waste Land was the first great cut-up collage, and Tristan Tzara had done a bit along the same lines. Dos Passos used the same idea in 'The Camera Eye' sequences in USA. I felt I had been working toward the same goal; thus it was a major revelation to me when I actually saw it being done.
According to José Férez Kuri, author of Brion Gysin: Tuning in to the Multimedia Age (2003) and co-curator of a major retrospective of the artist's work at The Edmonton Art Gallery in 1998, Gysin's wide range of "radical ideas would become a source of inspiration for artists of the Beat Generation, as well as for their successors (among them David Bowie, Mick Jagger, Keith Haring, and Laurie Anderson)". Other artists influenced by Gysin include Genesis P-Orridge, John Zorn (as reflected on his 2013 album Dreamachines) and Brian Jones.
Gysin is the subject of John Geiger's biography, Nothing Is True Everything Is Permitted: The Life of Brion Gysin, and features in Chapel of Extreme Experience: A Short History of Stroboscopic Light and the Dream Machine, also by Geiger. Man From Nowhere: Storming the Citadels of Enlightenment with William Burroughs and Brion Gysin, a biographical study of Burroughs and Gysin with a collection of homages to Gysin, was authored by Joe Ambrose, Frank Rynne, and Terry Wilson with contributions by Marianne Faithfull, John Cale, William S. Burroughs, John Giorno, Stanley Booth, Bill Laswell, Mohamed Hamri, Keith Haring and Paul Bowles. A monograph on Gysin was published in 2003 by Thames and Hudson. | [
{
"paragraph_id": 0,
"text": "Brion Gysin (19 January 1916 – 13 July 1986) was a British-Canadian painter, writer, sound poet, performance artist and inventor of experimental devices.",
"title": ""
},
{
"paragraph_id": 1,
"text": "He is best known for his use of the cut-up technique, alongside his close friend, the novelist William S. Burroughs. With the engineer Ian Sommerville he also invented the Dreamachine, a flicker device designed as an art object to be viewed with the eyes closed. It was in painting and drawing, however, that Gysin devoted his greatest efforts, creating calligraphic works inspired by cursive Japanese \"grass\" script and Arabic script. Burroughs later stated that \"Brion Gysin was the only man I ever respected.\"",
"title": ""
},
{
"paragraph_id": 2,
"text": "John Clifford Brian Gysin was born at the Canadian military hospital in Taplow, Buckinghamshire, England. His mother, Stella Margaret Martin, was a Canadian from Deseronto, Ontario. His father, Leonard Gysin, a captain with the Canadian Expeditionary Force, was killed in action eight months after his son's birth. Stella returned to Canada and settled in Edmonton, Alberta where her son became \"the only Catholic day-boy at an Anglican boarding school\". Graduating at fifteen, Gysin was sent to Downside School in Stratton-on-the-Fosse, near Bath, Somerset in England, a prestigious college run by the Benedictines and known as \"the Eton of Catholic public schools\". Despite, or because of, attending a Catholic school, Gysin became an atheist.",
"title": "Biography"
},
{
"paragraph_id": 3,
"text": "In 1934, he moved to Paris to study La Civilisation Française, an open course given at the Sorbonne where he made literary and artistic contacts through Marie Berthe Aurenche, Max Ernst's second wife. He joined the Surrealist Group and began associating with Valentine Hugo, Leonor Fini, Salvador Dalí, Picasso and Dora Maar. A year later, he had his first exhibition at the Galérie Quatre Chemins in Paris with Ernst, Picasso, Hans Arp, Hans Bellmer, Victor Brauner, Giorgio de Chirico, Dalí, Marcel Duchamp, René Magritte, Man Ray and Yves Tanguy. On the day of the preview, however, he was expelled from the Surrealist Group by André Breton, who ordered the poet Paul Éluard to take down his pictures. Gysin was 19 years old. His biographer, John Geiger, suggests the arbitrary expulsion \"had the effect of a curse. Years later, he blamed other failures on the Breton incident. It gave rise to conspiracy theories about the powerful interests who seek control of the art world. He gave various explanations for the expulsion, the more elaborate involving 'insubordination' or lèse majesté towards Breton\".",
"title": "Biography"
},
{
"paragraph_id": 4,
"text": "After serving in the U.S. army during World War II, Gysin published a biography of Josiah \"Uncle Tom\" Henson titled, To Master, a Long Goodnight: The History of Slavery in Canada (1946). A gifted draughtsman, he took an 18-month course learning the Japanese language (including calligraphy) that would greatly influence his artwork. In 1949, he was among the first Fulbright Fellows. His goal was to research, at the University of Bordeaux and in the Archivo de Indias in Seville, Spain, the history of slavery, a project that he later abandoned. He moved to Tangier, Morocco, after visiting the city with novelist and composer Paul Bowles in 1950. In 1952/3 he met the travel writer and sexual adventurer Anne Cumming and they remained friends until his death.",
"title": "Biography"
},
{
"paragraph_id": 5,
"text": "In 1954 in Tangier, Gysin opened a restaurant called The 1001 Nights, with his friend Mohamed Hamri, who was the cook. Gysin hired the Master Musicians of Jajouka from the village of Jajouka to perform alongside entertainment that included acrobats, a dancing boy and fire eaters. The musicians performed there for an international clientele that included William S. Burroughs. Gysin lost the business in 1958, and the restaurant closed permanently. That same year, Gysin returned to Paris, taking lodgings in a flophouse located at 9 rue Gît-le-Cœur that would become famous as the Beat Hotel. Working on a drawing, he discovered a Dada technique by accident:",
"title": "Biography"
},
{
"paragraph_id": 6,
"text": "William Burroughs and I first went into techniques of writing, together, back in room No. 15 of the Beat Hotel during the cold Paris spring of 1958... Burroughs was more intent on Scotch-taping his photos together into one great continuum on the wall, where scenes faded and slipped into one another, than occupied with editing the monster manuscript... Naked Lunch appeared and Burroughs disappeared. He kicked his habit with Apomorphine and flew off to London to see Dr Dent, who had first turned him on to the cure. While cutting a mount for a drawing in room No. 15, I sliced through a pile of newspapers with my Stanley blade and thought of what I had said to Burroughs some six months earlier about the necessity for turning painters' techniques directly into writing. I picked up the raw words and began to piece together texts that later appeared as \"First Cut-Ups\" in Minutes to Go (Two Cities, Paris 1960).",
"title": "Biography"
},
{
"paragraph_id": 7,
"text": "When Burroughs returned from London in September 1959, Gysin not only shared his discovery with his friend but the new techniques he had developed for it. Burroughs then put the techniques to use while completing Naked Lunch and the experiment dramatically changed the landscape of American literature. Gysin helped Burroughs with the editing of several of his novels including Interzone, and wrote a script for a film version of Naked Lunch, which was never produced. The pair collaborated on a large manuscript for Grove Press titled The Third Mind but it was determined that it would be impractical to publish it as originally envisioned. The book later published under that title incorporates little of this material. Interviewed for The Guardian in 1997, Burroughs explained that Gysin was \"the only man that I've ever respected in my life. I've admired people, I've liked them, but he's the only man I've ever respected.\" In 1969, Gysin completed his finest novel, The Process, a work judged by critic Robert Palmer as \"a classic of 20th century modernism\".",
"title": "Biography"
},
{
"paragraph_id": 8,
"text": "A consummate innovator, Gysin altered the cut-up technique to produce what he called permutation poems in which a single phrase was repeated several times with the words rearranged in a different order with each reiteration. An example of this is \"I don't dig work, man/Man, work I don't dig.\" Many of these permutations were derived using a random sequence generator in an early computer program written by Ian Sommerville. Commissioned by the BBC in 1960 to produce material for broadcast, Gysin's results included \"Pistol Poem\", which was created by recording a gun firing at different distances and then splicing the sounds. That year, the piece was subsequently used as a theme for the Paris performance of Le Domaine Poetique, a showcase for experimental works by people like Gysin, François Dufrêne, Bernard Heidsieck, and Henri Chopin.",
"title": "Biography"
},
{
"paragraph_id": 9,
"text": "With Sommerville, he built the Dreamachine in 1961. Described as \"the first art object to be seen with the eyes closed\", the flicker device uses alpha waves in the 8–16 Hz range to produce a change of consciousness in receptive viewers.",
"title": "Biography"
},
{
"paragraph_id": 10,
"text": "In April 1974, while sitting at a social engagement, Gysin had a very noticeable rectal bleeding. In May he wrote to Burroughs complaining he was not feeling well. A short time later he was diagnosed with colon cancer and began to receive cobalt treatment. Between December 1974 and April 1975, Gysin had to undergo several surgeries, among them a very traumatic colostomy, that drove him to extreme depression and to a suicide attempt. Later, in Fire: Words by Day – Images by Night (1975), a crudely lucid text, he would describe the horrendous ordeal he went through.",
"title": "Biography"
},
{
"paragraph_id": 11,
"text": "In 1985 Gysin was made an American Commander of the French Ordre des Arts et des Lettres. He'd begun to work extensively with noted jazz soprano saxophonist Steve Lacy. They recorded an album in 1986 with French musician Ramuntcho Matta, featuring Gysin singing/rapping his own texts, with performances by Lacy, Don Cherry, Elli Medeiros, Lizzy Mercier Descloux and more. The album was reissued on CD in 1993 by Crammed Discs, under the title Self-Portrait Jumping.",
"title": "Biography"
},
{
"paragraph_id": 12,
"text": "On 13 July 1986 Brion Gysin died of lung cancer. Anne Cumming arranged his funeral and for his ashes to be scattered at the Caves of Hercules in Morocco. An obituary by Robert Palmer published in The New York Times described him as a man who \"threw off the sort of ideas that ordinary artists would parlay into a lifetime career, great clumps of ideas, as casually as a locomotive throws off sparks\". Later that year a heavily edited version of his novel, The Last Museum, was published posthumously by Faber & Faber (London) and by Grove Press (New York).",
"title": "Death"
},
{
"paragraph_id": 13,
"text": "As a joke, Gysin had contributed a recipe for marijuana fudge to a cookbook by Alice B. Toklas; it was included for publication, becoming famous under the name Alice B. Toklas brownies.",
"title": "Death"
},
{
"paragraph_id": 14,
"text": "In a 1966 interview by Conrad Knickerbocker for The Paris Review, William S. Burroughs explained that Brion Gysin was, to his knowledge, \"the first to create cut-ups\":",
"title": "Burroughs on the Gysin cut-up"
},
{
"paragraph_id": 15,
"text": "A friend, Brion Gysin, an American poet and painter, who has lived in Europe for thirty years, was, as far as I know, the first to create cut-ups. His cut-up poem, Minutes to Go, was broadcast by the BBC and later published in a pamphlet. I was in Paris in the summer of 1960; this was after the publication there of Naked Lunch. I became interested in the possibilities of this technique, and I began experimenting myself. Of course, when you think of it, The Waste Land was the first great cut-up collage, and Tristan Tzara had done a bit along the same lines. Dos Passos used the same idea in 'The Camera Eye' sequences in USA. I felt I had been working toward the same goal; thus it was a major revelation to me when I actually saw it being done.",
"title": "Burroughs on the Gysin cut-up"
},
{
"paragraph_id": 16,
"text": "According to José Férez Kuri, author of Brion Gysin: Tuning in to the Multimedia Age (2003) and co-curator of a major retrospective of the artist's work at The Edmonton Art Gallery in 1998, Gysin's wide range of \"radical ideas would become a source of inspiration for artists of the Beat Generation, as well as for their successors (among them David Bowie, Mick Jagger, Keith Haring, and Laurie Anderson)\". Other artists include Genesis P-Orridge, John Zorn (as displayed on the 2013's Dreamachines album) and Brian Jones.",
"title": "Influence"
},
{
"paragraph_id": 17,
"text": "Gysin is the subject of John Geiger's biography, Nothing Is True Everything Is Permitted: The Life of Brion Gysin, and features in Chapel of Extreme Experience: A Short History of Stroboscopic Light and the Dream Machine, also by Geiger. Man From Nowhere: Storming the Citadels of Enlightenment with William Burroughs and Brion Gysin, a biographical study of Burroughs and Gysin with a collection of homages to Gysin, was authored by Joe Ambrose, Frank Rynne, and Terry Wilson with contributions by Marianne Faithfull, John Cale, William S. Burroughs, John Giorno, Stanley Booth, Bill Laswell, Mohamed Hamri, Keith Haring and Paul Bowles. A monograph on Gysin was published in 2003 by Thames and Hudson.",
"title": "Selected bibliography"
}
] | Brion Gysin was a British-Canadian painter, writer, sound poet, performance artist and inventor of experimental devices. He is best known for his use of the cut-up technique, alongside his close friend, the novelist William S. Burroughs. With the engineer Ian Sommerville he also invented the Dreamachine, a flicker device designed as an art object to be viewed with the eyes closed. It was in painting and drawing, however, that Gysin devoted his greatest efforts, creating calligraphic works inspired by cursive Japanese "grass" script and Arabic script. Burroughs later stated that "Brion Gysin was the only man I ever respected." | 2001-09-14T21:09:24Z | 2023-12-28T20:51:30Z | [
"Template:Wikiquote",
"Template:Use dmy dates",
"Template:Multiple image",
"Template:Col-end",
"Template:Col-2",
"Template:Reflist",
"Template:Cite news",
"Template:Original research",
"Template:Use British English",
"Template:Rp",
"Template:Short description",
"Template:ISBN",
"Template:Authority control",
"Template:Infobox writer",
"Template:Col-begin",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Brion_Gysin |
4,190 | Bulgarian | Bulgarian may refer to: | [
{
"paragraph_id": 0,
"text": "Bulgarian may refer to:",
"title": ""
}
] | Bulgarian may refer to:
Something of, from, or related to the country of Bulgaria
Bulgarians, a South Slavic ethnic group
Bulgarian language, a Slavic language
Bulgarian alphabet
A citizen of Bulgaria, see Demographics of Bulgaria
Bulgarian culture
Bulgarian cuisine, a representative of the cuisine of Southeastern Europe | 2019-10-18T13:51:02Z | [
"Template:Wiktionary",
"Template:Look from",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Bulgarian |
|
4,191 | BCG vaccine | Bacillus Calmette–Guérin (BCG) vaccine is a vaccine primarily used against tuberculosis (TB). It is named after its inventors Albert Calmette and Camille Guérin. In countries where tuberculosis or leprosy is common, one dose is recommended in healthy babies as soon after birth as possible. In areas where tuberculosis is not common, only children at high risk are typically immunized, while suspected cases of tuberculosis are individually tested for and treated. Adults who do not have tuberculosis and have not been previously immunized, but are frequently exposed, may be immunized, as well. BCG also has some effectiveness against Buruli ulcer infection and other nontuberculous mycobacterial infections. Additionally, it is sometimes used as part of the treatment of bladder cancer.
Rates of protection against tuberculosis infection vary widely and protection lasts up to 20 years. Among children, it prevents about 20% from getting infected and among those who do get infected, it protects half from developing disease. The vaccine is given by injection into the skin. No evidence shows that additional doses are beneficial.
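For illustration, the two figures above can be combined into a single estimate of overall protection against disease, assuming (a simplification not stated in the source) that the two protective effects act independently.

```python
def combined_efficacy(ve_infection: float, ve_progression: float) -> float:
    """Combine protection against infection with protection against
    progression among the infected, assuming the effects are independent."""
    return 1.0 - (1.0 - ve_infection) * (1.0 - ve_progression)

# ~20% protection against infection, ~50% against disease once infected:
print(combined_efficacy(0.20, 0.50))  # -> 0.6, i.e. roughly 60% overall
```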
Serious side effects are rare. Often, redness, swelling, and mild pain occur at the site of injection. A small ulcer may also form with some scarring after healing. Side effects are more common and potentially more severe in those with immunosuppression. Although no harmful effects on the fetus have been observed, there is insufficient evidence about the safety of BCG vaccination during pregnancy, and therefore the vaccine is not recommended for use during pregnancy. The vaccine was originally developed from Mycobacterium bovis, which is commonly found in cattle. While it has been weakened, it is still live.
The BCG vaccine was first used medically in 1921. It is on the World Health Organization's List of Essential Medicines. As of 2004, the vaccine is given to about 100 million children per year globally.
The main use of BCG is for vaccination against tuberculosis. BCG vaccine can be administered after birth intradermally. BCG vaccination can cause a false positive Mantoux test.
The most controversial aspect of BCG is the variable efficacy found in different clinical trials, which appears to depend on geography. Trials conducted in the UK have consistently shown a protective effect of 60 to 80%, but those conducted elsewhere have shown no protective effect, and efficacy appears to fall the closer one gets to the equator.
A 1994 systematic review found that BCG reduces the risk of getting tuberculosis by about 50%. Differences in effectiveness depend on region, due to factors such as genetic differences in the populations, changes in environment, exposure to other bacterial infections, and conditions in the laboratory where the vaccine is grown, including genetic differences between the strains being cultured and the choice of growth medium.
A systematic review and meta-analysis conducted in 2014 demonstrated that the BCG vaccine reduced infections by 19–27% and reduced progression to active tuberculosis by 71%. The studies included in this review were limited to those that used interferon gamma release assay.
The duration of protection of BCG is not clearly known. In those studies showing a protective effect, the data are inconsistent. The MRC study showed protection waned to 59% after 15 years and to zero after 20 years; however, a study looking at Native Americans immunized in the 1930s found evidence of protection even 60 years after immunization, with only a slight waning in efficacy.
BCG seems to have its greatest effect in preventing miliary tuberculosis or tuberculosis meningitis, so it is still extensively used even in countries where efficacy against pulmonary tuberculosis is negligible.
The 100th anniversary of BCG was in 2021. It remains the only vaccine licensed against tuberculosis, which is an ongoing pandemic. Tuberculosis elimination is a goal of the World Health Organization (WHO), although the development of new vaccines with greater efficacy against adult pulmonary tuberculosis may be needed to make substantial progress.
A number of possible reasons for the variable efficacy of BCG in different countries have been proposed. None has been proven, some have been disproved, and none can explain the lack of efficacy in both low tuberculosis-burden countries (US) and high tuberculosis-burden countries (India). The reasons for variable efficacy have been discussed at length in a WHO document on BCG.
BCG has protective effects against some nontuberculous mycobacteria.
BCG has been one of the most successful immunotherapies. BCG vaccine has been the "standard of care for patients with bladder cancer (NMIBC)" since 1977. By 2014, more than eight different agents or strains considered biosimilar were used for the treatment of nonmuscle-invasive bladder cancer.
A tuberculin skin test is usually carried out before administering BCG. A reactive tuberculin skin test is a contraindication to BCG due to the risk of severe local inflammation and scarring; it does not indicate any immunity. BCG is also contraindicated in certain people who have IL-12 receptor pathway defects.
BCG is given as a single intradermal injection at the insertion of the deltoid. If BCG is accidentally given subcutaneously, a local abscess may form (a "BCG-oma") that can sometimes ulcerate and may require immediate treatment with antibiotics; without treatment, the infection could spread and cause severe damage to vital organs. An abscess is not always associated with incorrect administration, and it is one of the more common complications that can occur with the vaccination. Numerous medical studies on treatment of these abscesses with antibiotics have been done with varying results, but the consensus is that once pus is aspirated and analysed, provided no unusual bacilli are present, the abscess will generally heal on its own in a matter of weeks.
The characteristic raised scar that BCG immunization leaves is often used as proof of prior immunization. This scar must be distinguished from that of smallpox vaccination, which it may resemble.
When given for bladder cancer, the vaccine is not injected through the skin, but is instilled into the bladder through the urethra using a soft catheter.
BCG immunization generally causes some pain and scarring at the site of injection. The main adverse effects are keloids—large, raised scars. The insertion of the deltoid muscle is most frequently used because the local complication rate is smallest when that site is used. Nonetheless, the buttock is an alternative site of administration because it provides better cosmetic outcomes.
BCG vaccine should be given intradermally. If given subcutaneously, it may induce local infection and spread to the regional lymph nodes, causing either suppurative (pus-producing) or nonsuppurative lymphadenitis. Conservative management is usually adequate for nonsuppurative lymphadenitis. If suppuration occurs, it may need needle aspiration. For nonresolving suppuration, surgical excision may be required. Evidence for the treatment of these complications is scarce.
Uncommonly, breast and gluteal abscesses can occur due to haematogenous (carried by the blood) and lymphangiomatous spread. Regional bone infection (BCG osteomyelitis or osteitis) and disseminated BCG infection are rare complications of BCG vaccination, but potentially life-threatening. Systemic antituberculous therapy may be helpful in severe complications.
When BCG is used for bladder cancer, around 2.9% of treated patients discontinue immunotherapy due to a genitourinary or systemic BCG-related infection; however, while symptomatic bladder BCG infection is frequent, involvement of other organs is very uncommon. When systemic involvement occurs, the liver and lungs are the first organs to be affected (1 week [median] after the last BCG instillation).
If BCG is accidentally given to an immunocompromised patient (e.g., an infant with severe combined immune deficiency), it can cause disseminated or life-threatening infection. The documented incidence of this happening is less than one per million immunizations given. In 2007, the WHO stopped recommending BCG for infants with HIV, even if the risk of exposure to tuberculosis is high, because of the risk of disseminated BCG infection (which is roughly 400 per 100,000 in that higher risk context).
The age of the person and the frequency with which BCG is given has always varied from country to country. The WHO currently recommends childhood BCG for all countries with a high incidence of tuberculosis and/or high leprosy burden. This is a partial list of historic and current BCG practice around the globe. A complete atlas of past and present practice has been generated.
BCG is prepared from a strain of the attenuated (virulence-reduced) live bovine tuberculosis bacillus, Mycobacterium bovis, that has lost its ability to cause disease in humans. It is specially subcultured in a culture medium, usually Middlebrook 7H9. Because the living bacilli evolve to make the best use of available nutrients, they become less well-adapted to human blood and can no longer induce disease when introduced into a human host. Still, they are similar enough to their wild ancestors to provide some degree of immunity against human tuberculosis. The BCG vaccine can be anywhere from 0 to 80% effective in preventing tuberculosis for a duration of 15 years; however, its protective effect appears to vary according to geography and the lab in which the vaccine strain was grown.
A number of different companies make BCG, sometimes using different genetic strains of the bacterium. This may result in different product characteristics. OncoTICE, used for bladder instillation for bladder cancer, was developed by Organon Laboratories (since acquired by Schering-Plough, and in turn acquired by Merck & Co.). A similar application is the product Onko BCG of the Polish company Biomed-Lublin, which owns the Brazilian substrain M. bovis BCG Moreau, which is less reactogenic than vaccines including other BCG strains. Pacis BCG, made from the Montréal (Institut Armand-Frappier) strain, was first marketed by Urocor in about 2002. Urocor was since acquired by Dianon Systems. Evans Vaccines (a subsidiary of PowderJect Pharmaceuticals) has also been a manufacturer of BCG vaccine. Statens Serum Institut in Denmark markets BCG vaccine prepared using Danish strain 1331. Japan BCG Laboratory markets its vaccine, based on the Tokyo 172 substrain of Pasteur BCG, in 50 countries worldwide.
According to a UNICEF report published in December 2015 on BCG vaccine supply security, global demand increased in 2015 from 123 to 152.2 million doses. To "improve security and to [diversify] sources of affordable and flexible supply", UNICEF awarded seven new manufacturers contracts to produce BCG. Along with supply availability from existing manufacturers and a "new WHO prequalified vaccine", the total supply will be "sufficient to meet both suppressed 2015 demand carried over to 2016, as well as total forecast demand through 2016–2018."
In 2011, the Sanofi Pasteur plant flooded, causing problems with mold. The facility, located in Toronto, Ontario, Canada, produced BCG vaccine products made with substrain Connaught, such as a tuberculosis vaccine and ImmuCYST, a BCG immunotherapeutic and bladder cancer drug. By April 2012 the FDA had found dozens of documented problems with sterility at the plant, including mold, nesting birds and rusted electrical conduits. The resulting closure of the plant for over two years caused shortages of bladder cancer and tuberculosis vaccines. On 29 October 2014, Health Canada gave permission for Sanofi to resume production of BCG. A 2018 analysis of the global supply concluded that supplies are adequate to meet forecast BCG vaccine demand, but that risks of shortages remain, mainly due to the dependence of 75 percent of WHO pre-qualified supply on just two suppliers.
Some BCG vaccines are freeze-dried into a fine powder. Sometimes the powder is vacuum-sealed in a glass ampoule. Such an ampoule has to be opened slowly to prevent the inflow of air from blowing out the powder. The powder then has to be diluted with saline before injection.
The history of BCG is tied to that of smallpox. By 1865 Jean Antoine Villemin had demonstrated that rabbits could be infected with tuberculosis from humans; by 1868 he had found that rabbits could be infected with tuberculosis from cows, and that rabbits could be infected with tuberculosis from other rabbits. Thus, he concluded that tuberculosis was transmitted via some unidentified microorganism (or "virus", as he called it). In 1882 Robert Koch regarded human and bovine tuberculosis as identical. But in 1895, Theobald Smith presented differences between human and bovine tuberculosis, which he reported to Koch. By 1901 Koch distinguished Mycobacterium bovis from Mycobacterium tuberculosis. Following the success of vaccination in preventing smallpox, established during the 18th century, scientists thought to find a corollary in tuberculosis by drawing a parallel between bovine tuberculosis and cowpox: it was hypothesized that infection with bovine tuberculosis might protect against infection with human tuberculosis. In the late 19th century, clinical trials using M. bovis were conducted in Italy with disastrous results, because M. bovis was found to be just as virulent as M. tuberculosis.
Albert Calmette, a French physician and bacteriologist, and his assistant and later colleague, Camille Guérin, a veterinarian, were working at the Institut Pasteur de Lille (Lille, France) in 1908. Their work included subculturing virulent strains of the tuberculosis bacillus and testing different culture media. They noted that a glycerin-bile-potato mixture grew bacilli that seemed less virulent, and changed the course of their research to see if repeated subculturing would produce a strain that was attenuated enough to be considered for use as a vaccine. The BCG strain was isolated after being subcultured 239 times over 13 years from the virulent strain on glycerine potato medium. The research continued throughout World War I until 1919, when the now avirulent bacilli were unable to cause tuberculosis disease in research animals. Calmette and Guérin transferred to the Paris Pasteur Institute in 1919. The BCG vaccine was first used in humans in 1921.
Public acceptance was slow, and the Lübeck disaster, in particular, did much to harm it. Between 1929 and 1933 in Lübeck, 251 infants were vaccinated in the first 10 days of life; 173 developed tuberculosis and 72 died. It was subsequently discovered that the BCG administered there had been contaminated with a virulent strain that was being stored in the same incubator, which led to legal action against the manufacturers of the vaccine.
Dr. R. G. Ferguson, working at the Fort Qu'Appelle Sanatorium in Saskatchewan, was among the pioneers in developing the practice of vaccination against tuberculosis. In Canada, more than 600 children from residential schools were used as involuntary participants in BCG vaccine trials between 1933 and 1945. In 1928, BCG was adopted by the Health Committee of the League of Nations (predecessor to the World Health Organization (WHO)). Because of opposition, however, it only became widely used after World War II. From 1945 to 1948, relief organizations (International Tuberculosis Campaign or Joint Enterprises) vaccinated over eight million babies in eastern Europe and prevented the predicted typical increase of tuberculosis after a major war.
BCG is very efficacious against tuberculous meningitis in the pediatric age group, but its efficacy against pulmonary tuberculosis appears to be variable. Some countries have removed BCG from routine vaccination. Two countries that have never used it routinely are the United States and the Netherlands (in both countries, it is felt that having a reliable Mantoux test and therefore being able to accurately detect active disease is more beneficial to society than vaccinating against a condition that is now relatively rare there).
Other names include "Vaccin Bilié de Calmette et Guérin vaccine" and "Bacille de Calmette et Guérin vaccine".
Tentative evidence exists for a beneficial non-specific effect of BCG vaccination on overall mortality in low-income countries, and for reductions in other health problems, including sepsis and respiratory infections, when it is given early, with greater benefit the earlier it is used.
In rhesus macaques, BCG shows improved rates of protection when given intravenously. Some risks must be evaluated before it can be translated to humans.
As of 2017, BCG vaccine is in the early stages of being studied in type 1 diabetes (T1D).
Use of the BCG vaccine may provide protection against COVID-19. However, epidemiologic observations in this respect are ambiguous. The WHO does not recommend its use for prevention as of 12 January 2021.
As of January 2021, twenty BCG trials are in various clinical stages. As of October 2022, the results are extremely mixed. A 15-month trial involving people thrice-vaccinated over the two years before the pandemic shows positive results in preventing infection in BCG-naive people with type 1 diabetes. On the other hand, a 5-month trial shows that re-vaccinating with BCG does not help prevent infection in healthcare workers. Both of these trials were double-blind randomized controlled trials. | [
{
"paragraph_id": 0,
"text": "Bacillus Calmette–Guérin (BCG) vaccine is a vaccine primarily used against tuberculosis (TB). It is named after its inventors Albert Calmette and Camille Guérin. In countries where tuberculosis or leprosy is common, one dose is recommended in healthy babies as soon after birth as possible. In areas where tuberculosis is not common, only children at high risk are typically immunized, while suspected cases of tuberculosis are individually tested for and treated. Adults who do not have tuberculosis and have not been previously immunized, but are frequently exposed, may be immunized, as well. BCG also has some effectiveness against Buruli ulcer infection and other nontuberculous mycobacterial infections. Additionally, it is sometimes used as part of the treatment of bladder cancer.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Rates of protection against tuberculosis infection vary widely and protection lasts up to 20 years. Among children, it prevents about 20% from getting infected and among those who do get infected, it protects half from developing disease. The vaccine is given by injection into the skin. No evidence shows that additional doses are beneficial.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Serious side effects are rare. Often, redness, swelling, and mild pain occur at the site of injection. A small ulcer may also form with some scarring after healing. Side effects are more common and potentially more severe in those with immunosuppression. Although no harmful effects on the fetus have been observed, there is insufficient evidence about the safety of BCG vaccination during pregnancy and therefore, vaccine is not recommended for use during pregnancy. The vaccine was originally developed from Mycobacterium bovis, which is commonly found in cattle. While it has been weakened, it is still live.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The BCG vaccine was first used medically in 1921. It is on the World Health Organization's List of Essential Medicines. As of 2004, the vaccine is given to about 100 million children per year globally.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The main use of BCG is for vaccination against tuberculosis. BCG vaccine can be administered after birth intradermally. BCG vaccination can cause a false positive Mantoux test.",
"title": "Medical uses"
},
{
"paragraph_id": 5,
"text": "The most controversial aspect of BCG is the variable efficacy found in different clinical trials, which appears to depend on geography. Trials conducted in the UK have consistently shown a protective effect of 60 to 80%, but those conducted elsewhere have shown no protective effect, and efficacy appears to fall the closer one gets to the equator.",
"title": "Medical uses"
},
{
"paragraph_id": 6,
"text": "A 1994 systematic review found that BCG reduces the risk of getting tuberculosis by about 50%. Differences in effectiveness depend on region, due to factors such as genetic differences in the populations, changes in environment, exposure to other bacterial infections, and conditions in the laboratory where the vaccine is grown, including genetic differences between the strains being cultured and the choice of growth medium.",
"title": "Medical uses"
},
{
"paragraph_id": 7,
"text": "A systematic review and meta-analysis conducted in 2014 demonstrated that the BCG vaccine reduced infections by 19–27% and reduced progression to active tuberculosis by 71%. The studies included in this review were limited to those that used interferon gamma release assay.",
"title": "Medical uses"
},
{
"paragraph_id": 8,
"text": "The duration of protection of BCG is not clearly known. In those studies showing a protective effect, the data are inconsistent. The MRC study showed protection waned to 59% after 15 years and to zero after 20 years; however, a study looking at Native Americans immunized in the 1930s found evidence of protection even 60 years after immunization, with only a slight waning in efficacy.",
"title": "Medical uses"
},
{
"paragraph_id": 9,
"text": "BCG seems to have its greatest effect in preventing miliary tuberculosis or tuberculosis meningitis, so it is still extensively used even in countries where efficacy against pulmonary tuberculosis is negligible.",
"title": "Medical uses"
},
{
"paragraph_id": 10,
"text": "The 100th anniversary of BCG was in 2021. It remains the only vaccine licensed against tuberculosis, which is an ongoing pandemic. Tuberculosis elimination is a goal of the World Health Organization (WHO), although the development of new vaccines with greater efficacy against adult pulmonary tuberculosis may be needed to make substantial progress.",
"title": "Medical uses"
},
{
"paragraph_id": 11,
"text": "A number of possible reasons for the variable efficacy of BCG in different countries have been proposed. None has been proven, some have been disproved, and none can explain the lack of efficacy in both low tuberculosis-burden countries (US) and high tuberculosis-burden countries (India). The reasons for variable efficacy have been discussed at length in a WHO document on BCG.",
"title": "Medical uses"
},
{
"paragraph_id": 12,
"text": "BCG has protective effects against some nontuberculosis mycobacteria.",
"title": "Medical uses"
},
{
"paragraph_id": 13,
"text": "",
"title": "Medical uses"
},
{
"paragraph_id": 14,
"text": "BCG has been one of the most successful immunotherapies. BCG vaccine has been the \"standard of care for patients with bladder cancer (NMIBC)\" since 1977. By 2014 there were more than eight different considered biosimilar agents or strains used for the treatment of nonmuscle-invasive bladder cancer.",
"title": "Medical uses"
},
{
"paragraph_id": 15,
"text": "A tuberculin skin test is usually carried out before administering BCG. A reactive tuberculin skin test is a contraindication to BCG due to the risk of severe local inflammation and scarring; it does not indicate any immunity. BCG is also contraindicated in certain people who have IL-12 receptor pathway defects.",
"title": "Method of administration"
},
{
"paragraph_id": 16,
"text": "BCG is given as a single intradermal injection at the insertion of the deltoid. If BCG is accidentally given subcutaneously, then a local abscess may form (a \"BCG-oma\") that can sometimes ulcerate, and may require treatment with antibiotics immediately, otherwise without treatment it could spread the infection, causing severe damage to vital organs. An abscess is not always associated with incorrect administration, and it is one of the more common complications that can occur with the vaccination. Numerous medical studies on treatment of these abscesses with antibiotics have been done with varying results, but the consensus is once pus is aspirated and analysed, provided no unusual bacilli are present, the abscess will generally heal on its own in a matter of weeks.",
"title": "Method of administration"
},
{
"paragraph_id": 17,
"text": "The characteristic raised scar that BCG immunization leaves is often used as proof of prior immunization. This scar must be distinguished from that of smallpox vaccination, which it may resemble.",
"title": "Method of administration"
},
{
"paragraph_id": 18,
"text": "When given for bladder cancer, the vaccine is not injected through the skin, but is instilled into the bladder through the urethra using a soft catheter.",
"title": "Method of administration"
},
{
"paragraph_id": 19,
"text": "BCG immunization generally causes some pain and scarring at the site of injection. The main adverse effects are keloids—large, raised scars. The insertion to the deltoid muscle is most frequently used because the local complication rate is smallest when that site is used. Nonetheless, the buttock is an alternative site of administration because it provides better cosmetic outcomes.",
"title": "Adverse effects"
},
{
"paragraph_id": 20,
"text": "BCG vaccine should be given intradermally. If given subcutaneously, it may induce local infection and spread to the regional lymph nodes, causing either suppurative (production of pus) and nonsuppurative lymphadenitis. Conservative management is usually adequate for nonsuppurative lymphadenitis. If suppuration occurs, it may need needle aspiration. For nonresolving suppuration, surgical excision may be required. Evidence for the treatment of these complications is scarce.",
"title": "Adverse effects"
},
{
"paragraph_id": 21,
"text": "Uncommonly, breast and gluteal abscesses can occur due to haematogenous (carried by the blood) and lymphangiomatous spread. Regional bone infection (BCG osteomyelitis or osteitis) and disseminated BCG infection are rare complications of BCG vaccination, but potentially life-threatening. Systemic antituberculous therapy may be helpful in severe complications.",
"title": "Adverse effects"
},
{
"paragraph_id": 22,
"text": "When BCG is used for bladder cancer, around 2.9% of treated patients discontinue immunotherapy due to a genitourinary or systemic BCG-related infection, however while symptomatic bladder BCG infection is frequent, the involvement of other organs is very uncommon. When systemic involvement occurs, liver and lungs are the first organs to be affected (1 week [median] after the last BCG instillation).",
"title": "Adverse effects"
},
{
"paragraph_id": 23,
"text": "If BCG is accidentally given to an immunocompromised patient (e.g., an infant with severe combined immune deficiency), it can cause disseminated or life-threatening infection. The documented incidence of this happening is less than one per million immunizations given. In 2007, the WHO stopped recommending BCG for infants with HIV, even if the risk of exposure to tuberculosis is high, because of the risk of disseminated BCG infection (which is roughly 400 per 100,000 in that higher risk context).",
"title": "Adverse effects"
},
{
"paragraph_id": 24,
"text": "The age of the person and the frequency with which BCG is given has always varied from country to country. The WHO currently recommends childhood BCG for all countries with a high incidence of tuberculosis and/or high leprosy burden. This is a partial list of historic and current BCG practice around the globe. A complete atlas of past and present practice has been generated.",
"title": "Usage"
},
{
"paragraph_id": 25,
"text": "BCG is prepared from a strain of the attenuated (virulence-reduced) live bovine tuberculosis bacillus, Mycobacterium bovis, that has lost its ability to cause disease in humans. It is specially subcultured in a culture medium, usually Middlebrook 7H9. Because the living bacilli evolve to make the best use of available nutrients, they become less well-adapted to human blood and can no longer induce disease when introduced into a human host. Still, they are similar enough to their wild ancestors to provide some degree of immunity against human tuberculosis. The BCG vaccine can be anywhere from 0 to 80% effective in preventing tuberculosis for a duration of 15 years; however, its protective effect appears to vary according to geography and the lab in which the vaccine strain was grown.",
"title": "Manufacture"
},
{
"paragraph_id": 26,
"text": "A number of different companies make BCG, sometimes using different genetic strains of the bacterium. This may result in different product characteristics. OncoTICE, used for bladder instillation for bladder cancer, was developed by Organon Laboratories (since acquired by Schering-Plough, and in turn acquired by Merck & Co.). A similar application is the product of Onko BCG of the Polish company Biomed-Lublin, which owns the Brazilian substrain M. bovis BCG Moreau which is less reactogenic than vaccines including other BCG strains. Pacis BCG, made from the Montréal (Institut Armand-Frappier) strain, was first marketed by Urocor in about 2002. Urocor was since acquired by Dianon Systems. Evans Vaccines (a subsidiary of PowderJect Pharmaceuticals). Statens Serum Institut in Denmark markets BCG vaccine prepared using Danish strain 1331. Japan BCG Laboratory markets its vaccine, based on the Tokyo 172 substrain of Pasteur BCG, in 50 countries worldwide.",
"title": "Manufacture"
},
{
"paragraph_id": 27,
"text": "According to a UNICEF report published in December 2015, on BCG vaccine supply security, global demand increased in 2015 from 123 to 152.2 million doses. To improve security and to [diversify] sources of affordable and flexible supply,\" UNICEF awarded seven new manufacturers contracts to produce BCG. Along with supply availability from existing manufacturers, and a \"new WHO prequalified vaccine\" the total supply will be \"sufficient to meet both suppressed 2015 demand carried over to 2016, as well as total forecast demand through 2016–2018.\"",
"title": "Manufacture"
},
{
"paragraph_id": 28,
"text": "In 2011, the Sanofi Pasteur plant flooded, causing problems with mold. The facility, located in Toronto, Ontario, Canada, produced BCG vaccine products made with substrain Connaught such as a tuberculosis vaccine and ImmuCYST, a BCG immunotherapeutic and bladder cancer drug. By April 2012 the FDA had found dozens of documented problems with sterility at the plant including mold, nesting birds and rusted electrical conduits. The resulting closure of the plant for over two years caused shortages of bladder cancer and tuberculosis vaccines. On 29 October 2014 Health Canada gave the permission for Sanofi to resume production of BCG. A 2018 analysis of the global supply concluded that the supplies are adequate to meet forecast BCG vaccine demand, but that risks of shortages remain, mainly due to dependence of 75 percent of WHO pre-qualified supply on just two suppliers.",
"title": "Manufacture"
},
{
"paragraph_id": 29,
"text": "Some BCG vaccines are freeze dried and become fine powder. Sometimes the powder is sealed with vacuum in a glass ampoule. Such a glass ampoule has to be opened slowly to prevent the airflow from blowing out the powder. Then the powder has to be diluted with saline water before injecting.",
"title": "Manufacture"
},
{
"paragraph_id": 30,
"text": "The history of BCG is tied to that of smallpox. By 1865 Jean Antoine Villemin had demonstrated that rabbits could be infected with tuberculosis from humans; by 1868 he had found that rabbits could be infected with tuberculosis from cows, and that rabbits could be infected with tuberculosis from other rabbits. Thus, he concluded that tuberculosis was transmitted via some unidentified microorganism (or \"virus\", as he called it). In 1882 Robert Koch regarded human and bovine tuberculosis as identical. But in 1895, Theobald Smith presented differences between human and bovine tuberculosis, which he reported to Koch. By 1901 Koch distinguished Mycobacterium bovis from Mycobacterium tuberculosis. Following the success of vaccination in preventing smallpox, established during the 18th century, scientists thought to find a corollary in tuberculosis by drawing a parallel between bovine tuberculosis and cowpox: it was hypothesized that infection with bovine tuberculosis might protect against infection with human tuberculosis. In the late 19th century, clinical trials using M. bovis were conducted in Italy with disastrous results, because M. bovis was found to be just as virulent as M. tuberculosis.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Albert Calmette, a French physician and bacteriologist, and his assistant and later colleague, Camille Guérin, a veterinarian, were working at the Institut Pasteur de Lille (Lille, France) in 1908. Their work included subculturing virulent strains of the tuberculosis bacillus and testing different culture media. They noted a glycerin-bile-potato mixture grew bacilli that seemed less virulent, and changed the course of their research to see if repeated subculturing would produce a strain that was attenuated enough to be considered for use as a vaccine. The BCG strain was isolated after subculturing 239 times during 13 years from virulent strain on glycerine potato medium. The research continued throughout World War I until 1919, when the now avirulent bacilli were unable to cause tuberculosis disease in research animals. Calmette and Guerin transferred to the Paris Pasteur Institute in 1919. The BCG vaccine was first used in humans in 1921.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "Public acceptance was slow, and the Lübeck disaster, in particular, did much to harm it. Between 1929 and 1933 in Lübeck, 251 infants were vaccinated in the first 10 days of life; 173 developed tuberculosis and 72 died. It was subsequently discovered that the BCG administered there had been contaminated with a virulent strain that was being stored in the same incubator, which led to legal action against the manufacturers of the vaccine.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "Dr. R. G. Ferguson, working at the Fort Qu'Appelle Sanatorium in Saskatchewan, was among the pioneers in developing the practice of vaccination against tuberculosis. In Canada, more than 600 children from residential schools were used as involuntary participants in BCG vaccine trials between 1933 and 1945. In 1928, BCG was adopted by the Health Committee of the League of Nations (predecessor to the World Health Organization (WHO)). Because of opposition, however, it only became widely used after World War II. From 1945 to 1948, relief organizations (International Tuberculosis Campaign or Joint Enterprises) vaccinated over eight million babies in eastern Europe and prevented the predicted typical increase of tuberculosis after a major war.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "BCG is very efficacious against tuberculous meningitis in the pediatric age group, but its efficacy against pulmonary tuberculosis appears to be variable. Some countries have removed BCG from routine vaccination. Two countries that have never used it routinely are the United States and the Netherlands (in both countries, it is felt that having a reliable Mantoux test and therefore being able to accurately detect active disease is more beneficial to society than vaccinating against a condition that is now relatively rare there).",
"title": "History"
},
{
"paragraph_id": 35,
"text": "Other names include \"Vaccin Bilié de Calmette et Guérin vaccine\" and \"Bacille de Calmette et Guérin vaccine\".",
"title": "History"
},
{
"paragraph_id": 36,
"text": "Tentative evidence exists for a beneficial non-specific effect of BCG vaccination on overall mortality in low income countries, or for its reducing other health problems including sepsis and respiratory infections when given early, with greater benefit the earlier it is used.",
"title": "Research"
},
{
"paragraph_id": 37,
"text": "In rhesus macaques, BCG shows improved rates of protection when given intravenously. Some risks must be evaluated before it can be translated to humans.",
"title": "Research"
},
{
"paragraph_id": 38,
"text": "As of 2017, BCG vaccine is in the early stages of being studied in type 1 diabetes (T1D).",
"title": "Research"
},
{
"paragraph_id": 39,
"text": "Use of the BCG vaccine may provide protection against COVID-19. However, epidemiologic observations in this respect are ambiguous. The WHO does not recommend its use for prevention as of 12 January 2021.",
"title": "Research"
},
{
"paragraph_id": 40,
"text": "As of January 2021, twenty BCG trials are in various clinical stages. As of October 2022, the results are extremely mixed. A 15-month trial involving people thrice-vaccinated over the two years before the pandemic shows positive results in preventing infection in BCG-naive people with type 1 diabetes. On the other hand, a 5-month trial shows that re-vaccinating with BCG does not help prevent infection in healthcare workers. Both of these trials were double-blind randomized controlled trials.",
"title": "Research"
}
] | Bacillus Calmette–Guérin (BCG) vaccine is a vaccine primarily used against tuberculosis (TB). It is named after its inventors Albert Calmette and Camille Guérin. In countries where tuberculosis or leprosy is common, one dose is recommended in healthy babies as soon after birth as possible. In areas where tuberculosis is not common, only children at high risk are typically immunized, while suspected cases of tuberculosis are individually tested for and treated. Adults who do not have tuberculosis and have not been previously immunized, but are frequently exposed, may be immunized, as well. BCG also has some effectiveness against Buruli ulcer infection and other nontuberculous mycobacterial infections. Additionally, it is sometimes used as part of the treatment of bladder cancer. Rates of protection against tuberculosis infection vary widely and protection lasts up to 20 years. Among children, it prevents about 20% from getting infected and among those who do get infected, it protects half from developing disease. The vaccine is given by injection into the skin. No evidence shows that additional doses are beneficial. Serious side effects are rare. Often, redness, swelling, and mild pain occur at the site of injection. A small ulcer may also form with some scarring after healing. Side effects are more common and potentially more severe in those with immunosuppression. Although no harmful effects on the fetus have been observed, there is insufficient evidence about the safety of
BCG vaccination during pregnancy, and therefore the vaccine is not recommended for use during pregnancy. The vaccine was originally developed from Mycobacterium bovis, which is commonly found in cattle. While it has been weakened, it is still live. The BCG vaccine was first used medically in 1921. It is on the World Health Organization's List of Essential Medicines. As of 2004, the vaccine is given to about 100 million children per year globally. | 2001-09-15T12:31:49Z | 2023-12-20T20:09:01Z | [
"Template:Webarchive",
"Template:Cite web",
"Template:Cite report",
"Template:Citation",
"Template:Use dmy dates",
"Template:Anchor",
"Template:Flaglist",
"Template:See also",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite news",
"Template:MeshName",
"Template:Short description",
"Template:Infobox drug",
"Template:As of",
"Template:Portal bar",
"Template:Cite journal",
"Template:Cite book",
"Template:Vaccines",
"Template:Authority control",
"Template:TOC limit",
"Template:Na",
"Template:Ya"
] | https://en.wikipedia.org/wiki/BCG_vaccine |
4,192 | Bunsen | Bunsen may refer to: | [
{
"paragraph_id": 0,
"text": "Bunsen may refer to:",
"title": ""
}
] | Bunsen may refer to: Christian Charles Josias Bunsen (1791–1860), Prussian diplomat and scholar
Frances Bunsen (1791–1876), or Baroness Bunsen, Welsh painter and author, wife of Christian Charles Josias Bunsen
Robert Bunsen (1811–1899), German chemist, after whom are named:
Bunsen burner
Bunsen cell
Bunsen crater on the Moon
10361 Bunsen, an asteroid
Bunsen reaction
The Bunsen–Kirchhoff Award, a German award for spectroscopy
Sir Maurice de Bunsen (1852–1932), British diplomat
Dr. Bunsen Honeydew, fictional character from the Muppet Show | 2023-06-29T14:56:40Z | [
"Template:Surname"
] | https://en.wikipedia.org/wiki/Bunsen |
|
4,193 | Common buzzard | The common buzzard (Buteo buteo) is a medium-to-large bird of prey which has a large range. It is a member of the genus Buteo in the family Accipitridae. The species lives in most of Europe and extends its breeding range across much of the Palearctic as far as northwestern China (Tian Shan), far western Siberia and northwestern Mongolia. Over much of its range, it is a year-round resident. However, buzzards from the colder parts of the Northern Hemisphere as well as those that breed in the eastern part of their range typically migrate south for the northern winter, many journeying as far as South Africa.
The common buzzard is an opportunistic predator that can take a wide variety of prey, but it feeds mostly on small mammals, especially rodents such as voles. It typically hunts from a perch. Like most accipitrid birds of prey, it builds a nest, typically in trees in this species, and is a devoted parent to a relatively small brood of young. The common buzzard appears to be the most common diurnal raptor in Europe, as estimates of its total global population run well into the millions.
The first formal description of the common buzzard was by the Swedish naturalist Carl Linnaeus in 1758 in the tenth edition of his Systema Naturae under the binomial name Falco buteo. The genus Buteo was introduced by the French naturalist Bernard Germain de Lacépède in 1799 by tautonymy with the specific name of this species. The word buteo is Latin for a buzzard. The common buzzard should not be confused with the turkey vulture, which is sometimes called a buzzard in American English.
The Buteoninae subfamily originated in and is most diversified in the Americas, with occasional broader radiations that led to the common buzzard and other Eurasian and African buzzards. The common buzzard is a member of the genus Buteo, a group of medium-sized raptors with robust bodies and broad wings. The Buteo species of Eurasia and Africa are usually referred to as "buzzards", while those in the Americas are called hawks. Under current classification, the genus includes approximately 28 species, making it the second most diverse of all extant accipitrid genera behind only Accipiter. DNA testing shows that the common buzzard is fairly closely related to the red-tailed hawk (Buteo jamaicensis) of North America, which occupies a similar ecological niche on that continent. The two species may belong to the same species complex. Based on genetic material, three buzzards in Africa are likely closely related to the common buzzard: the mountain buzzard (Buteo oreophilus), the forest buzzard (Buteo trizonatus) and the Madagascar buzzard (Buteo brachypterus), to the point where it has been questioned whether they are sufficiently distinct to qualify as full species. However, the distinctiveness of these African buzzards has generally been supported. Genetic studies have further indicated that the modern buzzards of Eurasia and Africa are a relatively young group, having diverged about 300,000 years ago. Nonetheless, fossils dating to more than 5 million years ago (the late Miocene) show that Buteo species were present in Europe much earlier than that would imply, although it cannot be said with certainty that these were related to the extant buzzards.
Some 16 subspecies have been described in the past and up to 11 are often considered valid, although some authorities accept as few as seven. Common buzzard subspecies fall into two groups.
The western buteo group is mainly resident or short-distance migrants and includes:
The eastern vulpinus group includes:
At one time, races of the common buzzard were thought to range as breeding birds well into the Himalayas and as far east as northeastern China, Russia to the Sea of Okhotsk, and all of the Kurile Islands and the islands of Japan, despite both the Himalayan and eastern birds showing a natural gap in distribution from the next nearest breeding common buzzards. However, DNA testing has revealed that the buzzards of these populations probably belong to different species. Most authorities now accept these buzzards as full species: the eastern buzzard (Buteo japonicus; with three subspecies of its own) and the Himalayan buzzard (Buteo refectus). Buzzards found on the islands of Cape Verde off the coast of western Africa, once referred to as the subspecies B. b. bannermani, and on Socotra Island off the northern peninsula of Arabia, once referred to as the rarely recognized subspecies B. b. socotrae, are now generally thought not to belong to the common buzzard. DNA testing has indicated that these insular buzzards are actually more closely related to the long-legged buzzard (Buteo rufinus) than to the common buzzard. Subsequently, some researchers have advocated full species status for the Cape Verde population, but the placement of these buzzards is generally deemed unclear.
The common buzzard is a medium to large-sized raptor that is highly variable in plumage. Most buzzards are distinctly round-headed with a somewhat slender bill, relatively long wings that either reach or fall slightly short of the tail tip when perched, a fairly short tail, and somewhat short and mainly bare tarsi. They can appear fairly compact overall but may also appear large relative to other common raptorial birds such as kestrels and sparrowhawks. The common buzzard measures between 40 and 58 cm (16 and 23 in) in length with a 109–140 cm (43–55 in) wingspan. Females average about 2–7% larger than males linearly and weigh about 15% more. Body mass can show considerable variation. Buzzards from Great Britain alone can vary from 427 to 1,183 g (0.941 to 2.608 lb) in males, while females there can range from 486 to 1,370 g (1.071 to 3.020 lb).
In Europe, most typical buzzards are dark brown above and on the upperside of the head and mantle, but can become paler and warmer brown with worn plumage. The flight feathers on perched European buzzards are always brown in the nominate subspecies (B. b. buteo). Usually the tail is narrowly barred grey-brown and dark brown with a pale tip and a broad dark subterminal band, but the tail in the palest birds can show a varying amount of white and a reduced subterminal band, or even appear almost all white. In European buzzards, the underside coloring can be variable but most typically shows a brown-streaked white throat with a somewhat darker chest. A pale U across the breast is often present, followed by a pale line running down the belly which separates the dark areas on the breast-sides and flanks. These pale areas have highly variable markings that tend to form irregular bars. Juvenile buzzards are quite similar to adults in the nominate race, being best told apart by having a paler eye, a narrower subterminal band on the tail and underside markings that appear as streaks rather than bars. Furthermore, juveniles may show variable creamy to rufous fringes to the upperwing coverts, but these also may not be present. Seen from below in flight, buzzards in Europe typically have a dark trailing edge to the wings. If seen from above, one of the best marks is their broad dark subterminal tail band. Flight feathers of typical European buzzards are largely greyish, with dark wing linings at the front and a contrasting paler band along the median coverts. In flight, paler individuals tend to show dark carpal patches that can appear as blackish arches or commas, but these may be indistinct in darker individuals or can appear light brownish or faded in paler individuals. Juvenile nominate buzzards are best told apart from adults in flight by the lack of a distinct subterminal band (instead showing fairly even barring throughout) and, from below, by having a less sharp, brownish rather than blackish trailing wing edge. Juvenile buzzards show streaking on the paler parts of the underwing and body rather than the barring shown by adults. Beyond the typical mid-range brownish buzzard, birds in Europe can range from almost uniform black-brown above to mainly white. Extreme dark individuals may range from chocolate brown to blackish with almost no pale showing but a variable, faded U on the breast and with or without faint lighter brown throat streaks. Extreme pale birds are largely whitish with variable, widely spaced streaks or arrowheads of light brown about the mid-chest and flanks and may or may not show dark feather-centres on the head, wing-coverts and sometimes all but part of the mantle. Individuals can show nearly endless variation of colours and hues in between these extremes, and the common buzzard is counted among the most variably plumaged diurnal raptors for this reason. One study showed that this variation may actually be the result of diminished single-locus genetic diversity.
Beyond the nominate form (B. b. buteo) that occupies most of the common buzzard's European range, a second main, widely distributed subspecies is known as the steppe buzzard (B. b. vulpinus). The steppe buzzard race shows three main colour morphs, each of which can be predominant in a region of the breeding range. It is more distinctly polymorphic rather than just individually very variable like the nominate race. This may be because, unlike the nominate buzzard, the steppe buzzard is highly migratory, and polymorphism has been linked with migratory behaviour. The most common type of steppe buzzard is the rufous morph which gives this subspecies its scientific name (vulpes is Latin for "fox"). This morph comprises a majority of birds seen in passage east of the Mediterranean. Rufous morph buzzards are a paler grey-brown above than most nominate B. b. buteo. Compared to the nominate race, rufous vulpinus show similar patterning but are generally far more rufous-toned on the head, the fringes of the mantle and wing coverts and, especially, the tail and underside. The head is usually grey-brown with rufous tinges, while the tail is rufous and can vary from almost unmarked to thinly dark-barred with a subterminal band. The underside can be uniformly pale to dark rufous, barred heavily or lightly with rufous or with dusky barring, usually with darker individuals showing the U as in the nominate race but with a rufous hue. The pale morph of the steppe buzzard is commonest in the west of this subspecies' range, predominantly seen in winter and migration at the various land bridges of the Mediterranean. As in the rufous morph, the pale morph vulpinus is grey-brown above, but the tail is generally marked with thin dark bars and a subterminal band, only showing rufous near the tip. The underside in the pale morph is greyish-white with a dark grey-brown or somewhat streaked head to chest and barred belly and chest, occasionally showing darker flanks that can be somewhat rufous. Dark morph vulpinus tend to be found in the east and southeast of the subspecies' range and are easily outnumbered by the rufous morph while largely using similar migration points. Dark morph individuals vary from grey-brown to much darker blackish-brown, and have a tail that is dark grey or somewhat mixed grey and rufous, is distinctly marked with dark barring and has a broad, black subterminal band. Dark morph vulpinus have a head and underside that is mostly uniform dark, from dark brown to blackish-brown to almost pure black. Rufous morph juveniles are often distinctly paler in ground colour (ranging even to creamy-grey) than adults, with the distinct barring below actually increased in pale-morph-type juveniles. Pale and rufous morph juveniles can only be distinguished from each other in extreme cases. Dark morph juveniles are more similar to adult dark morph vulpinus but often show a little whitish streaking below and, like all other races, have lighter coloured eyes and more evenly barred tails than adults. Steppe buzzards tend to appear smaller and more agile in flight than the nominate race, whose wing beats can look slower and clumsier. In flight, rufous morph vulpinus have the whole body and underwing varying from uniform to patterned rufous (if present, the patterning is variable, but can be on the chest and often the thighs, sometimes the flanks, with a pale band across the median coverts), while the undertail is usually paler rufous than the upperside.
Whitish flight feathers are more prominent than in the nominate race and contrast more markedly with the bold dark brown band along the trailing edges. Markings of pale vulpinus seen in flight are similar to those of the rufous morph (such as paler wing markings) but are more greyish on both wings and body. In dark morph vulpinus, the broad black trailing edges and the colour of the body make the whitish areas of the inner wing stand out further, with an often bolder and blacker carpal patch than in other morphs. As in the nominate race, juvenile vulpinus (rufous/pale) tend to have much less distinct trailing edges and general streaking on the body and along the median underwing coverts. Dark morph juveniles resemble adults in flight more than those of the other morphs do.
The common buzzard is often confused with other raptors, especially in flight or at a distance. Inexperienced and over-enthusiastic observers have even mistaken dark birds for the far larger and differently proportioned golden eagle (Aquila chrysaetos), or for the western marsh harrier (Circus aeruginosus), which also flies in a dihedral but is much longer and more slender in wing and tail and has far different flying methods. Buzzards may also possibly be confused with dark or light morph booted eagles (Hieraaetus pennatus), which are similar in size, but the eagle flies on level, parallel-edged wings which usually appear broader, and has a longer, squarer tail, with no carpal patch in pale birds and all-dark flight feathers except for a whitish wedge on the inner primaries in dark morph birds. Pale individuals are sometimes also mistaken for pale morph short-toed eagles (Circaetus gallicus), which are much larger with a considerably bigger head, longer wings (which are usually held evenly in flight rather than in a dihedral) and a paler underwing lacking any carpal patch or dark wing lining. More serious identification concerns involve other Buteo species and, in flight, honey buzzards, which look quite different when seen perched at close range. The European honey buzzard (Pernis apivorus) is thought to engage in mimicry of more powerful raptors; in particular, juveniles may mimic the plumage of the more powerful common buzzard. While less individually variable in Europe, the honey buzzard is more extensively polymorphic on the underparts than even the common buzzard. The most common morph of the adult European honey buzzard is heavily barred rufous on the underside, quite different from the common buzzard; however, the brownish juvenile much more closely resembles an intermediate common buzzard. Honey buzzards flap with distinctively slower and more even wing beats than common buzzards. The wings are also lifted higher on each upstroke, creating a more regular and mechanical effect; furthermore, their wings are held slightly arched when soaring, but not in a V. On the honey buzzard, the head appears smaller, the body thinner, the tail longer and the wings narrower and more parallel-edged. The steppe buzzard race is particularly often mistaken for juvenile European honey buzzards, to the point where early observers of raptor migration in Israel considered distant individuals indistinguishable. However, when compared to a steppe buzzard, the honey buzzard has distinctly darker secondaries on the underwing with fewer and broader bars and more extensive black wing-tips (whole fingers) contrasting with a less extensively pale hand. Found in the same range as the steppe buzzard in some parts of southern Siberia, as well as alongside wintering steppe buzzards in southwestern India, the Oriental honey buzzard (Pernis ptilorhynchus) is larger than both the European honey buzzard and the common buzzard. The oriental species is more similar in body plan to common buzzards, being relatively broader-winged, shorter-tailed and more amply-headed (though the head is still relatively small) than the European honey buzzard, but all plumages lack carpal patches.
In much of Europe, the common buzzard is the only type of buzzard. However, the subarctic-breeding rough-legged buzzard (Buteo lagopus) comes down to occupy much of the northern part of the continent during winter, in the same haunts as the common buzzard. The rough-legged buzzard is typically larger and distinctly longer-winged, with feathered legs, as well as a white-based tail with a broad subterminal band. Rough-legged buzzards have slower wing beats and hover far more frequently than do common buzzards. The carpal patch markings on the underwing are also bolder and blacker in all paler forms of the rough-legged buzzard. Many pale morph rough-legged buzzards have a bold, blackish band across the belly against contrasting paler feathers, a feature which rarely appears in individual common buzzards. Usually the face also appears somewhat whitish in most pale morphs of rough-legged buzzards, which is true of only extremely pale common buzzards. Dark morph rough-legged buzzards are usually distinctly darker (ranging to almost blackish) than even extreme dark individuals of common buzzards in Europe and still have the distinct white-based tail and broad subterminal band of other roughlegs. In eastern Europe and much of the Asian range of common buzzards, the long-legged buzzard (Buteo rufinus) may live alongside the common species. As in the steppe buzzard race, the long-legged buzzard has three main colour morphs that are more or less similar in hue. In both the steppe buzzard race and the long-legged buzzard, the main colour is overall fairly rufous. More so than steppe buzzards, long-legged buzzards tend to have a distinctly paler head and neck compared to the other feathers and, more distinctly, a normally unbarred tail. Furthermore, the long-legged buzzard is usually a rather larger bird, often considered fairly eagle-like in appearance (although it does appear gracile and small-billed even compared to smaller true eagles), an effect enhanced by its longer tarsi, somewhat longer neck and relatively elongated wings. The flight style of the latter species is deeper, slower and more aquiline, with much more frequent hovering, showing a more protruding head and a slightly higher V held in a soar. The smaller North African and Arabian race of the long-legged buzzard (B. r. cirtensis) is more similar in size and nearly all colour characteristics to the steppe buzzard, extending to the heavily streaked juvenile plumage; in some cases such birds can be distinguished only by their proportions and flight patterns, which remain unchanged. Hybridization between the latter race (B. r. cirtensis) and nominate common buzzards has been observed in the Strait of Gibraltar, and a few such birds have been reported potentially in the southern Mediterranean as the ranges mutually encroach and blur, possibly due to climate change.
Steppe buzzards may live alongside mountain buzzards and especially forest buzzards while wintering in Africa. The juveniles of steppe and forest buzzards are more or less indistinguishable and only told apart by proportions and flight style, the latter species being smaller and more compact, with a smaller bill, shorter legs and shorter and thinner wings than a steppe buzzard. However, size is not diagnostic unless the two are side by side, as the two buzzards overlap in this regard. Most reliable are the species' wing proportions and flight actions. Forest buzzards have more flexible wing beats interspersed with glides, additionally soar on flatter wings and apparently never engage in hovering. Adult forest buzzards are also similar to the typical adult steppe buzzard (rufous morph), but the forest buzzard typically has a whiter underside, sometimes mostly plain white, usually with heavy blotches or drop-shaped marks on the abdomen, barring on the thighs, narrower tear-shaped marks on the chest and more spotting on the leading edges of the underwing, usually lacking markings on the white U across the chest (which is otherwise similar to, but usually broader than, that of vulpinus). In comparison, the mountain buzzard, which is more similar in size to the steppe buzzard and slightly larger than the forest buzzard, is usually duller brown above than a steppe buzzard and is more whitish below with distinctive heavy brown blotches from the breast to the belly, flanks and wing linings, while the juvenile mountain buzzard is buffy below with smaller and streakier markings. Another African species, the red-necked buzzard (Buteo auguralis), has a red tail similar to that of vulpinus, but is distinct in all other plumage aspects despite their similar size. The latter buzzard has a streaky rufous head and is white below with a contrasting bold dark chest in adult plumage and, in juvenile plumage, has heavy, dark blotches on the chest and flanks with pale wing-linings. Jackal and augur buzzards (Buteo rufofuscus & augur), also both rufous on the tail, are larger and bulkier than steppe buzzards and have several distinctive plumage characteristics, most notably both having their own striking, contrasting patterns of black-brown, rufous and cream.
The common buzzard is found on several islands in the eastern Atlantic, including the Canary Islands and the Azores, and almost throughout Europe. It is today found in Ireland and in nearly every part of Scotland, Wales and England. In mainland Europe, remarkably, there are no substantial gaps without breeding common buzzards from Portugal and Spain to Greece, Estonia, Belarus and Ukraine, though they are present mainly in the breeding season in much of the eastern half of the latter three countries. They are also present on all the larger Mediterranean islands, such as Corsica, Sardinia, Sicily and Crete. Further north in Scandinavia, they are found mainly in southeastern Norway (though also at some points in southwestern Norway close to the coast and one section north of Trondheim), over just the southern half of Sweden, and around the Gulf of Bothnia to Finland, where they live as a breeding species over nearly two-thirds of the land.
The common buzzard reaches its northern limits as a breeder in far eastern Finland and over the border into European Russia, continuing as a breeder over to the narrowest straits of the White Sea and nearly to the Kola Peninsula. In these northern quarters, the common buzzard is typically present only in summer, but it is a year-round resident of a substantial part of southern Sweden and some of southern Norway. Outside of Europe, it is a resident of northern Turkey (largely close to the Black Sea), otherwise occurring mainly as a passage migrant or winter visitor in the remainder of Turkey, in Georgia, sporadically but not rarely in Azerbaijan and Armenia, and from northern Iran (largely hugging the Caspian Sea) to northern Turkmenistan. Further north, though it is absent from either side of the northern Caspian Sea, the common buzzard is found in much of western Russia (though exclusively as a breeder), including all of the Central Federal District and the Volga Federal District, all but the northernmost parts of the Northwestern and Ural Federal Districts and nearly the southern half of the Siberian Federal District, its farthest easterly occurrence as a breeder. It is also found in northern Kazakhstan, Kyrgyzstan, far northwestern China (Tien Shan) and northwestern Mongolia.
Non-breeding populations occur, either as migrants or wintering birds, in southwestern India, Israel, Lebanon, Syria, Egypt (northeastern), northern Tunisia (and far northwestern Algeria), northern Morocco, near the coasts of The Gambia, Senegal and far southwestern Mauritania and Ivory Coast (and bordering Burkina Faso). In eastern and central Africa, it is found in winter from southeastern Sudan, Eritrea, about two-thirds of Ethiopia, much of Kenya (though apparently absent from the northeast and northwest), Uganda, southern and eastern Democratic Republic of the Congo, and more or less the entirety of southern Africa from Angola across to Tanzania down the remainder of the continent (but for an apparent gap along the coast from southwestern Angola to northwestern South Africa).
The common buzzard generally inhabits the interface of woodlands and open grounds; most typically the species lives in forest edge, small woods or shelterbelts with adjacent grassland, arable fields or other farmland. It adapts to open moorland as long as there are some trees for perch hunting and nesting. The woods they inhabit may be coniferous, temperate broadleaf and mixed forests or temperate deciduous forest, with occasional preferences for the locally dominant tree. It is absent from treeless tundra, as well as the Subarctic, where the species almost entirely gives way to the rough-legged buzzard. The common buzzard is sporadic or rare in treeless steppe but can occasionally migrate through it (despite its name, the steppe buzzard subspecies breeds primarily in the wooded fringes of the steppe). The species may be found to some extent in both mountainous and flat country. Although adaptable to and sometimes seen in wetlands and coastal areas, buzzards are often considered more of an upland species and appear neither to be regularly attracted to nor to strongly avoid bodies of water outside of migration. Buzzards in well-wooded areas of eastern Poland largely used large, mature stands of trees that were more humid, richer and denser than those prevalent in the surrounding area, but showed a preference for stands within 30 to 90 m (98 to 295 ft) of openings. Mostly, resident buzzards live in lowlands and foothills, but they can live on timbered ridges and uplands as well as rocky coasts, sometimes nesting on cliff ledges rather than trees. Buzzards may live from sea level to elevations of 2,000 m (6,600 ft), breeding mostly below 1,000 m (3,300 ft), but they can winter at elevations up to 2,500 m (8,200 ft) and migrate easily at up to 4,500 m (14,800 ft). In the mountainous Italian Apennines, buzzard nests were at a mean elevation of 1,399 m (4,590 ft) and were, relative to the surrounding area, further from human-developed areas (i.e. roads) and nearer to valley bottoms in rugged, irregularly topographed places, especially ones that faced northeast. Common buzzards are fairly adaptable to agricultural lands but can show regional declines in apparent response to agriculture. Changes to more extensive agricultural practices were shown to reduce buzzard populations in western France, where reduction of "hedgerows, woodlots and grasslands areas" caused a decline of buzzards, and in Hampshire, England, where more extensive grazing by free-range cattle and horses led to declines of buzzards, probably largely due to the seeming reduction of small mammal populations there. On the contrary, buzzards in central Poland adapted to removal of pine trees and reduction of rodent prey by changing nest sites and prey for a time, with no strong change in their local numbers. Extensive urbanization seems to negatively affect buzzards, this species being generally less adaptable to urban areas than its New World counterpart, the red-tailed hawk. Although peri-urban areas can actually increase potential prey populations in a location at times, individual buzzard mortality, nest disturbances and nest site habitat degradation rise significantly in such areas. Common buzzards adapt fairly well to rural areas, as well as to suburban areas with parks and large gardens, especially those near farms.
The common buzzard is a typical Buteo in much of its behaviour. It is most often seen either soaring at varying heights or perched prominently on tree tops, bare branches, telegraph poles, fence posts, rocks or ledges, or alternately well inside tree canopies. Buzzards will also stand and forage on the ground. In resident populations, it may spend more than half of its day inactively perched. Furthermore, it has been described as a "sluggish and not very bold" bird of prey. It is a gifted soarer once aloft and can soar for extended periods, but can appear laborious and heavy in level flight, more so in nominate buzzards than steppe buzzards. Particularly in migration, as was recorded in the case of steppe buzzards' movement over Israel, buzzards readily adjust their direction, tail and wing placement and flying height to suit the surrounding environment and wind conditions. In Israel, migrant buzzards rarely soar all that high (maximum 1,000–2,000 m (3,300–6,600 ft) above ground) due to the lack of mountain ridges that in other areas typically produce flyways; however, tail-winds are significant and allow birds to travel at a mean speed of 9.8 metres per second (22 miles per hour).
The common buzzard is aptly described as a partial migrant. The autumn and spring movements of buzzards are subject to extensive variation, even down to the individual level, based on a region's food resources, competition (both from other buzzards and other predators), extent of human disturbance and weather conditions. Short-distance movements are the norm for juveniles and some adults in autumn and winter, but more adults in central Europe and the British Isles remain on their year-round residences than do not. Even for first-year juvenile buzzards, dispersal may not take them very far. In England, 96% of first-years moved in winter to less than 100 km (62 mi) from their natal site. Southwestern Poland was recorded to be a fairly important wintering ground into early spring for central European buzzards that apparently travelled from somewhat farther north; in winter, the average density there was a locally high 2.12 individuals per square kilometer. Habitat and prey availability seemed to be the primary drivers of site selection in fall for European buzzards. In northern Germany, buzzards were recorded to show preferences in fall for areas fairly distant from their nesting sites, with a large quantity of vole-holes and more widely dispersed perches. In Bulgaria, the mean wintering density was 0.34 individuals per square kilometer, and buzzards showed a preference for agricultural over forested areas. Similar habitat preferences were recorded in northeastern Romania, where buzzard density was 0.334–0.539 individuals per square kilometer. The nominate buzzards of Scandinavia are somewhat more strongly migratory than most central European populations. However, birds from Sweden show some variation in migratory behaviours. A maximum of 41,000 individuals have been recorded at Falsterbo, one of the main migration sites in southern Sweden. In southern Sweden, winter movements and migration were studied via observation of buzzard colour morphs. White individuals were substantially more common in southern Sweden than further north in the Swedish range. In the southern population, white buzzards migrate earlier than intermediate to dark buzzards, among both adults and juveniles. A larger proportion of juveniles than of adults migrate in the southern population; adults there, especially, are resident to a higher degree than more northerly breeders.
The entire population of the steppe buzzard is strongly migratory, covering substantial distances during migration. In no part of the range do steppe buzzards use the same summering and wintering grounds. Steppe buzzards are slightly gregarious in migration, and travel in variously sized flocks. This race migrates in September to October, often covering the distance from Asia Minor to the Cape of Africa in about a month, but does not cross water, going around the Winam Gulf of Lake Victoria rather than crossing the several-kilometre-wide gulf. Similarly, they will funnel along both sides of the Black Sea. The migratory behavior of steppe buzzards mirrors that of the broad-winged and Swainson's hawks (Buteo platypterus & B. swainsoni), similar long-distance migrating Buteos, in every significant way, including trans-equatorial movements, avoidance of large bodies of water and flocking behaviour. Migrating steppe buzzards will rise up with the morning thermals and can cover an average of hundreds of miles a day using the available currents along mountain ridges and other topographic features. The spring migration for steppe buzzards peaks around March–April, but the latest vulpinus arrive in their breeding grounds by late April or early May. Distances covered by migrating steppe buzzards in one-way flights from northern Europe (i.e. Finland or Sweden) to southern Africa have ranged over 13,000 km (8,100 mi) within a season. For the steppe buzzards from eastern and northern Europe and western Russia (which comprise a majority of all steppe buzzards), peak migratory numbers occur in differing areas: in autumn, the largest recorded movements occur through Asia Minor, such as Turkey, while in spring the largest recorded movements are farther south in the Middle East, especially Israel. The two migratory movements barely differ overall until they reach the Middle East and east Africa, where the largest volume of migrants in autumn occurs at the southern part of the Red Sea, around Djibouti and Yemen, while the main volume in spring is in the northernmost strait, around Egypt and Israel. In autumn, numbers of steppe buzzards recorded in migration have ranged up to 32,000 (recorded 1971) in northwestern Turkey (Bosporus) and up to 205,000 (recorded 1976) in northeastern Turkey (Black Sea). Further down in migration, autumn numbers of up to 98,000 have been recorded in passage in Djibouti. Between 150,000 and nearly 466,000 steppe buzzards have been recorded migrating through Israel during spring, making this not only the most abundant migratory raptor there but also one of the largest raptor migrations anywhere in the world. Migratory movements of southern African buzzards largely occur along the major mountain ranges, such as the Drakensberg and Lebombo Mountains. Wintering steppe buzzards occur far more irregularly in the Transvaal than in the Cape region. The onset of migratory movement for steppe buzzards back to the breeding grounds from southern Africa is mainly in March, peaking in the second week. Steppe buzzards molt their feathers rapidly upon arrival at the wintering grounds and seem to split their flight feather molt between the breeding grounds in Eurasia and the wintering grounds in southern Africa, the molt pausing during migration. In the last 50 years, nominate buzzards have typically been migrating shorter distances and wintering further north, possibly in response to climate change, resulting in relatively smaller numbers of them at migration sites.
They are also extending their breeding range, possibly reducing or supplanting steppe buzzards.
Resident populations of common buzzards tend to vocalize all year round, whereas migrants tend to vocalize only during the breeding season. Both nominate buzzards and steppe buzzards (and their numerous related subspecies within their types) tend to have similar voices. The main call of the species is a plaintive, far-carrying pee-yow or peee-oo, used as both a contact call and, more excitedly, in aerial displays. The call is sharper and more ringing when used in aggression, more drawn-out and wavering when chasing intruders, sharper and more yelping as a warning when intruders approach the nest, and shorter and more explosive in alarm. Other variations of their vocal performances include a cat-like mew, uttered repeatedly on the wing or when perched, especially in display; a repeated mah has been recorded from pairs answering each other, and chuckles and croaks have also been recorded at nests. Juveniles can usually be distinguished by the discordant nature of their calls compared to those of adults.
The common buzzard is a generalist predator which hunts a wide variety of prey given the opportunity. Their prey spectrum extends to a wide variety of vertebrates, including mammals, birds (of any age, from eggs to adult birds), reptiles, amphibians and, rarely, fish, as well as to various invertebrates, mostly insects. Young animals are often attacked, largely the nidifugous young of various vertebrates. In total, well over 300 prey species are known to be taken by common buzzards. Furthermore, prey size can vary from tiny beetles, caterpillars and ants to large adult grouse and rabbits up to nearly twice their body mass. Mean body mass of vertebrate prey was estimated at 179.6 g (6.34 oz) in Belarus. At times, they will also subsist partially on carrion, usually of dead mammals or fish. However, dietary studies have shown that they mostly prey upon small mammals, largely small rodents. As for many temperate-zone raptorial birds of varied lineages, voles are an essential part of the common buzzard's diet. This bird's preference for the interface between woods and open areas frequently puts it in ideal vole habitat. Hunting in relatively open areas has been found to increase hunting success, whereas more complete shrub cover lowered success. A majority of prey is taken by dropping from a perch, and is normally taken on the ground. Alternately, prey may be hunted in a low flight. This species tends not to hunt in a spectacular stoop but generally drops gently and then gradually accelerates at the bottom, with wings held above the back. Sometimes, the buzzard also forages by random glides or soars over open country, wood edges or clearings. Perch hunting may be done preferentially, but buzzards fairly regularly also hunt from a ground position when the habitat demands it. Outside the breeding season, as many as 15–30 buzzards have been recorded foraging on the ground in a single large field, especially juveniles. Normally the rarest foraging type is hovering. A study from Great Britain indicated that hovering does not seem to increase hunting success.
A high diversity of rodents may be taken given the chance, as around 60 species of rodent have been recorded in the foods of common buzzards. It seems clear that voles are the most significant prey type for European buzzards. Nearly every study from the continent makes reference to the importance, in particular, of the two most numerous and widely distributed European voles: the 28.5 g (1.01 oz) common vole (Microtus arvalis) and the somewhat more northerly ranging 40 g (1.4 oz) field vole (Microtus agrestis). In southern Scotland, field voles were the best-represented species in pellets, accounting for 32.1% of 581 pellets. In southern Norway, field voles were again the main food in years with peak vole numbers, accounting for 40.8% of 179 prey items in 1985 and 24.7% of 332 prey items in 1994. Altogether, rodents amount to 67.6% and 58.4% of the foods in these respective peak vole years. However, in low vole population years, the contribution of rodents to the diet was minor. As far west as the Netherlands, common voles were the most regular prey, amounting to 19.6% of 6624 prey items in a very large study. Common voles were the main foods recorded in central Slovakia, accounting for 26.5% of 606 prey items. The common vole, or other related vole species at times, were the main foods as well in Ukraine (17.2% of 146 prey items) ranging east to Russia in the Privolshky Steppe Nature Reserve (41.8% of 74 prey items) and in Samara (21.4% of 183 prey items). Other records from Russia and Ukraine show voles ranging from slightly secondary prey to as much as 42.2% of the diet. In Belarus, voles, including Microtus species and 18.4 g (0.65 oz) bank voles (Myodes glareolus), accounted for 34.8% of the biomass on average in 1065 prey items from different study areas over 4 years. At least 12 species of the genus Microtus are known to be hunted by common buzzards and even this is probably conservative, moreover similar species like lemmings will be taken if available.
Other rodents are taken largely opportunistically rather than by preference. Several wood mice (Apodemus spp.) are known to be taken quite frequently but, given their preference for activity in deeper woods rather than the field-forest interfaces that buzzards prefer, they are rarely more than secondary food items. An exception was in Samara, where the yellow-necked mouse (Apodemus flavicollis), one of the largest of its genus at 28.4 g (1.00 oz), made up 20.9% of the diet, putting it just behind the common vole in importance. Similarly, tree squirrels are readily taken but rarely important in the foods of buzzards in Europe, as buzzards apparently prefer to avoid taking prey from trees, nor do they possess the agility typically necessary to capture tree squirrels in significant quantities. All four ground squirrels that range (mostly) into eastern Europe are also known to be common buzzard prey, but little quantitative analysis has gone into how significant such predator-prey relations are. Rodent prey taken has ranged in size from the 7.8 g (0.28 oz) Eurasian harvest mouse (Micromys minutus) to the non-native, 1,100 g (2.4 lb) muskrat (Ondatra zibethicus). Other rodents taken either seldom or in areas where the food habits of buzzards are spottily known include flying squirrels, marmots (presumably very young if taken alive), chipmunks, spiny rats, hamsters, mole-rats, gerbils, jirds and jerboas, and occasionally good numbers of dormice, although these are nocturnal. Surprisingly little research has gone into the diets of wintering steppe buzzards in southern Africa, considering how numerous they are there. However, it has been indicated that the main prey remains consist of rodents such as the four-striped grass mouse (Rhabdomys pumilio) and Cape mole-rats (Georychus capensis).
Other than rodents, two other groups of mammals can be counted as significant to the diet of common buzzards. One of these prey types of import is the leporids or lagomorphs, especially the European rabbit (Oryctolagus cuniculus) where it is found in numbers in a wild or feral state. In all dietary studies from Scotland, rabbits were highly important to the buzzard's diet. In southern Scotland, rabbits constituted 40.8% of remains at nests and 21.6% of pellet contents, while lagomorphs (mainly rabbits but also some young hares) were present in 99% of remains in Moray, Scotland. Their nutritional richness relative to the commonest prey elsewhere, such as voles, might account for the high productivity of buzzards here. For example, clutch sizes were twice as large on average where rabbits were common (Moray) as where they were rare (Glen Urquhart). In northern Ireland, an area of interest because it is devoid of any native vole species, rabbits were again the main prey. Here, lagomorphs constituted 22.5% of prey items by number and 43.7% by biomass. While rabbits are non-native, albeit long-established, in the British Isles, in their native area of the Iberian Peninsula rabbits are similarly significant in the buzzard's diet. In Murcia, Spain, rabbits were the most common mammal in the diet, making up 16.8% of 167 prey items. In a large study from northeastern Spain, rabbits were dominant in the buzzard's foods, making up 66.5% of 598 prey items. In the Netherlands, European rabbits were second in number (19.1% of 6624 prey items) only to common voles and the largest contributor of biomass to nests (36.7%). Outside of these (at least historically) rabbit-rich areas, leverets of the common hare species found in Europe can be important supplemental prey. The European hare (Lepus europaeus) was the fourth most important prey species in central Poland and the third most significant prey species in Stavropol Krai, Russia. Buzzards normally attack the young of European rabbits and hares. Most of the rabbits taken by buzzards have variously been estimated at 159 to 550 g (5.6 to 19.4 oz), and infrequently up to 700 g (1.5 lb), in weight. Similarly, the mean weight of brown hares taken in Finland was around 500 g (1.1 lb). One young mountain hare (Lepus timidus) taken in Norway was estimated at about 1,000 g (2.2 lb). However, common buzzards have the physical ability to kill adult rabbits. This is supported by remains of relatively large rabbit tarsus bones, up to 64 mm in length, suggesting that prime adult rabbits weighing up to 1,600 g (3.5 lb) can be preyed upon.
The other significant mammalian prey type is the insectivores, of which more than 20 species are known to be taken, including nearly all the shrews, moles and hedgehogs found in Europe. Moles are taken particularly often among this order since, as is the case with vole-holes, buzzards probably tend to watch molehills in fields for activity and dive quickly from their perch when one of the subterranean mammals pops up. The most widely found mole in the buzzard's northern range is the 98 g (3.5 oz) European mole (Talpa europaea), and this is one of the more important non-rodent prey items for the species. This species was present in 55% of 101 remains in Glen Urquhart, Scotland, and was the second most common prey species (18.6%) in 606 prey items in Slovakia. In Bari, Italy, the Roman mole (Talpa romana), of similar size to the European species, was the leading identified mammalian prey, making up 10.7% of the diet. The full size range of insectivores may be taken by buzzards, from the world's smallest mammal (by weight), the 1.8 g (0.063 oz) Etruscan shrew (Suncus etruscus), to arguably the heaviest insectivore, the 800 g (28 oz) European hedgehog (Erinaceus europaeus). Mammalian prey for common buzzards other than rodents, insectivores and lagomorphs is rarely taken. Occasionally, some weasels such as the least weasel (Mustela nivalis) and stoat (Mustela erminea) are taken, and remains of young pine martens (Martes martes) and adult European polecats (Mustela putorius) have been found in buzzard nests. Numerous larger mammals, including medium-sized carnivores such as dogs, cats and foxes and various ungulates, are sometimes eaten as carrion by buzzards, mainly during lean winter months. Stillborn deer are also visited with some frequency.
When attacking birds, common buzzards chiefly prey on nestlings and fledglings of small to medium-sized birds, largely passerines but also a variety of gamebirds, and sometimes also take injured, sickly or unwary but healthy adults. While capable of overpowering birds larger than itself, the common buzzard is usually considered to lack the agility necessary to capture many adult birds, even gamebirds, which would presumably be weaker fliers considering their relatively heavy bodies and small wings. The proportion of fledglings and younger birds preyed upon relative to adults is variable, however. For example, in the Italian Alps, 72% of birds taken were fledglings or recently fledged juveniles, 19% were nestlings and 8% were adults. On the contrary, in southern Scotland, even though the buzzards were taking relatively large bird prey, largely red grouse (Lagopus lagopus scotica), 87% of birds taken were reportedly adults. In total, as in many raptorial birds that are far from bird-hunting specialists, birds are the most diverse group in the buzzard's prey spectrum due to the sheer number and diversity of birds; few raptors do not hunt them at least occasionally. Nearly 150 species of bird have been identified in the common buzzard's diet. In general, despite the many taken, birds usually occupy a secondary position in the diet after mammals. In northern Scotland, birds were fairly numerous in the foods of buzzards. The most often recorded avian prey, and the 2nd and 3rd most frequent prey species (after only field voles) in Glen Urquhart, were the 23.9 g (0.84 oz) chaffinch (Fringilla coelebs) and the 18.4 g (0.65 oz) meadow pipit (Anthus pratensis), with the buzzards taking 195 fledglings of these species against only 90 adults. This differed from Moray, where the most frequent avian prey, and the 2nd most frequent prey species behind the rabbit, was the 480 g (17 oz) common wood pigeon (Columba palumbus), and the buzzards took four times as many adults as fledglings.
Birds were the primary food for common buzzards in the Italian Alps, where they made up 46% of the diet against mammals, which accounted for 29% of 146 prey items. The leading prey species here were the 103 g (3.6 oz) Eurasian blackbird (Turdus merula) and the 160 g (5.6 oz) Eurasian jay (Garrulus glandarius), albeit largely as fledglings of both. Birds could also take the leading position in years with low vole populations in southern Norway, in particular thrushes, namely the blackbird, the 67.7 g (2.39 oz) song thrush (Turdus philomelos) and the 61 g (2.2 oz) redwing (Turdus iliacus), which were collectively 22.1% of 244 prey items in 1993. In southern Spain, birds were equal in number to mammals in the diet, both at 38.3%, but most remains were classified as "unidentified medium-sized birds", although the most often identified species of those that apparently could be determined were Eurasian jays and red-legged partridges (Alectoris rufa). Similarly, in northern Ireland, birds were roughly equal in import to mammals, but most were unidentified corvids. In Seversky Donets, Ukraine, birds and mammals both made up 39.3% of the foods of buzzards. Common buzzards may hunt nearly 80 species of passerines and nearly all available gamebirds. As for many other largish raptors, gamebirds are attractive prey for buzzards due to their ground-dwelling habits. Buzzards were the most frequent predator in a study of juvenile pheasants in England, accounting for 4.3% of 725 deaths (against 3.2% by foxes, 0.7% by owls and 0.5% by other mammals). They also prey on a wide size range of birds, ranging down to Europe's smallest bird, the 5.2 g (0.18 oz) goldcrest (Regulus regulus). Very few individual birds hunted by buzzards weigh more than 500 g (1.1 lb). However, there have been some particularly large avian kills by buzzards, weighing 1,000 g (2.2 lb) or more, or about the largest average size of a buzzard, including adults of mallard (Anas platyrhynchos), black grouse (Tetrao tetrix), ring-necked pheasant (Phasianus colchicus), common raven (Corvus corax) and some of the larger gulls if ambushed on their nests. The largest avian kill by a buzzard, and possibly the largest known prey overall for the species, was an adult female western capercaillie (Tetrao urogallus) that weighed an estimated 1,985 g (4.376 lb). At times, buzzards will hunt the young of large birds such as herons and cranes. Other assorted avian prey has included a few species of waterfowl, most available pigeons and doves, cuckoos, swifts, grebes, rails, nearly 20 assorted shorebirds, tubenoses, hoopoes, bee-eaters and several types of woodpecker. Birds with more conspicuous or open nesting areas or habits, such as water birds, are more likely to have fledglings or nestlings attacked, while for those with more secluded or inaccessible nests, such as pigeons, doves and woodpeckers, adults are more likely to be hunted.
The common buzzard may be the most regular avian predator of reptiles and amphibians in Europe, apart from the areas where it is sympatric with the largely snake-eating short-toed eagle. In total, the prey spectrum of common buzzards includes nearly 50 herpetological prey species. In studies from northern and southern Spain, the leading prey numerically were both reptilian, although in Biscay (northern Spain) the leading prey (19%) was classified as "unidentified snakes". In Murcia, the most numerous prey was the 77.2 g (2.72 oz) ocellated lizard (Timon lepidus), at 32.9%. In total, at Biscay and Murcia, reptiles accounted for 30.4% and 35.9% of the prey items, respectively. Findings were similar in a separate study from northeastern Spain, where reptiles amounted to 35.9% of prey. In Bari, Italy, reptiles were the main prey, making up almost exactly half of the biomass, led by the large green whip snake (Hierophis viridiflavus), which can reach a maximum size of 1,360 g (3.00 lb), at 24.2% of food mass. In Stavropol Krai, Russia, the 20 g (0.71 oz) sand lizard (Lacerta agilis) was the main prey at 23.7% of 55 prey items. The 16 g (0.56 oz) slowworm (Anguis fragilis), a legless lizard, became the most numerous prey for the buzzards of southern Norway in low vole years, amounting to 21.3% of 244 prey items in 1993, and was also common even in the peak vole year of 1994 (19% of 332 prey items). More or less any snake in Europe is potential prey, and the buzzard has been known to be uncharacteristically bold in going after and overpowering large snakes such as rat snakes, ranging up to nearly 1.5 m (4 ft 11 in) in length, and healthy, large vipers, despite the danger of being struck by such prey. However, in at least one case, the corpse of a female buzzard was found envenomed over the body of an adder that it had killed. In some parts of the range, the common buzzard acquires the habit of taking many frogs and toads. This was the case in the Mogilev Region of Belarus, where the 23 g (0.81 oz) moor frog (Rana arvalis) was the major prey (28.5%) over several years, followed by other frogs and toads, which together amounted to 39.4% of the diet over the years. In central Scotland, the 46 g (1.6 oz) common toad (Bufo bufo) was the most numerous prey species, accounting for 21.7% of 263 prey items, while the common frog (Rana temporaria) made up a further 14.7% of the diet. Frogs made up about 10% of the diet in central Poland as well.
When common buzzards feed on invertebrates, these are chiefly earthworms, beetles and caterpillars in Europe, and they largely seem to be taken by juvenile buzzards with less refined hunting skills, or in areas with mild winters and ample swarming or social insects. In most dietary studies, invertebrates are at best a minor supplemental contributor to the buzzard's diet. Nonetheless, roughly a dozen beetle species have been found in the foods of buzzards from Ukraine alone. In winter in northeastern Spain, it was found that the buzzards switched largely from the vertebrate prey typically taken during spring and summer to a largely insect-based diet. Most of this prey was unidentified, but the most frequently identified were European mantis (Mantis religiosa) and European mole cricket (Gryllotalpa gryllotalpa). In Ukraine, 30.8% of the food by number was found to be insects. Especially in winter quarters such as southern Africa, common buzzards are often attracted to swarming locusts and other orthopterans. In this way the steppe buzzard may mirror a similar long-distance migrant from the Americas, the Swainson's hawk, which feeds its young largely on nutritious vertebrates but switches to a largely insect-based diet once it reaches its distant wintering grounds in South America. In Eritrea, 18 returning migrant steppe buzzards were seen to feed together on swarms of grasshoppers. For wintering steppe buzzards in Zimbabwe, one source went so far as to refer to them as primarily insectivorous, apparently being somewhat locally specialized in feeding on termites. Stomach contents in buzzards from Malawi apparently consisted largely of grasshoppers (alternating with lizards). Fish tend to be the rarest class of prey found in the common buzzard's foods. There are a couple of cases of predation on fish detected in the Netherlands, while elsewhere buzzards have been known to feed upon eels and carp.
Common buzzards co-occur with dozens of other raptorial birds through their breeding, resident and wintering grounds. There may be many other birds that broadly overlap with them in prey selection to some extent. Furthermore, the interfaces of forest and field that they prefer are used heavily by many other birds of prey. Some of the most similar species by diet are the common kestrel (Falco tinnunculus), hen harrier (Circus cyaneus) and lesser spotted eagle (Clanga pomarina), not to mention nearly every European species of owl, as all but two may locally prefer rodents such as voles in their diets. Diet overlap was found to be extensive between buzzards and red foxes (Vulpes vulpes) in Poland, with 61.9% of prey selection overlapping by species, although the dietary breadth of the fox was broader and more opportunistic. Both fox dens and buzzard roosts were found to be significantly closer to high vole areas relative to the overall environment here. The only other widely found European Buteo, the rough-legged buzzard, comes to winter extensively alongside common buzzards. In southern Sweden, it was found that the two species' habitat, hunting and prey selection often overlapped considerably. Rough-legged buzzards appear to prefer slightly more open habitat and took slightly fewer wood mice than common buzzards. Roughlegs also hover much more frequently and are more given to hunting in high winds. The two buzzards are aggressive towards one another and excluded each other from winter feeding territories in similar ways to how they exclude conspecifics. In northern Germany, the buffer provided by their differing habitat preferences apparently accounted for the two buzzard species' lack of effect on each other's occupancy. Despite a broad range of overlap, very little is known about the ecology of common and long-legged buzzards where they co-exist. However, from the long-legged species' preference for differing prey, such as blind mole-rats, ground squirrels, hamsters and gerbils, rather than the voles usually preferred by the common species, it can be inferred that serious competition for food is unlikely.
A more direct negative effect has been found in the buzzard's co-existence with the northern goshawk (Accipiter gentilis). Despite the considerable discrepancy in the two species' dietary habits, habitat selection in Europe is largely similar between buzzards and goshawks. Goshawks are slightly larger than buzzards and are more powerful, agile and generally more aggressive birds, and so they are considered dominant. In studies from Germany and Sweden, buzzards were found to be less disturbance-sensitive than goshawks but were probably displaced into inferior nesting spots by the dominant goshawks. The exposure of buzzards to a dummy goshawk was found to decrease breeding success, whereas there was no effect on breeding goshawks when they were exposed to a dummy buzzard. In many cases in Germany and Sweden, goshawks displaced buzzards from their nests to take them over for themselves. In Poland, buzzard productivity was correlated with prey population variations, particularly of voles, which could vary from 10 to 80 per hectare, whereas goshawks were seemingly unaffected by prey variations; buzzards were found here to number 1.73 pairs per 10 km² (3.9 sq mi) against the goshawk's 1.63 pairs per 10 km² (3.9 sq mi). In contrast, the slightly larger counterpart of the buzzard in North America, the red-tailed hawk (which is also slightly larger than American goshawks, the latter averaging smaller than European ones), is more similar in diet to goshawks there. Redtails are not invariably dominated by goshawks and are frequently able to outcompete them by virtue of greater dietary and habitat flexibility. Furthermore, red-tailed hawks are apparently equally capable of killing goshawks as goshawks are of killing them (killings are more one-sided in buzzard-goshawk interactions, in favour of the latter). Other raptorial birds, including many of similar or mildly larger size than common buzzards themselves, may dominate or displace the buzzard, especially with aims to take over their nests. Species such as the black kite (Milvus migrans), booted eagle (Hieraaetus pennatus) and the lesser spotted eagle have been known to displace actively nesting buzzards, although in some cases the buzzards may attempt to defend themselves. The broad range of accipitrids that take over buzzard nests is somewhat unusual. More typically, common buzzards are victims of nest parasitism by owls and falcons, as neither of these other kinds of raptorial birds builds its own nests; however, these may regularly take up occupancy in already abandoned or alternate nests rather than ones the buzzards are actively using. Even birds not traditionally considered raptorial, such as common ravens, may compete with buzzards for nesting sites. In urban vicinities of southwestern England, peregrine falcons (Falco peregrinus) were found to harass buzzards so persistently that the attacks in many cases resulted in injury or death for the buzzards; the attacks tended to peak during the falcons' breeding season and to be focused on subadult buzzards. Despite often being dominated in nesting-site confrontations by even similarly sized raptors, buzzards appear to be bolder in direct competition over food with other raptors outside of the context of breeding; they have even been known to displace larger birds of prey such as red kites (Milvus milvus), and female buzzards may also dominate male goshawks (which are much smaller than female goshawks) at disputed kills.
Common buzzards are occasionally threatened by predation by other raptorial birds. Northern goshawks have been known to prey upon buzzards in a few cases. Much larger raptors are known to have killed a few buzzards as well, including steppe eagles (Aquila nipalensis) preying on migrating steppe buzzards in Israel. Further instances of predation on buzzards have involved golden, eastern imperial (Aquila heliaca), Bonelli's (Aquila fasciata) and white-tailed eagles (Haliaeetus albicilla) in Europe. Besides preying on adult buzzards, white-tailed eagles have been known to raise buzzards with their own young. These are most likely cases of eagles carrying off young buzzard nestlings with the intention of predation but, for unclear reasons, not killing them. Instead the mother eagle comes to brood the young buzzard. Despite the difference in the two species' diets, white-tailed eagles are surprisingly successful at raising young buzzards (which are conspicuously much smaller than their own nestlings) to fledging. Studies in Lithuania of white-tailed eagle diets found that predation on common buzzards was more frequent than anticipated, with 36 buzzard remains found in 11 years of study of the summer diet of the white-tailed eagles. While nestling buzzards were several times more vulnerable to predation than adult buzzards in the Lithuanian data, the region's buzzards expended considerable time and energy during the late nesting period trying to protect their nests. The most serious predator of common buzzards, however, is almost certainly the Eurasian eagle-owl (Bubo bubo). This is a very large owl with a mean body mass about three to four times greater than that of a buzzard. The eagle-owl, despite often taking small mammals that broadly overlap with those selected by buzzards, is considered a "super-predator" that is a major threat to nearly all co-existing raptorial birds, capably destroying whole broods of other raptorial birds and dispatching adult raptors even as large as eagles. Due to their large numbers in edge habitats, common buzzards frequently feature heavily in the eagle-owl's diet. Eagle-owls, as do some other large owls, also readily expropriate the nests of buzzards. In the Czech Republic and in Luxembourg, the buzzard was the third and fifth most frequent prey species for eagle-owls, respectively. The reintroduction of eagle-owls to sections of Germany has been found to have a slight deleterious effect on the local occupancy of common buzzards. The only sparing factors are the temporal difference (the buzzard nesting later in the year than the eagle-owl) and the fact that buzzards may locally be able to avoid nesting near an active eagle-owl family. While the ecology of the wintering population is relatively little studied, a similar very large owl at the top of the avian food chain, the Verreaux's eagle-owl (Bubo lacteus), is the only known predator of wintering steppe buzzards in southern Africa. Despite not being known predators of buzzards, other large, vole-eating owls, such as great grey owls (Strix nebulosa) and Ural owls (Strix uralensis), are known to displace or to be avoided by nesting buzzards. Unlike with large birds of prey, next to nothing is known of mammalian predators of common buzzards, despite the likelihood that a number of nestlings and fledglings are depredated by mammals.
Common buzzards themselves rarely present a threat to other raptorial birds but may occasionally kill some of those of smaller size. The buzzard is a known predator of the 237 g (8.4 oz) Eurasian sparrowhawk (Accipiter nisus), the 184 g (6.5 oz) common kestrel and the 152 g (5.4 oz) lesser kestrel (Falco naumanni). Perhaps surprisingly, given the nocturnal habits of this prey, the group of raptorial birds the buzzard is known to hunt most extensively is owls. Known owl prey has included 419 g (14.8 oz) barn owls (Tyto alba), 92 g (3.2 oz) European scops owls (Otus scops), 475 g (16.8 oz) tawny owls (Strix aluco), 169 g (6.0 oz) little owls (Athene noctua), 138 g (4.9 oz) boreal owls (Aegolius funereus), 286 g (10.1 oz) long-eared owls (Asio otus) and 355 g (12.5 oz) short-eared owls (Asio flammeus). Despite their relatively large size, tawny owls are known to avoid buzzards, as there are several records of buzzards preying upon the owls.
Home ranges of common buzzards are generally 0.5 to 2 km² (0.19 to 0.77 sq mi). The size of the breeding territory seems to be generally correlated with food supply. In a German study, the range was 0.8 to 1.8 km² (0.31 to 0.69 sq mi) with an average of 1.26 km² (0.49 sq mi). Some of the lowest pair densities of common buzzards seem to come from Russia. For instance, in Kerzhenets Nature Reserve, the recorded density was 0.6 pairs per 100 km² (39 sq mi) and the average distance between nearest neighbours was 3.8 km (2.4 mi). The Snowdonia region of northern Wales held a pair per 9.7 km² (3.7 sq mi) with a mean nearest-neighbour distance of 1.95 km (1.21 mi); in adjacent Migneint, there was a pair per 7.2 km² (2.8 sq mi), with a mean distance of 1.53 km (0.95 mi). In the Teno massif of the Canary Islands, the average density was estimated at 23 pairs per 100 km² (39 sq mi), similar to that of a middling continental population. On another set of islands, on Crete, the density of pairs was lower at 5.7 pairs per 100 km² (39 sq mi); here buzzards tend to have an irregular distribution, some occurring in lower-intensity harvest olive groves, with their occurrence actually more common in agricultural than in natural areas. In the Italian Alps, it was recorded in 1993–96 that there were from 28 to 30 pairs per 100 km² (39 sq mi). In central Italy, the average density was lower, at 19.74 pairs per 100 km² (39 sq mi). Higher-density areas than those above are known. Two areas of the Midlands of England showed occupancies of 81 and 22 territorial pairs per 100 km² (39 sq mi). High buzzard densities there were associated with high proportions of unimproved pasture and mature woodland within the estimated territories. Similarly high densities of common buzzards were estimated in central Slovakia using two different methods, here indicating densities of 96 to 129 pairs per 100 km² (39 sq mi). Despite claims that the English Midlands study showed the highest known territory density for the species, a count ranging from 32 to 51 pairs in a wooded area of merely 22 km² (8.5 sq mi) in the Czech Republic seems to surely exceed even those densities. The Czech study hypothesized that forest fragmentation resulting from human management of lands for wild sheep and deer, which created exceptional concentrations of prey such as voles, together with the lack of appropriate habitat in surrounding regions, accounted for the exceptionally high density.
In the Neeruti landscape reserve of northern Estonia (area 1,250 ha), Marek Vahula found 9 occupied nests in 1989 and 1990. One nest, found in 1982, is apparently the oldest known nest that is still occupied today.
Common buzzards maintain their territories through flight displays. In Europe, territorial behaviour generally starts in February. However, displays are not uncommon throughout the year in resident pairs, especially by males, and can elicit similar displays by neighbours. In these displays, common buzzards generally engage in high circling, spiraling upward on slightly raised wings. Mutual high circling by pairs sometimes goes on at length, especially during the period prior to or during the breeding season. In mutual displays, a pair may follow each other at 10–50 m (33–164 ft) in level flight. During the mutual displays, the male may engage in exaggerated deep flapping or zig-zag tumbling, apparently in response to the female being too distant. Two or three pairs may circle together at times, and as many as 14 individual adults have been recorded over established display sites. Sky-dancing by common buzzards has been recorded in spring and autumn, typically by the male but sometimes by the female, nearly always with much calling. Their sky-dances are of the rollercoaster type, with an upward sweep until they start to stall, sometimes embellished with loops or rolls at the top. Next in the sky-dance, they dive on more or less closed wings before spreading them and shooting up again, with upward sweeps of up to 30 m (98 ft) and dive drops of up to at least 60 m (200 ft). These dances may be repeated in series of 10 to 20. In the climax of the sky-dance, the undulations become progressively shallower, often slowing and terminating directly onto a perch. Various other aerial displays include low contour flight or weaving among trees, frequently with deep beats and exaggerated upstrokes which show the underwing pattern to rivals perched below. Talon grappling and, occasionally, cartwheeling downward with feet interlocked have been recorded in buzzards and, as in many raptors, are likely the physical culmination of the aggressive territorial display, especially between males. Despite the highly territorial nature of buzzards and their devotion to a single mate and breeding ground each summer, there is one case of a polyandrous trio of buzzards nesting in the Canary Islands.
Common buzzards tend to build a bulky nest of sticks, twigs and often heather. Commonly, nests are 1 to 1.2 m (3 ft 3 in to 3 ft 11 in) across and up to 60 cm (24 in) deep. With reuse over the years, the diameter can reach or exceed 1.5 m (4 ft 11 in) and the weight of nests can reach over 200 kg (440 lb). Active nests tend to be lined with greenery; most often this consists of broad-leafed foliage but locally can also include rush or seaweed. Nest height in trees is commonly 3 to 25 m (9.8 to 82.0 ft), usually by the main trunk or a main fork of the tree. In Germany, trees used for nesting consisted mostly of European beeches (Fagus sylvatica) (in 337 cases), whereas a further 84 nests were in assorted oaks. Buzzards were recorded to nest almost exclusively in pines in Spain, at a mean height of 14.5 m (48 ft). Trees are generally used as nesting locations, but buzzards will also utilize crags or bluffs if trees are unavailable. Buzzards in one English study were surprisingly partial to nesting on well-vegetated banks and, due to the rich surrounding habitat and prey population, such nests were actually more productive than those located elsewhere. Furthermore, a few ground nests were recorded in high prey-level agricultural areas in the Netherlands. In the Italian Alps, 81% of 108 nests were on cliffs. The common buzzard generally lacks the propensity of its Nearctic counterpart, the red-tailed hawk, to occasionally nest on or near manmade structures (often in heavily urbanized areas), but in Spain some pairs were recorded nesting along the perimeter of abandoned buildings. Pairs often have several nests, though some may use one over several consecutive years. Two to four alternate nests in a territory are typical for common buzzards, especially those breeding further north in their range.
The breeding season commences at differing times based on latitude. Common buzzard breeding seasons may begin as early as January to April, but typically the breeding season is March to July in much of the Palearctic. In the northern stretches of the range, the breeding season may last into May–August. Mating usually occurs on or near the nest and lasts about 15 seconds, typically occurring several times a day. Eggs are usually laid at 2- to 3-day intervals. The clutch size can range from 2 to 6, a relatively large clutch for an accipitrid. More northerly and westerly buzzards usually bear larger clutches, which average nearer 3, than those further east and south. In Spain, the average clutch size is about 2 to 2.3. Across 4 locations in different parts of Europe, 43% of clutches numbered 2 eggs, 41% numbered 3, and clutches of 1 and 4 each constituted about 8%. Laying dates are remarkably constant throughout Great Britain. There are, however, highly significant differences in clutch size between British study areas. These do not follow any latitudinal gradient and it is likely that local factors such as habitat and prey availability are more important determinants of clutch size. The eggs are white in ground colour, rather round in shape, with sporadic red to brown markings sometimes lightly showing. In the nominate race, egg size is 49.8–63.8 mm (1.96–2.51 in) in height by 39.1–48.2 mm (1.54–1.90 in) in diameter, with an average of 55 mm × 44 mm (2.2 in × 1.7 in) in 600 eggs. In the race vulpinus, egg height is 48–63 mm (1.9–2.5 in) by 39.2–47.5 mm (1.54–1.87 in), with an average of 54.2 mm × 42.8 mm (2.13 in × 1.69 in) in 303 eggs. Eggs are generally laid in late March to early April in the extreme south, sometime in April in most of Europe, and into May and possibly even early June in the extreme north. If eggs are lost to a predator (including humans) or fail in some other way, common buzzards do not usually lay replacement clutches, but these have been recorded, with even 3 clutch attempts by a single female. The female does most but not all of the incubating, doing so for a total of 33–35 days. The female remains at the nest brooding the young in the early stages, with the male bringing all prey. At about 8–12 days, both the male and female will bring prey, but the female continues to do all feeding until the young can tear up their own prey.
Once hatching commences, it may take 48 hours for the chick to chip out. Hatching may take place over 3–7 days, with new hatchlings averaging about 45 g (1.6 oz) in body mass. Often the youngest nestling dies from starvation, especially in broods of three or more. In nestlings, the first down is replaced by longer, coarser down at about 7 days of age, with the first proper feathers appearing at 12 to 15 days. The young are nearly fully feathered rather than downy at about a month of age and can start to feed themselves as well. The first attempts to leave the nest are often at about 40–50 days, usually averaging 40–45 days in nominate buzzards in Europe, but earlier on average, at 40–42 days, in vulpinus. Fledging occurs typically at 43–54 days, but in extreme cases as late as 62 days. Sexual dimorphism is apparent in European fledglings, as females often weigh about 1,000 g (2.2 lb) against 780 g (1.72 lb) in males. After leaving the nest, buzzards generally stay close by, but in migratory populations there is a more definitive, generally southbound movement. Full independence is generally sought 6 to 8 weeks after fledging. First-year birds generally remain in the wintering area for the following summer, then return near their area of origin, but then migrate south again without breeding. Radio-tracking suggests that most dispersal, even relatively early dispersals, by juvenile buzzards is undertaken independently rather than via exile by parents, as has been recorded in some other birds of prey. In common buzzards, generally speaking, siblings stay quite close to each other after dispersal from their parents and form something of a social group, although parents usually tolerate their presence on their territory until they are laying another clutch. The social group of siblings disbands at about a year of age. Juvenile buzzards are subordinate to adults during most encounters and tend to avoid direct confrontations and actively defended territories until they are of appropriate age (usually at least 2 years old). This was the case as well for steppe buzzard juveniles wintering in southern Africa, although in some cases juveniles there were able to successfully steal prey from adults.
Numerous factors may weigh into the breeding success of common buzzards. Chief among these are prey populations, habitat, disturbance and persecution levels, and competition within and between species. In Germany, intra- and interspecific competition, plumage morph, laying date, precipitation levels and anthropogenic disturbances in the breeding territory, in declining order, were deemed to be the most significant determinants of breeding success. In an accompanying study, it was found that a mere 17% of adult birds of both sexes present in a German study area produced 50% of offspring, so breeding success may be lower than perceived, and many adult buzzards, for unknown reasons, may not attempt to breed at all. High breeding success was detected in Argyll, Scotland, likely due to robust prey populations (rabbits) but probably also a lower local rate of persecution than elsewhere in the British Isles. Here, the mean number of fledglings was 1.75 against 0.82–1.41 in other parts of Britain. It was found in the English Midlands that breeding success, both by measure of clutch size and mean number of fledglings, was relatively high, thanks again to high prey populations. Breeding success was lower farther from significant stands of trees in the Midlands, and most nesting failures that could be determined occurred in the incubation stage, possibly in correlation with predation of eggs by corvids. More significant than even prey, late winter to early spring weather was found to be likely the primary driver of breeding success in buzzards from southern Norway. Here, even in peak vole years, nesting success could be considerably hampered by heavy snow at this crucial stage. In Norway, large clutches of 3 or more were expected only in years with minimal snow cover, high vole populations and lighter rains in May–June. In the Italian Alps, the mean number of fledglings per pair was 1.07. Per a study in southwestern Germany, 33.4% of nesting attempts were failures, with an average of 1.06 fledglings across all nesting attempts and 1.61 across successful attempts. In Germany, weather conditions and rodent populations seemed to be the primary drivers of nesting success. In the Murcia part of Spain, contrasted with Biscay to the north, higher levels of interspecific competition from booted eagles and northern goshawks did not appear to negatively affect breeding success, due to more ample prey populations (rabbits again) in Murcia than in Biscay.
In the Westphalia area of Germany, it was found that intermediate colour morphs were more productive than those that were darker or lighter. For reasons that are not entirely clear, fewer parasites were found to afflict the broods of intermediate-plumaged buzzards than those of dark and light phenotypes; in particular, higher melanin levels somehow proved more inviting to the parasitic organisms that affect the health of the buzzard's offspring. The composition of habitat and its relation to human disturbance were important variables for the dark and light phenotypes but were less important to intermediate individuals. Thus selection pressures resulting from different factors did not vary much between sexes but varied between the three phenotypes in the population. Breeding success in areas with wild European rabbits was considerably affected by rabbit myxomatosis and rabbit haemorrhagic disease, both of which have heavily depleted wild rabbit populations. Breeding success in formerly rabbit-rich areas was recorded to decrease from as much as 2.6 to as little as 0.9 young per pair. Among several radio-tagged buzzards, only a single male bred as early as his 2nd summer (at about a year of age). Significantly more buzzards were found to start breeding in their 3rd summer, but breeding attempts can be individually erratic given the availability of habitat, food and mates. The mean life expectancy was estimated at 6.3 years in the late 1950s, but this was at a time of high persecution, when humans were causing 50–80% of buzzard deaths. In a more modern context with regionally reduced persecution rates, the expected lifespan can be higher (possibly in excess of 10 years at times) but is still widely variable due to a wide variety of factors.
The common buzzard is one of the most numerous birds of prey in its range. Almost certainly, it is the most numerous diurnal bird of prey throughout Europe. Conservative estimates put the total population at no fewer than 700,000 pairs in Europe, which is more than twice the total estimates for the next three birds of prey estimated as most common: the Eurasian sparrowhawk (more than 340,000 pairs), the common kestrel (more than 330,000 pairs) and the northern goshawk (more than 160,000 pairs). Ferguson-Lees et al. roughly estimated the total population of the common buzzard at nearly 5 million pairs, but this at the time included the now split-off eastern and Himalayan buzzards. These numbers may be excessive, but the total population of common buzzards is certain to run well into seven figures. More recently, the IUCN estimated the common buzzard (sans the now-separate Himalayan and eastern buzzards) to number somewhere between 2.1 and 3.7 million birds, which would make this buzzard one of the most numerous of all accipitrid family members (estimates for Eurasian sparrowhawks, red-tailed hawks and northern goshawks also may range over 2 million). Buzzards are absent from Iceland. In Ireland, where they had gone extinct as breeders by 1910, buzzards recolonized sometime in the 1950s and had increased to 26 pairs by 1991. Supplemental feeding has reportedly helped the Irish buzzard population to rebound, especially where rabbits have decreased. Most other countries have at least four figures of breeding pairs. As of the 1990s, countries such as Great Britain, France, Switzerland, the Czech Republic, Poland, Sweden, Belarus and Ukraine all numbered pairs well into five figures, while Germany had an estimated 140,000 pairs and European Russia may have held 500,000 pairs. Between 44,000 and 61,000 pairs nested in Great Britain by 2001, with numbers gradually increasing after past persecution, habitat alteration and prey reductions, making it by far the most abundant diurnal raptor there. In Westphalia, Germany, the population of buzzards was shown to have nearly tripled over the last few decades. The Westphalian buzzards are possibly benefiting from an increasingly warmer mean climate, which in turn is increasing the vulnerability of voles. However, the rate of increase was significantly greater in males than in females, in part because Eurasian eagle-owls reintroduced to the region prey on nests (including the brooding mother), which may in turn put undue pressure on the local buzzard population.
At least 238 common buzzards killed through persecution were recovered in England from 1975 to 1989, largely through poisoning. Persecution did not significantly differ at any time during this span of years, nor did persecution rates decrease relative to those of the last such survey, in 1981. While some persecution persists in England, it is probably slightly less common today. The buzzard was found to be the raptor most vulnerable to power-line collision fatalities in Spain, probably as it is one of the most common largish birds, and together with the common raven, it accounted for nearly a third of recorded electrocutions. Given its relative abundance, the common buzzard is held as an ideal bioindicator, as buzzards are affected by a range of pesticide and metal contamination through pollution like other raptors, but are largely resilient to these at the population level. In turn, this allows biologists to study (and harvest if needed) the buzzards and their environments intensively without affecting the overall population. The lack of effect may be due to the buzzard's adaptability, as well as its relatively short, terrestrially based food chain, which exposes it to less risk of contamination and population depletion than raptors that prey more heavily on water-based prey (such as some large eagles) or on other birds (such as falcons). Common buzzards are seldom as vulnerable to egg-shell thinning from DDT as other raptors, but egg-shell thinning has been recorded. Other factors that negatively affect raptors and have been studied in common buzzards include helminths, avipoxvirus and assorted other viruses. | [
{
"paragraph_id": 0,
"text": "The common buzzard (Buteo buteo) is a medium-to-large bird of prey which has a large range. It is a member of the genus Buteo in the family Accipitridae. The species lives in most of Europe and extends its breeding range across much of the Palearctic as far as northwestern China (Tian Shan), far western Siberia and northwestern Mongolia. Over much of its range, it is a year-round resident. However, buzzards from the colder parts of the Northern Hemisphere as well as those that breed in the eastern part of their range typically migrate south for the northern winter, many journeying as far as South Africa.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The common buzzard is an opportunistic predator that can take a wide variety of prey, but it feeds mostly on small mammals, especially rodents such as voles. It typically hunts from a perch. Like most accipitrid birds of prey, it builds a nest, typically in trees in this species, and is a devoted parent to a relatively small brood of young. The common buzzard appears to be the most common diurnal raptor in Europe, as estimates of its total global population run well into the millions.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The first formal description of the common buzzard was by the Swedish naturalist Carl Linnaeus in 1758 in the tenth edition of his Systema Naturae under the binomial name Falco buteo. The genus Buteo was introduced by the French naturalist Bernard Germain de Lacépède in 1799 by tautonymy with the specific name of this species. The word buteo is Latin for a buzzard. It should not be confused with the Turkey vulture, which is sometimes called a buzzard in American English.",
"title": "Taxonomy"
},
{
"paragraph_id": 3,
"text": "The Buteoninae subfamily originated from and is most diversified in the Americas, with occasional broader radiations that led to common buzzards and other Eurasian and African buzzards. The common buzzard is a member of the genus Buteo, a group of medium-sized raptors with robust bodies and broad wings. The Buteo species of Eurasia and Africa are usually commonly referred to as \"buzzards\" while those in the Americas are called hawks. Under current classification, the genus includes approximately 28 species, the second most diverse of all extant accipitrid genera behind only Accipiter. DNA testing shows that the common buzzard is fairly closely related to the red-tailed hawk (Buteo jamaicensis) of North America, which occupies a similar ecological niche to the buzzard in that continent. The two species may belong to the same species complex. Three buzzards in Africa are likely closely related to the common buzzard based on genetic materials, the Mountain buzzard (Buteo oreophilus), Forest buzzards (Buteo trizonatus) and the Madagascar buzzard (Buteo brachypterus), to the point where it has been questioned whether they are sufficiently distinct to qualify as full species. However, the distinctiveness of these African buzzards has generally been supported. Genetic studies have further indicated that the modern buzzards of Eurasia and Africa are a relatively young group, showing that they diverged at about 300,000 years ago. Nonetheless, fossils dating earlier than 5 million year old (the late Miocene period) showed Buteo species were present in Europe much earlier than that would imply, although it cannot be stated to a certainty that these would’ve been related to the extant buzzards.",
"title": "Taxonomy"
},
{
"paragraph_id": 4,
"text": "Some 16 subspecies have been described in the past and up to 11 are often considered valid, although some authorities accept as few as seven. Common buzzard subspecies fall into two groups.",
"title": "Taxonomy"
},
{
"paragraph_id": 5,
"text": "The western buteo group is mainly resident or short-distance migrants and includes:",
"title": "Taxonomy"
},
{
"paragraph_id": 6,
"text": "The eastern vulpinus group includes:",
"title": "Taxonomy"
},
{
"paragraph_id": 7,
"text": "At one time, races of the common buzzard were thought to range as far in Asia as a breeding bird well into the Himalayas and as far east as northeastern China, Russia to the Sea of Okhotsk, and all the islands of the Kurile Islands and of Japan, despite both the Himalayan and eastern birds showing a natural gap in distribution from the next nearest breeding common buzzard. However, DNA testing has revealed that the buzzards of these populations probably belong to different species. Most authorities now accept these buzzards as full species: the eastern buzzard (Buteo japonicus; with three subspecies of its own) and the Himalayan buzzard (Buteo refectus). Buzzards found on the islands of Cape Verde off of the coast of western Africa, once referred to as the subspecies B. b. bannermani, and Socotra Island off of the northern peninsula of Arabia, once referred to as the rarely recognized subspecies B. b. socotrae, are now generally thought not to belong to the common buzzard. DNA testing has indicated that these insular buzzards are actually more closely related to the long-legged buzzard (Buteo rufinus) than to the common buzzard. Subsequently, some researchers have advocated full species status for the Cape Verde population, but the placement of these buzzards is generally deemed unclear.",
"title": "Taxonomy"
},
{
"paragraph_id": 8,
"text": "The common buzzard is a medium to large sized raptor that is highly variable in plumage. Most buzzards are distinctly round headed with a somewhat slender bill, relatively long wings that either reach or fall slightly short of the tail tip when perched, a fairly short tail, and somewhat short and mainly bare tarsi. They can appear fairly compact in overall appearance but may also appear large relative to other more common raptorial birds such as kestrels and sparrowhawks. The common buzzard measures between 40 and 58 cm (16 and 23 in) in length with a 109–140 cm (43–55 in) wingspan. Females average about 2–7% larger than males linearly and weigh about 15% more. Body mass can show considerable variation. Buzzards from Great Britain alone can vary from 427 to 1,183 g (0.941 to 2.608 lb) in males, while females there can range from 486 to 1,370 g (1.071 to 3.020 lb).",
"title": "Description"
},
{
"paragraph_id": 9,
"text": "In Europe, most typical buzzards are dark brown above and on the upperside of the head and mantle, but can become paler and warmer brown with worn plumage. The flight feathers on perched European buzzards are always brown in the nominate subspecies (B. b. buteo). Usually the tail will usually be narrowly barred grey-brown and dark brown with a pale tip and a broad dark subterminal band but the tail in palest birds can show a varying amount a white and reduced subterminal band or even appear almost all white. In European buzzards, the underside coloring can be variable but most typically show a brown-streaked white throat with a somewhat darker chest. A pale U across breast is often present; followed by a pale line running down the belly which separates the dark areas on breast-side and flanks. These pale areas tend to have highly variable markings that tend to form irregular bars. Juvenile buzzards are quite similar to adult in the nominate race, being best told apart by having a paler eye, a narrower subterminal band on the tail and underside markings that appear as streaks rather than bars. Furthermore, juveniles may show variable creamy to rufous fringes to upperwing coverts but these also may not be present. Seen from below in flight, buzzards in Europe typically have a dark trailing edge to the wings. If seen from above, one of the best marks is their broad dark subterminal tail band. Flight feathers of typical European buzzards are largely greyish, the aforementioned dark wing linings at front with contrasting paler band along the median coverts. In flight, paler individuals tend to show dark carpal patches that can appears as blackish arches or commas but these may be indistinct in darker individuals or can appear light brownish or faded in paler individuals. Juvenile nominate buzzards are best told apart from adults in flight by the lack of a distinct subterminal band (instead showing fairly even barring throughout) and below by having less sharp and brownish rather than blackish trailing wing edge. Juvenile buzzards show streaking paler parts of under wing and body showing rather than barring as do adults. Beyond the typical mid-range brownish buzzard, birds in Europe can range from almost uniform black-brown above to mainly white. Extreme dark individuals may range from chocolate brown to blackish with almost no pale showing but a variable, faded U on the breast and with or without faint lighter brown throat streaks. Extreme pale birds are largely whitish with variable widely spaced streaks or arrowheads of light brown about the mid-chest and flanks and may or may not show dark feather-centres on the head, wing-coverts and sometimes all but part of mantle. Individuals can show nearly endless variation of colours and hues in between these extremes and the common buzzard is counted among the most variably plumage diurnal raptors for this reason. One study showed that this variation may actually be the result of diminished single-locus genetic diversity.",
"title": "Description"
},
{
"paragraph_id": 10,
"text": "Beyond the nominate form (B. b. buteo) that occupies most of the common buzzard's European range, a second main, widely distributed subspecies is known as the steppe buzzard (B. b. vulpinus). The steppe buzzard race shows three main colour morphs, each of which can be predominant in a region of breeding range. It is more distinctly polymorphic rather than just individually very variable like the nominate race. This may be because, unlike the nominate buzzard, the steppe buzzard is highly migratory. Polymorphism has been linked with migratory behaviour. The most common type of steppe buzzard is the rufous morph which gives this subspecies its scientific name (vulpes is Latin for \"fox\"). This morph comprises a majority of birds seen in passage east of the Mediterranean. Rufous morph buzzards are a paler grey-brown above than most nominate B. b. buteo. Compared to the nominate race, rufous vulpinus show a patterning not dissimilar but generally far more rufous-toned on head, the fringes to mantle wing coverts and, especially, on the tail and the underside. The head is grey-brown with rufous tinges usually while the tail is rufous and can vary from almost unmarked to thinly dark-barred with a subterminal band. The underside can be uniformly pale to dark rufous, barred heavily or lightly with rufous or with dusky barring, usually with darker individuals showing the U as in nominate but with a rufous hue. The pale morph of the steppe buzzard is commonest in the west of its subspecies range, predominantly seen in winter and migration at the various land bridge of the Mediterranean. As in the rufous morph, the pale morph vulpinus is grey-brown above but the tail is generally marked with thin dark bars and a subterminal band, only showing rufous near the tip. The underside in the pale morph is greyish-white with dark grey-brown or somewhat streaked head to chest and barred belly and chest, occasionally showing darker flanks that can be somewhat rufous. Dark morph vulpinus tend to be found in the east and southeast of the subspecies range and are easily outnumbered by rufous morph while largely using similar migration points. Dark morph individuals vary from grey-brown to much darker blackish-brown, and have a tail that is dark grey or somewhat mixed grey and rufous, is distinctly marked with dark barring and has a broad, black subterminal band. Dark morph vulpinus have a head and underside that is mostly uniform dark, from dark brown to blackish-brown to almost pure black. Rufous morph juveniles are often distinctly paler in ground colour (ranging even to creamy-grey) than adults with distinct barring below actually increased in pale morph type juvenile. Pale and rufous morph juveniles can only be distinguished from each other in extreme cases. Dark morph juveniles are more similar to adult dark morph vulpinus but often show a little whitish streaking below, and like all other races have lighter coloured eyes and more evenly barred tails than adults. Steppe buzzards tend to appear smaller and more agile in flight than nominate whose wing beats can look slower and clumsier. In flight, rufous morph vulpinus have their whole body and underwing varying from uniform to patterned rufous (if patterning present, it is variable, but can be on chest and often thighs, sometimes flanks, pale band across median coverts), while the under-tail usually paler rufous than above. 
Whitish flight feathers are more prominent than in nominate and more marked contrast with the bold dark brown band along the trailing edges. Markings of pale vulpinus as seen in flight are similar to rufous morph (such as paler wing markings) but more greyish both on wings and body. In dark morph vulpinus the broad black trailing edges and colour of body make whitish areas of inner wing stand out further with an often bolder and blacker carpal patch than in other morphs. As in nominate, juvenile vulpinus (rufous/pale) tend to have much less distinct trailing edges, general streaking on body and along median underwing coverts. Dark morph vulpinus resemble adult in flight more so than other morphs.",
"title": "Description"
},
{
"paragraph_id": 11,
"text": "The common buzzard is often confused with other raptors especially in flight or at a distance. Inexperienced and over-enthusiastic observers have even mistaken darker birds for the far larger and differently proportioned golden eagle (Aquila chrysaetos) and also dark birds for western marsh harrier (Circus aeruginosus) which also flies in a dihedral but is obviously relatively much longer and slenderer winged and tailed and with far different flying methods. Also buzzards may possibly be confused with dark or light morph booted eagles (Hieraeetus pennatus), which are similar in size, but the eagle flies on level, parallel-edged wings which usually appear broader, has a longer squarer tail, with no carpal patch in pale birds and all dark flight feathers but for whitish wedge on inner primaries in dark morph ones. Pale individuals are sometimes also mistaken with pale morph short-toed eagles (Circaetus gallicus) which are much larger with a considerably bigger head, longer wings (which are usually held evenly in flight rather than in a dihedral) and paler underwing lacking any carpal patch or dark wing lining. More serious identification concerns lie in other Buteo species and in flight with honey buzzards, which are quite different looking when seen perched at close range. The European honey buzzard (Pernis apivorus) is thought in engage in mimicry of more powerful raptors, in particular, juveniles may mimic the plumage of the more powerful common buzzard. While less individually variable in Europe, the honey buzzard is more extensive polymorphic on underparts than even the common buzzard. The most common morph of the adult European honey buzzard is heavily and rufous barred on the underside, quite different from the common buzzard, however the brownish juvenile much more resembles an intermediate common buzzard. Honey buzzards flap with distinctively slower and more even wing beats than common buzzard. The wings are also lifted higher on each upstroke, creating a more regular and mechanical effect, furthermore their wings are held slightly arched when soaring but not in a V. On the honey buzzard, the head appears smaller, the body thinner, the tail longer and the wings narrower and more parallel edged. The steppe buzzard race is particularly often mistaken for juvenile European honey buzzards, to the point where early observers of raptor migration in Israel considered distant individuals indistinguishable. However, when compared to a steppe buzzard, the honey buzzard has distinctly darker secondaries on the underwing with fewer and broader bars and more extensive black wing-tips (whole fingers) contrasting with a less extensively pale hand. Found in the same range as the steppe buzzard in some parts of southern Siberia as well as (with wintering steppes) in southwestern India, the Oriental honey buzzard (Pernis ptilorhynchus) is larger than both the European honey buzzard and the common buzzard. The oriental species is with more similar in body plan to common buzzards, being relatively broader winged, shorter tailed and more amply-headed (though the head is still relatively small) relative to the European honey buzzard, but all plumages lack carpal patches.",
"title": "Description"
},
{
"paragraph_id": 12,
"text": "In much of Europe, the common buzzard is the only type of buzzard. However, the subarctic breeding rough-legged buzzard (Buteo lagopus) comes down to occupy much of the northern part of the continent during winter in the same haunts as the common buzzard. However, the rough-legged buzzard is typically larger and distinctly longer-winged with feathered legs, as well as having a white based tail with a broad subterminal band. Rough-legged buzzards have slower wing beats and hover far more frequently than do common buzzards. The carpal patch marking on the under-wing are also bolder and blacker on all paler forms of rough-legged hawk. Many pale morph rough-legged buzzards have a bold, blackish band across the belly against contrasting paler feathers, a feature which rarely appears in individual common buzzard. Usually the face also appears somewhat whitish in most pale morphs of rough-legged buzzards, which is true of only extremely pale common buzzards. Dark morph rough-legged buzzards are usually distinctly darker (ranging to almost blackish) than even extreme dark individuals of common buzzards in Europe and still have the distinct white-based tail and broad subterminal band of other roughlegs. In eastern Europe and much of the Asian range of common buzzards, the long-legged buzzard (Buteo rufinus) may live alongside the common species. As in the steppe buzzard race, the long-legged buzzard has three main colour morphs that are more or less similar in hue. In both the steppe buzzard race and long-legged buzzard, the main colour is overall fairly rufous. More so than steppe buzzards, long-legged buzzards tend to have a distinctly paler head and neck compared to other feathers, and, more distinctly, a normally unbarred tail. Furthermore, the long-legged buzzard is usually a rather larger bird, often considered fairly eagle-like in appearance (although it does appear gracile and small-billed even compared to smaller true eagles), an effect enhanced by its longer tarsi, somewhat longer neck and relatively elongated wings. The flight style of the latter species is deeper, slower and more aquiline, with much more frequent hovering, showing a more protruding head and a slightly higher V held in a soar. The smaller North African and Arabian race of long-legged buzzard (B. r. cirtensis) is more similar in size and nearly all colour characteristics to steppe buzzard, extending to the heavily streaked juvenile plumage, in some cases such birds can be distinguished only by their proportions and flight patterns which remain unchanged. Hybridization with the latter race (B. r. cirtensis) and nominate common buzzards has been observed in the Strait of Gibraltar, a few such birds have been reported potentially in the southern Mediterranean due to mutually encroaching ranges, which are blurring possibly due to climate change.",
"title": "Description"
},
{
"paragraph_id": 13,
"text": "Wintering steppe buzzards may live alongside mountain buzzards and especially with forest buzzard while wintering in Africa. The juveniles of steppe and forest buzzards are more or less indistinguishable and only told apart by proportions and flight style, the latter species being smaller, more compact, having a smaller bill, shorter legs and shorter and thinner wings than a steppe buzzard. However, size is not diagnostic unless side by side as the two buzzards overlap in this regard. Most reliable are the species wing proportions and their flight actions. Forest buzzard have more flexible wing beats interspersed with glides, additionally soaring on flatter wings and apparently never engage in hovering. Adult forest buzzards compared to the typical adult steppe buzzard (rufous morph) are also similar, but the forest typically has a whiter underside, sometimes mostly plain white, usually with heavy blotches or drop-shaped marks on abdomen, with barring on thighs, more narrow tear-shaped on chest and more spotted on leading edges of underwing, usually lacking marking on the white U across chest (which is otherwise similar but usually broader than that of vulpinus). In comparison, the mountain buzzard, which is more similar in size to the steppe buzzard and slightly larger than the forest buzzard, is usually duller brown above than a steppe buzzard and is more whitish below with distinctive heavy brown blotches from breasts to the belly, flanks and wing linings while juvenile mountain buzzard is buffy below with smaller and streakier markings. The steppe buzzard when compared to another African species, the red-necked buzzard (Buteo auguralis), which has red tail similar to vulpinus, is distinct in all other plumage aspects despite their similar size. The latter buzzard has a streaky rufous head and is white below with a contrasting bold dark chest in adult plumage and, in juvenile plumage, has heavy, dark blotches on the chest and flanks with pale wing-linings. Jackal and augur buzzards (Buteo rufofuscus & augur), also both rufous on the tail, are larger and bulkier than steppe buzzards and have several distinctive plumage characteristics, most notably both having their own striking, contrasting patterns of black-brown, rufous and cream.",
"title": "Description"
},
{
"paragraph_id": 14,
"text": "The common buzzard is found throughout several islands in the eastern Atlantic islands, including the Canary Islands and Azores and almost throughout Europe. It is today found in Ireland and in nearly every part of Scotland, Wales and England. In mainland Europe, remarkably, there are no substantial gaps without breeding common buzzards from Portugal and Spain to Greece, Estonia, Belarus and Ukraine, though are present mainly only in the breeding season in much of the eastern half of the latter three countries. They are also present in all larger Mediterranean islands such as Corsica, Sardinia, Sicily and Crete. Further north in Scandinavia, they are found mainly in southeastern Norway (though also some points in southwestern Norway close to the coast and one section north of Trondheim), just over the southern half of Sweden and hugging over the Gulf of Bothnia to Finland where they live as a breeding species over nearly two-thirds of the land.",
"title": "Distribution and habitat"
},
{
"paragraph_id": 15,
"text": "The common buzzard reaches its northern limits as a breeder in far eastern Finland and over the border to European Russia, continuing as a breeder over to the narrowest straits of the White Sea and nearly to the Kola Peninsula. In these northern quarters, the common buzzard is present typically only in summer but is a year-around resident of a hearty bit of southern Sweden and some of southern Norway. Outside of Europe, it is a resident of northern Turkey (largely close to the Black Sea) otherwise occurring mainly as a passage migrant or winter visitor in the remainder of Turkey, Georgia, sporadically but not rarely in Azerbaijan and Armenia, northern Iran (largely hugging the Caspian Sea) to northern Turkmenistan. Further north though its absent from either side of the northern Caspian Sea, the common buzzard is found in much of western Russia (though exclusively as a breeder) including all of the Central Federal District and the Volga Federal District, all but the northernmost parts of the Northwestern and Ural Federal Districts and nearly the southern half of the Siberian Federal District, its farthest easterly occurrence as a breeder. It also found in northern Kazakhstan, Kyrgyzstan, far northwestern China (Tien Shan) and northwestern Mongolia.",
"title": "Distribution and habitat"
},
{
"paragraph_id": 16,
"text": "Non-breeding populations occur, either as migrants or wintering birds, in southwestern India, Israel, Lebanon, Syria, Egypt (northeastern), northern Tunisia (and far northwestern Algeria), northern Morocco, near the coasts of The Gambia, Senegal and far southwestern Mauritania and Ivory Coast (and bordering Burkina Faso). In eastern and central Africa, it is found in winter from southeastern Sudan, Eritrea, about two-thirds of Ethiopia, much of Kenya (though apparently absent from the northeast and northwest), Uganda, southern and eastern Democratic Republic of the Congo, and more or less the entirety of southern Africa from Angola across to Tanzania down the remainder of the continent (but for an apparent gap along the coast from southwestern Angola to northwestern South Africa).",
"title": "Distribution and habitat"
},
{
"paragraph_id": 17,
"text": "The common buzzard generally inhabits the interface of woodlands and open grounds; most typically the species lives in forest edge, small woods or shelterbelts with adjacent grassland, arables or other farmland. It acquits to open moorland as long as there is some trees for perch hunting and nesting use. The woods they inhabit may be coniferous, temperate broadleaf and mixed forests and temperate deciduous forest with occasional preferences for the local dominant tree. It is absent from treeless tundra, as well as the Subarctic where the species almost entirely gives way to the rough-legged buzzard. The common buzzard is sporadic or rare in treeless steppe but can occasionally migrate through it (despite its name, the steppe buzzard subspecies breeds primarily in the wooded fringes of the steppe). The species may be found to some extent in both in mountainous or flat country. Although adaptable to and sometimes seen in wetlands and in coastal areas, buzzards are often considered more of an upland species and neither appear to be regularly attracted to or to strongly avoid bodies of waters in non-migratory times. Buzzards in well-wooded areas of eastern Poland largely used large, mature stands of trees that were more humid, richer and denser than prevalent in surrounding area, but showed preference for those within 30 to 90 m (98 to 295 ft) of openings. Mostly resident buzzards live in lowlands and foothills, but they can live in timbered ridges and uplands as well as rocky coasts, sometimes nesting on cliff ledges rather than trees. Buzzards may live from sea level to elevations of 2,000 m (6,600 ft), breeding mostly below 1,000 m (3,300 ft) but they can winter to an elevation of 2,500 m (8,200 ft) and migrates easily to 4,500 m (14,800 ft). In the mountainous Italian Apennines, buzzard nests were at a mean elevation of 1,399 m (4,590 ft) and were, relative to the surrounding area, further from human developed areas (i.e. roads) and nearer to valley bottoms in rugged, irregularly topographed places, especially ones that faced northeast. Common buzzards are fairly adaptable to agricultural lands but will show can show regional declines in apparent response to agriculture. Changes to more extensive agricultural practices were shown to reduce buzzard populations in western France where reduction of “hedgerows, woodlots and grasslands areas\" caused a decline of buzzards and in Hampshire, England where more extensive grazing by free-range cattle and horses led to declines of buzzards, probably largely due to the seeming reduction of small mammal populations there. On the contrary, buzzards in central Poland adapted to removal of pine trees and reduction of rodent prey by changing nest sites and prey for a time with no strong change in their local numbers. Extensive urbanization seems to negatively affect buzzards, this species being generally less adaptable to urban areas than their New World counterparts, the red-tailed hawk. Although peri-urban areas can actually increase potential prey populations in a location at times, individual buzzard mortality, nest disturbances and nest site habitat degradation rises significantly in such areas. Common buzzards are fairly adaptive to rural areas as well as suburban areas with parks and large gardens, in addition to such areas if they're near farms.",
"title": "Distribution and habitat"
},
{
"paragraph_id": 18,
"text": "The common buzzard is a typical Buteo in much of its behaviour. It is most often seen either soaring at varying heights or perched prominently on tree tops, bare branches, telegraph poles, fence posts, rocks or ledges, or alternately well inside tree canopies. Buzzards will also stand and forage on the ground. In resident populations, it may spend more than half of its day inactively perched. Furthermore, it has been described a \"sluggish and not very bold\" bird of prey. It is a gifted soarer once aloft and can do so for extended periods but can appear laborious and heavy in level flight, more so nominate buzzards than steppe buzzards. Particularly in migration, as was recorded in the case of steppe buzzards' movement over Israel, buzzards readily adjust their direction, tail and wing placement and flying height to adjust for the surrounding environment and wind conditions. In Israel, migrant buzzards rarely soar all that high (maximum 1,000–2,000 m (3,300–6,600 ft) above ground) due to the lack of mountain ridges that in other areas typically produce flyways; however tail-winds are significant and allow birds to cover a mean of 9.8 metres per second (22 miles per hour).",
"title": "Behaviour"
},
{
"paragraph_id": 19,
"text": "The common buzzard is aptly described as a partial migrant. The autumn and spring movements of buzzards are subject to extensive variation, even down to the individual level, based on a region's food resources, competition (both from other buzzards and other predators), extent of human disturbance and weather conditions. Short distance movements are the norm for juveniles and some adults in autumn and winter, but more adults in central Europe and the British Isles remain on their year-around residence than do not. Even for first year juvenile buzzards dispersal may not take them very far. In England, 96% of first-years moved in winter to less than 100 km (62 mi) from their natal site. Southwestern Poland was recorded to be a fairly important wintering grounds for central European buzzards in early spring that apparently travelled from somewhat farther north, in winter average density was a locally high 2.12 individual per square kilometer. Habitat and prey availability seemed to be the primary drivers of habitat selection in fall for European buzzards. In northern Germany, buzzards were recorded to show preferences in fall for areas fairly distant from nesting site, with a large quantity of vole-holes and more widely dispersed perches. In Bulgaria, the mean wintering density was 0.34 individual per square kilometer, and buzzards showed a preference for agricultural over forested areas. Similar habitat preferences were recorded in northeastern Romania, where buzzard density was 0.334–0.539 individuals per square kilometer. The nominate buzzards of Scandinavia are somewhat more strongly migratory than most central European populations. However, birds from Sweden show some variation in migratory behaviours. A maximum of 41,000 individuals have been recorded at one of the main migration sites within southern Sweden in Falsterbo. In southern Sweden, winter movements and migration was studied via observation of buzzard colour. White individuals were substantially more common in southern Sweden rather than further north in their Swedish range. The southern population migrates earlier than intermediate to dark buzzards, in both adults and juveniles. A larger proportion of juveniles than of adults migrate in the southern population. Especially adults in the southern population are resident to a higher degree than more northerly breeders.",
"title": "Behaviour"
},
{
"paragraph_id": 20,
"text": "The entire population of the steppe buzzard is strongly migratory, covering substantial distances during migration. In no part of the range do steppe buzzards use the same summering and wintering grounds. Steppe buzzards are slightly gregarious in migration, and travel in variously sized flocks. This race migrates in September to October often from Asia Minor to the Cape of Africa in about a month but does not cross water, following around the Winam Gulf of Lake Victoria rather than crossing the several kilometer wide gulf. Similarly, they will funnel along both sides of the Black Sea. Migratory behavior of steppe buzzards mirrors those of broad-winged & Swainson's hawks (Buteo platypterus & swainsoni) in every significant way as similar long-distance migrating Buteos, including trans-equatorial movements, avoidance of large bodies of waters and flocking behaviour. Migrating steppe buzzards will rise up with the morning thermals and can cover an average of hundreds of miles a day using the available currents along mountain ridges and other topographic features. The spring migration for steppe buzzards peaks around March–April, but the latest vulpinus arrive in their breeding grounds by late April or early May. Distances covered by migrating steppe buzzards in one way flights from northern Europe (i.e. Finland or Sweden) to southern Africa have ranged over 13,000 km (8,100 mi) within a season . For the steppe buzzards from eastern and northern Europe and western Russia (which compromise a majority of all steppe buzzards), peak migratory numbers occur in differing areas in autumn, when the largest recorded movements occurs through Asia Minor such as Turkey, than in spring, when the largest recorded movement are to the south in the Middle East, especially Israel. The two migratory movements barely differ overall until they reach the Middle East and east Africa, where the largest volume of migrants in autumn occurs at the southern part of the Red Sea, around Djibouti and Yemen, while the main volume in spring is in the northernmost strait, around Egypt and Israel. In autumn, numbers of steppe buzzards recorded in migration have ranged up to 32,000 (recorded 1971) in northwestern Turkey (Bosporus) and in northeastern Turkey (Black Sea) up to 205,000 (recorded 1976). Further down in migration, autumn numbers of up to 98,000 have been recorded in passage in Djibouti. Between 150,000 and nearly 466,000 Steppe Buzzard have been recorded migrating through Israel during spring, making this not only the most abundant migratory raptor here but one of the largest raptor migrations anywhere in the world. Migratory movements of southern Africa buzzards largely occur along the major mountain ranges, such as the Drakensberg and Lebombo Mountains. Wintering steppe buzzards occur far more irregularly in Transvaal than Cape region in winter. The onset of migratory movement for steppe buzzards back to the breeding grounds in southern Africa is mainly in March, peaking in the second week. Steppe buzzard molt their feathers rapidly upon arrival at wintering grounds and seems to split their flight feather molt between breeding ground in Eurasia and wintering ground in southern Africa, the molt pausing during migration. In last 50 years, it was recorded that nominate buzzards are typically migrating shorter distances and wintering further north, possibly in response to climate change, resulting in relatively smaller numbers of them at migration sites. 
They are also extending their breeding range possibly reducing/supplanting steppe buzzards.",
"title": "Behaviour"
},
{
"paragraph_id": 21,
"text": "Resident populations of common buzzards tend to vocalize all year around, whereas migrants tend to vocalize only during the breeding season. Both nominate buzzards and steppe buzzards (and their numerous related subspecies within their types) tend to have similar voices. The main call of the species is a plaintive, far-carrying pee-yow or peee-oo, used as both contact call and more excitedly in aerial displays. Their call is sharper, more ringing when used in aggression, tends to be more drawn-out and wavering when chasing intruders, sharper, more yelping when as warning when approaching the nest or shorter and more explosive when called in alarm. Other variations of their vocal performances include a cat-like mew, uttered repeatedly on the wing or when perched, especially in display; a repeated mah has been recorded as uttered by pairs answering each other, further chuckles and croaks have also been recorded at nests. Juveniles can usually be distinguished by the discordant nature of their calls compared to those of adults.",
"title": "Behaviour"
},
{
"paragraph_id": 22,
"text": "The common buzzard is a generalist predator which hunts a wide variety of prey given the opportunity. Their prey spectrum extents to a wide variety of vertebrates including mammals, birds (from any age from eggs to adult birds), reptiles, amphibians and, rarely, fish, as well as to various invertebrates, mostly insects. Young animals are often attacked, largely the nidifugous young of various vertebrates. In total well over 300 prey species are known to be taken by common buzzards. Furthermore, prey size can vary from tiny beetles, caterpillars and ants to large adult grouse and rabbits up to nearly twice their body mass. Mean body mass of vertebrate prey was estimated at 179.6 g (6.34 oz) in Belarus. At times, they will also subsist partially on carrion, usually of dead mammals or fish. However, dietary studies have shown that they mostly prey upon small mammals, largely small rodents. Like many temperate zone raptorial birds of varied lineages, voles are an essential part of the common buzzard's diet. This bird's preference for the interface between woods and open areas frequently puts them in ideal vole habitat. Hunting in relatively open areas has been found to increase hunting success whereas more complete shrub cover lowered success. A majority of prey is taken by dropping from perch, and is normally taken on ground. Alternately, prey may be hunted in a low flight. This species tends not to hunt in a spectacular stoop but generally drops gently then gradually accelerate at bottom with wings held above the back. Sometimes, the buzzard also forages by random glides or soars over open country, wood edges or clearings. Perch hunting may be done preferentially but buzzards fairly regularly also hunt from a ground position when the habitat demands it. Outside the breeding season, as many 15–30 buzzards have been recorded foraging on ground in a single large field, especially juveniles. Normally the rarest foraging type is hovering. A study from Great Britain indicated that hovering does not seem to increase hunting success.",
"title": "Dietary biology"
},
{
"paragraph_id": 23,
"text": "A high diversity of rodents may be taken given the chance, as around 60 species of rodent have been recorded in the foods of common buzzards. It seems clear that voles are the most significant prey type for European buzzards. Nearly every study from the continent makes reference to the importance, in particular, of the two most numerous and widely distributed European voles: the 28.5 g (1.01 oz) common vole (Microtus arvalis) and the somewhat more northerly ranging 40 g (1.4 oz) field vole (Microtus agrestis). In southern Scotland, field voles were the best-represented species in pellets, accounting for 32.1% of 581 pellets. In southern Norway, field voles were again the main food in years with peak vole numbers, accounting for 40.8% of 179 prey items in 1985 and 24.7% of 332 prey items in 1994. Altogether, rodents amount to 67.6% and 58.4% of the foods in these respective peak vole years. However, in low vole population years, the contribution of rodents to the diet was minor. As far west as the Netherlands, common voles were the most regular prey, amounting to 19.6% of 6624 prey items in a very large study. Common voles were the main foods recorded in central Slovakia, accounting for 26.5% of 606 prey items. The common vole, or other related vole species at times, were the main foods as well in Ukraine (17.2% of 146 prey items) ranging east to Russia in the Privolshky Steppe Nature Reserve (41.8% of 74 prey items) and in Samara (21.4% of 183 prey items). Other records from Russia and Ukraine show voles ranging from slightly secondary prey to as much as 42.2% of the diet. In Belarus, voles, including Microtus species and 18.4 g (0.65 oz) bank voles (Myodes glareolus), accounted for 34.8% of the biomass on average in 1065 prey items from different study areas over 4 years. At least 12 species of the genus Microtus are known to be hunted by common buzzards and even this is probably conservative, moreover similar species like lemmings will be taken if available.",
"title": "Dietary biology"
},
{
"paragraph_id": 24,
"text": "Other rodents are taken largely opportunistically rather than by preference. Several wood mice (Apodemus ssp.) are known to be taken quite frequently but given their preference for activity in deeper woods than the field-forest interfaces preferred, they are rarely more than secondary food items. An exception was in Samara where the yellow-necked mouse (Apodemus flavicollis), one of the largest of its genus at 28.4 g (1.00 oz), made up 20.9%, putting it just behind the common vole in importance. Similarly, tree squirrels are readily taken but rarely important in the foods of buzzards in Europe, as buzzards apparently prefer to avoid taking prey from trees nor do they possess the agility typically necessary to capture significant quantities of tree squirrels. All four ground squirrels that range (mostly) into eastern Europe are also known to be common buzzard prey but little quantitative analysis has gone into how significant such predator-prey relations are. Rodent prey taken have ranged in size from the 7.8 g (0.28 oz) Eurasian harvest mouse (Micromys minutus) to the non-native, 1,100 g (2.4 lb) muskrat (Ondatra zibethicus). Other rodents taken either seldom or in areas where the food habits of buzzards are spottily known include flying squirrels, marmots (presumably very young if taken alive), chipmunks, spiny rats, hamsters, mole-rats, gerbils, jirds and jerboas and occasionally hearty numbers of dormice, although these are nocturnal. Surprisingly little research has gone into the diets of wintering steppe buzzards in southern Africa, considering their numerous status there. However, it has been indicated that the main prey remains consist of rodents such as the four-striped grass mouse (Rhabdomys pumilio) and Cape mole-rats (Georychus capensis).",
"title": "Dietary biology"
},
{
"paragraph_id": 25,
"text": "Other than rodents, two other groups of mammals can be counted as significant to the diet of common buzzards. One of these main prey types of import in the diets of common buzzards are leporids or lagomorphs, especially the European rabbit (Oryctolagus cuniculus) where it is found in numbers in a wild or feral state. In all dietary studies from Scotland, rabbits were highly important to the buzzard's diet. In southern Scotland, rabbits constituted 40.8% of remains at nests and 21.6% of pellet contents, while lagomorphs (mainly rabbits but also some young hares) were present in 99% of remains in Moray, Scotland. The nutritional richness relative to the commonest prey elsewhere, such as voles, might account for the high productivity of buzzards here. For example, clutch sizes were twice as large on average where rabbits were common (Moray) than were where they were rare (Glen Urquhart). In northern Ireland, an area of interest because it is devoid of any native vole species, rabbits were again the main prey. Here, lagomorphs constituted 22.5% of prey items by number and 43.7% by biomass. While rabbits are non-native, albeit long-established, in the British Isles, in their native area of the Iberian peninsula, rabbits are similarly significant to the buzzard's diet. In Murcia, Spain, rabbits were the most common mammal in the diet, making up 16.8% of 167 prey items. In a large study from northeastern Spain, rabbits were dominant in the buzzard's foods, making up 66.5% of 598 prey items. In the Netherlands, European rabbits were second in number (19.1% of 6624 prey items) only to common voles and the largest contributor of biomass to nests (36.7%). Outside of these (at least historically) rabbit-rich areas, leverets of the common hare species found in Europe can be important supplemental prey. European hare (Lepus europaeus) were the fourth most important prey species in central Poland and the third most significant prey species in Stavropol Krai, Russia. Buzzards normally attack the young of European rabbits and hares. Most of the rabbits taken by buzzard variously been estimated from 159 to 550 g (5.6 to 19.4 oz), and infrequently up to 700 g (1.5 lb) in weight. Similarly, in different areas and the mean weight of brown hares taken in Finland was around 500 g (1.1 lb). One young mountain hares (Lepus timidus) taken in Norway was estimated to about 1,000 g (2.2 lb). However, common buzzards have the physical ability to kill adult rabbits. This is supported by remains of relatively large-sized tarsus bones of the rabbit, up to 64mm in length, suggesting prime adult rabbits weigh up to 1,600 g (3.5 lb) can be preyed upon.",
"title": "Dietary biology"
},
{
"paragraph_id": 26,
"text": "The other significant mammalian prey type is insectivores, among which more than 20 species are known to be taken by this species, including nearly all the species of shrew, mole and hedgehog found in Europe. Moles are taken particularly often among this order, since as is the case with \"vole-holes\", buzzards probably tend to watch molehills in fields for activity and dive quickly from their perch when one of the subterranean mammals pops up. The most widely found mole in the buzzard's northern range is the 98 g (3.5 oz) European mole (Talpa europaea) and this is one of the more important non-rodent prey items for the species. This species was present in 55% of 101 remains in Glen Urquhart, Scotland and was the second most common prey species (18.6%) in 606 prey items in Slovakia. In Bari, Italy, the Roman mole (Talpa romana), of similar size to the European species, was the leading identified mammalian prey, making up 10.7% of the diet. The full-size range of insectivores may be taken by buzzards, ranging from the world's smallest mammal (by weight), the 1.8 g (0.063 oz) Etruscan shrew (Suncus etruscus) to arguably the heaviest insectivore, the 800 g (28 oz) European hedgehog (Erinaceus europaeus). Mammalian prey for common buzzards other than rodents, insectivores, and lagomorphs is rarely taken. Occasionally, some weasels such as least weasel (Mustela nivalis) and stoat (Mustela erminea) are taken, and remains of young pine martens (Martes martes) and adult european polecats (Mustela putorius) was found in buzzard nest. Numerous larger mammals, including medium-sized carnivores such as dogs, cats and foxes and various ungulates, are sometimes eaten as carrion by buzzards, mainly during lean winter months. Still-borns of deer are also visited with some frequency.",
"title": "Dietary biology"
},
{
"paragraph_id": 27,
"text": "When attacking birds, common buzzards chiefly prey on nestlings and fledglings of small to medium-sized birds, largely passerines but also a variety of gamebirds, but sometimes also injured, sickly or unwary but healthy adults. While capable of overpowering birds larger than itself, the common buzzard is usually considered to lack the agility necessary to capture many adult birds, even gamebirds which would presumably be weaker fliers considering their relatively heavy bodies and small wings. The amount of fledgling and younger birds preyed upon relative to adults is variable, however. For example, in the Italian Alps, 72% of birds taken were fledglings or recently fledged juveniles, 19% were nestlings and 8% were adults. On the contrary, in southern Scotland, even though the buzzards were taking relatively large bird prey, largely red grouse (Lagopus lagopus scotica), 87% of birds taken were reportedly adults. In total, as in many raptorial birds that are far from bird-hunting specialists, birds are the most diverse group in the buzzard's prey spectrum due to the sheer number and diversity of birds, few raptors do not hunt them at least occasionally. Nearly 150 species of bird have been identified in the common buzzard's diet. In general, despite many that are taken, birds usually take a secondary position in the diet after mammals. In northern Scotland, birds were fairly numerous in the foods of buzzards. The most often recorded avian prey and 2nd and 3rd most frequent prey species (after only field voles) in Glen Urquhart, were 23.9 g (0.84 oz) chaffinch (Fringilla coelebs) and 18.4 g (0.65 oz) meadow pipits (Anthus pratensis), with the buzzards taking 195 fledglings of these species against only 90 adults. This differed from Moray where the most frequent avian prey and 2nd most frequent prey species behind the rabbit was the 480 g (17 oz) common wood pigeon (Columba palumbus) and the buzzards took four times as many adults relative to fledglings.",
"title": "Dietary biology"
},
{
"paragraph_id": 28,
"text": "Birds were the primary food for common buzzards in the Italian Alps, where they made up 46% of the diet against mammal which accounted for 29% in 146 prey items. The leading prey species here were 103 g (3.6 oz) Eurasian blackbirds (Turdus merula) and 160 g (5.6 oz) Eurasian jays (Garrulus glandarius), albeit largely fledglings were taken of both. Birds could also take the leading position in years with low vole populations in southern Norway, in particular thrushes, namely the blackbird, the 67.7 g (2.39 oz) song thrush (Turdus philomelos) and the 61 g (2.2 oz) redwing (Turdus iliacus), which were collectively 22.1% of 244 prey items in 1993. In southern Spain, birds were equal in number to mammals in the diet, both at 38.3%, but most remains were classified as \"unidentified medium-sized birds\", although the most often identified species of those that apparently could be determined were Eurasian jays and red-legged partridges (Alectoris rufa). Similarly, in northern Ireland, birds were roughly equal in import to mammals but most were unidentified corvids. In Seversky Donets, Ukraine, birds and mammals both made up 39.3% of the foods of buzzards. Common buzzards may hunt nearly 80 species passerines and nearly all available gamebirds. Like many other largish raptors, gamebirds are attractive to hunt for buzzards due to their ground-dwelling habits. Buzzards were the most frequent predator in a study of juvenile pheasants in England, accounting for 4.3% of 725 deaths (against 3.2% by foxes, 0.7% by owls and 0.5% by other mammals). They also prey on a wide size range of birds, ranging down to Europe's smallest bird, the 5.2 g (0.18 oz) goldcrest (Regulus regulus). Very few individual birds hunted by buzzards weigh more than 500 g (1.1 lb). However, there have been some particularly large avian kills by buzzards, including any that weigh more or 1,000 g (2.2 lb), or about the largest average size of a buzzard, have including adults of mallard (Anas platyrhynchos), black grouse (Tetrao tetrix), ring-necked pheasant (Phasianus colchicus), common raven (Corvus corax) and some of the larger gulls if ambushed on their nests. The largest avian kill by a buzzard, and possibly largest known overall for the species, was an adult female western capercaillie (Tetrao urogallus) that weighed an estimated 1,985 g (4.376 lb). At times, buzzards will hunt the young of large birds such as herons and cranes. Other assorted avian prey has included a few species of waterfowl, most available pigeons and doves, cuckoos, swifts, grebes, rails, nearly 20 assorted shorebirds, tubenoses, hoopoes, bee-eaters and several types of woodpecker. Birds with more conspicuous or open nesting areas or habits are more likely to have fledglings or nestlings attacked, such as water birds, while those with more secluded or inaccessible nests, such as pigeons/doves and woodpeckers, adults are more likely to be hunted.",
"title": "Dietary biology"
},
{
"paragraph_id": 29,
"text": "The common buzzard may be the most regular avian predator of reptiles and amphibians in Europe apart from the sections where they are sympatric with the largely snake-eating short-toed eagle. In total, the prey spectrum of common buzzards include nearly 50 herpetological prey species. In studies from northern and southern Spain, the leading prey numerically were both reptilian, although in Biscay (northern Spain) the leading prey (19%) was classified as \"unidentified snakes\". In Murcia, the most numerous prey was the 77.2 g (2.72 oz) ocellated lizard (Timon lepidus), at 32.9%. In total, at Biscay and Murcia, reptiles accounted for 30.4% and 35.9% of the prey items, respectively. Findings were similar in a separate study from northeastern Spain, where reptiles amounted to 35.9% of prey. In Bari, Italy, reptiles were the main prey, making up almost exactly half of the biomass, led by the large green whip snake (Hierophis viridiflavus), maximum size up to 1,360 g (3.00 lb), at 24.2% of food mass. In Stavropol Krai, Russia, the 20 g (0.71 oz) sand lizard (Lacerta agilis) was the main prey at 23.7% of 55 prey items. The 16 g (0.56 oz) slowworm (Anguis fragilis), a legless lizard, became the most numerous prey for the buzzards of southern Norway in low vole years, amounting to 21.3% of 244 prey items in 1993 and were also common even in the peak vole year of 1994 (19% of 332 prey items). More or less any snake in Europe is potential prey and the buzzard has been known to be uncharacteristically bold in going after and overpowering large snakes such as rat snakes, ranging up to nearly 1.5 m (4 ft 11 in) in length, and healthy, large vipers despite the danger of being struck by such prey. However, in at least one case, the corpse of a female buzzard was found envenomed over the body of an adder that it had killed. In some parts of range, the common buzzard acquires the habit of taking many frogs and toads. This was the case in the Mogilev Region of Belarus where the 23 g (0.81 oz) moor frog (Rana arvalis) was the major prey (28.5%) over several years, followed by other frogs and toads amounting to 39.4% of the diet over the years. In central Scotland, the 46 g (1.6 oz) common toad (Bufo bufo) was the most numerous prey species, accounting for 21.7% of 263 prey items, while the common frog (Rana temporaria) made up a further 14.7% of the diet. Frogs made up about 10% of the diet in central Poland as well.",
"title": "Dietary biology"
},
{
"paragraph_id": 30,
"text": "When common buzzards feed on invertebrates, these are chiefly earthworms, beetles and caterpillars in Europe and largely seemed to be preyed on by juvenile buzzards with less refined hunting skills or in areas with mild winters and ample swarming or social insects. In most dietary studies, invertebrates are at best a minor supplemental contributor to the buzzard's diet. Nonetheless, roughly a dozen beetle species have found in the foods of buzzards from Ukraine alone. In winter in northeastern Spain, it was found that the buzzards switched largely from the vertebrate prey typically taken during spring and summer to a largely insect-based diet. Most of this prey was unidentified but the most frequently identified were European mantis (Mantis religiosa) and European mole cricket (Gryllotalpa gryllotalpa). In Ukraine, 30.8% of the food by number was found to be insects. Especially in winter quarters such as southern Africa, common buzzards are often attracted to swarming locusts and other orthopterans. In this way the steppe buzzard may mirror a similar long-distance migrant from the Americas, the Swainson's hawk, which feeds its young largely on nutritious vertebrates but switches to a largely insect-based once the reach their distant wintering grounds in South America. In Eritea, 18 returning migrant steppe buzzards were seen to feed together on swarms of grasshoppers. For wintering steppe buzzards in Zimbabwe, one source went so far as to refer to them as primarily insectivorous, apparently being somewhat locally specialized to feeding on termites. Stomach contents in buzzards from Malawi apparently consisted largely of grasshoppers (alternately with lizards). Fish tend to be the rarest class of prey found in the common buzzard's foods. There are a couple cases of predation of fish detected in the Netherlands, while elsewhere they've been known to have fed upon eels and carp.",
"title": "Dietary biology"
},
{
"paragraph_id": 31,
"text": "Common buzzards co-occur with dozens of other raptorial birds through their breeding, resident and wintering grounds. There may be many other birds that broadly overlap in prey selection to some extent. Furthermore, their preference for interfaces of forest and field is used heavily by many birds of prey. Some of the most similar species by diet are the common kestrel (Falco tinniculus), hen harrier (Circus cyaenus) and lesser spotted eagle (Clanga clanga), not to mention nearly every European species of owl, as all but two may locally prefer rodents such as voles in their diets. Diet overlap was found to be extensive between buzzards and red foxes (Vulpes vulpes) in Poland, with 61.9% of prey selection overlapping by species although the dietary breadth of the fox was broader and more opportunistic. Both fox dens and buzzard roosts were found to be significantly closer to high vole areas relative to the overall environment here. The only other widely found European Buteo, the rough-legged buzzard, comes to winter extensively with common buzzards. It was found in southern Sweden, habitat, hunting and prey selection often overlapped considerably. Rough-legged buzzards appear to prefer slightly more open habitat and took slightly fewer wood mice than common buzzard. Roughlegs also hover much more frequently and are more given to hunting in high winds. The two buzzards are aggressive towards one another and excluded each other from winter feeding territories in similar ways to the way they exclude conspecifics. In northern Germany, the buffer of their habitat preferences apparently accounted for the lack of effect on each other's occupancy between the two buzzard species. Despite a broad range of overlap, very little is known about the ecology of common and long-legged buzzards where they co-exist. However, it can be inferred from the long-legged species preference for predation on differing prey, such as blind mole-rats, ground squirrels, hamsters and gerbils, from the voles usually preferred by the common species, that serious competition for food is unlikely.",
"title": "Dietary biology"
},
{
"paragraph_id": 32,
"text": "A more direct negative effect has been found in buzzard's co-existence with northern goshawk (Accipiter gentilis). Despite the considerable discrepancy of the two species dietary habits, habitat selection in Europe is largely similar between buzzards and goshawks. Goshawks are slightly larger than buzzards and are more powerful, agile and generally more aggressive birds, and so they are considered dominant. In studies from Germany and Sweden, buzzards were found to be less disturbance sensitive than goshawks but were probably displaced into inferior nesting spots by the dominant goshawks. The exposure of buzzards to a dummy goshawk was found to decrease breeding success whereas there was no effect on breeding goshawks when they were exposed to a dummy buzzard. In many cases, in Germany and Sweden, goshawks displaced buzzards from their nests to take them over for themselves. In Poland, buzzards productivity was correlated to prey population variations, particularly voles which could vary from 10–80 per hectare, whereas goshawks were seemingly unaffected by prey variations; buzzards were found here to number 1.73 pair per 10 km (3.9 sq mi) against goshawk 1.63 pair per 10 km (3.9 sq mi). In contrast, the slightly larger counterpart of buzzards in North America, the red-tailed hawk (which is also slightly larger than American goshawks, the latter averaging smaller than European ones) are more similar in diet to goshawks there. Redtails are not invariably dominated by goshawks and are frequently able to outcompete them by virtue of greater dietary and habitat flexibility. Furthermore, red-tailed hawks are apparently equally capable of killing goshawks as goshawks are of killing them (killings are more one-sided in buzzard-goshawk interactions in favour of the latter). Other raptorial birds, including many of similar or mildly larger size than common buzzards themselves, may dominate or displace the buzzard, especially with aims to take over their nests. Species such as the black kite (Milvus migrans), booted eagle (Hieraeetus pennatus) and the lesser spotted eagle have been known to displace actively nesting buzzards, although in some cases the buzzards may attempt to defend themselves. The broad range of accipitrids that take over buzzard nests is somewhat unusual. More typically, common buzzards are victims of nest parasitism to owls and falcons, as neither of these other kinds of raptorial birds builds their own nests, but these may regularly take up occupancy on already abandoned or alternate nests rather than ones the buzzards are actively using. Even with birds not traditionally considered raptorial, such as common ravens, may compete for nesting sites with buzzards. In urban vicinities of southwestern England, it was found that peregrine falcons (Falco peregrinus) were harassing buzzards so persistently, in many cases resulting in injury or death for the buzzards, the attacks tending to peak during the falcon's breeding seasons and tend to be focused on subadult buzzards. Despite often being dominated in nesting site confrontations by even similarly sized raptors, buzzards appear to be bolder in direct competition over food with other raptors outside of the context of breeding, and has even been known to displace larger birds of prey such as red kites (Milvus milvus) and female buzzards may also dominate male goshawks (which are much smaller than the female goshawk) at disputed kills.",
"title": "Dietary biology"
},
{
"paragraph_id": 33,
"text": "Common buzzards are occasionally threatened by predation by other raptorial birds. Northern goshawks have been known to have preyed upon buzzards in a few cases. Much larger raptors are known to have killed a few buzzards as well, including steppe eagles (Aquila nipalensis) on migrating steppe buzzards in Israel. Further instances of predation on buzzards have involved golden, eastern imperial (Aquila heliaca), Bonelli's (Aquila fasciata) and white-tailed eagles (Haliaeetus albicilla) in Europe. Besides preying on adult buzzard, white-tailed eagles have been known to raise buzzards with their own young. These are most likely cases of eagles carrying off young buzzard nestlings with the intention of predation but, for unclear reasons, not killing them. Instead the mother eagle comes to brood the young buzzard. Despite the difference of the two species diets, white-tailed eagles are surprisingly successful at raising young buzzards (which are conspicuously much smaller than their own nestlings) to fledging. Studies in Lithuania of white-tailed eagle diets found that predation on common buzzards was more frequent than anticipated, with 36 buzzard remains found in 11 years of study of the summer diet of the white-tailed eagles. While nestling buzzards were multiple times more vulnerable to predation than adult buzzards in the Lithuanian data, the region's buzzards expelled considerable time and energy during the late nesting period trying to protect their nests. The most serious predator of common buzzards, however, is almost certainly the Eurasian eagle-owl (Bubo bubo). This is a very large owl with a mean body mass about three to four times greater than that of a buzzard. The eagle-owl, despite often taking small mammals that broadly overlap with those selected by buzzards, is considered a \"super-predator\" that is a major threat to nearly all co-existing raptorial birds, capably destroying whole broods of other raptorial birds and dispatching adult raptors even as large as eagles. Due to their large numbers in edge habitats, common buzzards frequently feature heavily in the eagle-owl's diet. Eagle-owls, as will some other large owls, also readily expropriate the nests of buzzards. In the Czech Republic and in Luxembourg, the buzzard was the third and fifth most frequent prey species for eagle-owls, respectively. The reintroduction of eagle-owls to sections of Germany has been found to have a slight deleterious effect on the local occupancy of common buzzards. The only sparing factor is the temporal difference (the buzzard nesting later in the year than the eagle-owl) and buzzards may locally be able to avoid nesting near an active eagle-owl family. As the ecology of the wintering population is relatively little studied, a similar very large owl at the top of the avian food chain, the Verreaux's eagle-owl (Bubo lacteus), is the only known predator of wintering steppe buzzards in southern Africa. Despite not being known predators of buzzards, other large, vole-eating owls are known to displace or to be avoided by nesting buzzards, such as great grey owls (Strix nebulosa) and Ural owls (Strix uralensis). Unlike with large birds of prey, next to nothing is known of mammalian predators of common buzzards, despite up to several nestlings and fledglings being likely depredated by mammals.",
"title": "Dietary biology"
},
{
"paragraph_id": 34,
"text": "Common buzzards themselves rarely present a threat to other raptorial birds but may occasionally kill a few of those of smaller size. The buzzard is a known predator of 237 g (8.4 oz) Eurasian sparrowhawks (Accipiter nisus), 184 g (6.5 oz) common kestrel and 152 g (5.4 oz) lesser kestrel (Falco naumanni) . Perhaps surprisingly, given the nocturnal habits of this prey, the group of raptorial birds the buzzard is known to hunt most extensively is owls. Known owl prey has included 419 g (14.8 oz) barn owls (Tyto alba), 92 g (3.2 oz) European scops owls (Otus scops), 475 g (16.8 oz) tawny owls (Strix aluco), 169 g (6.0 oz) little owls (Athene noctua), 138 g (4.9 oz) boreal owls (Aegolius funereus), 286 g (10.1 oz) long-eared owls (Asio otus) and 355 g (12.5 oz) short-eared owls (Asio flammeus). Despite their relatively large size, tawny owls are known to avoid buzzards as there are several records of them preying upon the owls.",
"title": "Dietary biology"
},
{
"paragraph_id": 35,
"text": "Home ranges of common buzzards are generally 0.5 to 2 km (0.19 to 0.77 sq mi). The size of breeding territory seem to be generally correlated with food supply. In a German study, the range was 0.8 to 1.8 km (0.31 to 0.69 sq mi) with an average of 1.26 km (0.49 sq mi). Some of the lowest pair densities of common buzzards seem to come from Russia. For instance, in Kerzhenets Nature Reserve, the recorded density was 0.6 pairs per 100 km (39 sq mi) and the average distance of nearest neighbors was 3.8 km (2.4 mi). The Snowdonia region of northern Wales held a pair per 9.7 km (3.7 sq mi) with a mean nearest neighbor distance of 1.95 km (1.21 mi); in adjacent Migneint, pair occurrence was 7.2 km (2.8 sq mi), with a mean distance of 1.53 km (0.95 mi). In the Teno massif of the Canary Islands, the average density was estimated as 23 pairs per 100 km (39 sq mi), similar to that of a middling continental population. On another set of islands, on Crete the density of pairs was lower at 5.7 pairs per 100 km (39 sq mi); here buzzards tend to have an irregular distribution, some in lower intensity harvest olive groves but their occurrence actually more common in agricultural than natural areas. In the Italian Alps, it was recorded in 1993–96 that there were from 28 to 30 pairs per 100 km (39 sq mi). In central Italy, density average was lower at 19.74 pairs per 100 km (39 sq mi). Higher density areas are known than those above. Two areas of the Midlands of England showed occupancies of 81 and 22 territorial pairs per 100 km (39 sq mi). High buzzard densities there were associated with high proportions of unimproved pasture and mature woodland within the estimated territories. Similarly high densities of common buzzards were estimated in central Slovakia using two different methods, here indicating densities of 96 to 129 pairs per 100 km (39 sq mi). Despite claims from the study of the English midlands were the highest known territory density for the species, a number ranging from 32 to 51 pairs in wooded area of merely 22 km (8.5 sq mi) in Czech Republic seems to surely exceed even those densities. The Czech study hypothesized that fragmentation of forest in human management of lands for wild sheep and deer, creating exceptional concentrations of prey such as voles, and lack of appropriate habitat in surrounding regions for the exceptionally high density.",
"title": "Breeding"
},
{
"paragraph_id": 36,
"text": "In the North-Estonian Neeruti landscape reserve (area 1250 ha), Marek Vahula found 9 populated nests in 1989 and 1990. One nest was found in 1982 and is apparently the oldest known nest that is still populated today.",
"title": "Breeding"
},
{
"paragraph_id": 37,
"text": "Common buzzards maintain their territories through flight displays. In Europe, territorial behaviour generally starts in February. However, displays are not uncommon throughout year in resident pairs, especially by males, and can elicit similar displays by neighbors. In them, common buzzards generally engage in high circling, spiraling upward on slightly raised wings. Mutual high circling by pairs sometimes go on at length, especially during the period prior to or during breeding season. In mutual displays, a pair may follow each other at 10–50 m (33–164 ft) in level flight. During the mutual displays, the male may engage in exaggerated deep flapping or zig-zag tumbling, apparently in response to the female being too distant. Two or three pairs may circle together at times and as many as 14 individual adults have been recorded over established display sites. Sky-dancing by common buzzards have been recorded in spring and autumn, typically by male but sometimes by female, nearly always with much calling. Their sky-dances are of the rollercoaster type, with upward sweep until they start to stall, but sometimes embellished with loops or rolls at the top. Next in the sky-dance, they dive on more or less closed wings before spreading them and shooting up again, upward sweeps of up to 30 m (98 ft), with dive drops of up to at least 60 m (200 ft). These dances may be repeated in series of 10 to 20. In the climax of the sky dance, the undulations become progressive shallower, often slowing and terminating directly onto a perch. Various other aerial displays include low contour flight or weaving among trees, frequently with deep beats and exaggerated upstrokes which show underwing pattern to rivals perched below. Talon grappling and occasionally cartwheeling downward with feet interlocked has been recorded in buzzards and, as in many raptors, is likely the physical culmination of the aggressive territorial display, especially between males. Despite the highly territorial nature of buzzards and their devotion to a single mate and breeding ground each summer, there is one case of a polyandrous trio of buzzards nesting in the Canary Islands.",
"title": "Breeding"
},
{
"paragraph_id": 38,
"text": "Common buzzards tend to build a bulky nest of sticks, twigs and often heather. Commonly, nests are up to 1 to 1.2 m (3 ft 3 in to 3 ft 11 in) across and 60 cm (24 in) deep. With reuse over years, the diameter can reach or exceed 1.5 m (4 ft 11 in) and weight of nests can reach over 200 kg (440 lb). Active nests tend to be lined with greenery, most often this consists of broad-leafed foliage but sometimes also includes rush or seaweed locally. Nest height in trees is commonly 3 to 25 m (9.8 to 82.0 ft), usually by main trunk or main crutch of the tree. In Germany, trees used for nesting consisted mostly of red beeches (Fagus sylvatica) (in 337 cases), whereas a further 84 were in assorted oaks. Buzzards were recorded to nest almost exclusively in pines in Spain at a mean height of 14.5 m (48 ft). Trees are generally used for a nesting location but they will also utilize crags or bluffs if trees are unavailable. Buzzards in one English study were surprisingly partial to nesting on well-vegetated banks and due to the rich surrounding environment habitat and prey population, were actually more productive than nests located in other locations here. Furthermore, a few ground nests were recorded in high prey-level agricultural areas in the Netherlands. In the Italian Alps, 81% of 108 nests were on cliffs. The common buzzard generally lacks the propensity of its Nearctic counterpart, the red-tailed hawk, to occasionally nest on or near manmade structures (often in heavily urbanized areas) but in Spain some pairs recorded nesting along the perimeter of abandoned buildings. Pairs often have several nests but some pairs may use one over several consecutive years. Two to four alternate nests in a territory is typical for common buzzards, especially those breeding further north in their range.",
"title": "Breeding"
},
{
"paragraph_id": 39,
"text": "The breeding season commences at differing times based on latitude. Common buzzard breeding seasons may fall as early as January to April but typically the breeding season is March to July in much of Palearctic. In the northern stretches of the range the breeding season may last into May–August. Mating usually occurs on or near the nest and lasts about 15 seconds, typically occurring several times a day. Eggs are usually laid in 2 to 3-day intervals. The clutch size can range from to 2 to 6, a relatively large clutch for an accipitrid. More northerly and westerly buzzard usually bear larger clutches, which average nearer 3, than those further east and south. In Spain, the average clutch size is about 2 to 2.3. From 4 locations in different parts of Europe, 43% had clutch size of 2, 41% had size of 3, clutches of 1 and 4 each constituted about 8%. Laying dates are remarkably constant throughout Great Britain. There are, however, highly significant differences in clutch size between British study areas. These do not follow any latitudinal gradient and it is likely that local factors such as habitat and prey availability are more important determinants of clutch size. The eggs are white in ground colour, rather round in shape with sporadic red to brown markings sometimes lightly showing. In the nominate race, egg size is 49.8–63.8 mm (1.96–2.51 in) in height by 39.1–48.2 mm (1.54–1.90 in) in diameter with an average of 55 mm × 44 mm (2.2 in × 1.7 in) in 600 eggs. In the race of vulpinus, egg height is 48–63 mm (1.9–2.5 in) by 39.2–47.5 mm (1.54–1.87 in) with an average of 54.2 mm × 42.8 mm (2.13 in × 1.69 in) in 303 eggs. Eggs are generally laid in late March to early April in extreme south, sometime in April in most of Europe, into May and possibly even early June in the extreme north. If eggs are lost to a predator (including humans) or fail in some other way, common buzzards do not usually lay replacement clutches but they have been recorded, even with 3 attempts of clutches by a single female. The female does most but not all of the incubating, doing so for a total of 33–35 days. The female remains at the nest brooding the young in the early stages with the male bringing all prey. At about 8–12 days, both the male and female will bring prey but the female continues to do all feeding until the young can tear up their own prey.",
"title": "Breeding"
},
{
"paragraph_id": 40,
"text": "Once hatching commences, it may take 48 hours for the chick to chip out. Hatching may take place over 3–7 days, with new hatchlings averaging about 45 g (1.6 oz) in body mass. Often the youngest nestling dies from starvation, especially in broods of three or more. In nestlings, the first down replaces by longer, coarser down at about 7 days of age with the first proper feathers appearing at 12 to 15 days. The young are nearly fully feathered rather than downy at about a month of age and can start to feed themselves as well. The first attempts to leave the nest are often at about 40–50 days, averaging usually 40–45 in nominate buzzards in Europe, but more quickly on average at 40–42 in vulpinus. Fledging occurs typically at 43–54 days but in extreme cases at as late 62 days. Sexual dimorphism is apparent in European fledglings, as females often scale about 1,000 g (2.2 lb) against 780 g (1.72 lb) in males. After leaving the nest, buzzards generally stay close by, but with migratory ones there is more definitive movement generally southbound. Full independence is generally sought 6 to 8 weeks after fledging. 1st year birds generally remain in wintering area for following summer but then return to near area of origin but then migrate south again without breeding. Radio-tracking suggests that most dispersal, even relatively early dispersals, by juvenile buzzards is undertaken independently rather than via exile by parents, as has been recorded in some other birds of prey. In common buzzards, generally speaking, siblings stay quite close to each other after dispersal from their parents and form something of a social group, although parents usually tolerate their presence on their territory until they are laying another clutch. However, the social group of siblings disbands at about a year of age. Juvenile buzzards are subordinate to adults during most encounters and tend to avoid direct confrontations and actively defended territories until they are of appropriate age (usually at least 2 years of age). This was the case as well for steppe buzzard juveniles wintering in southern Africa, although in some cases juveniles were able to successfully steal prey from adults there.",
"title": "Breeding"
},
{
"paragraph_id": 41,
"text": "Numerous factors may weigh into the breeding success of common buzzards. Chiefly among these are prey populations, habitat, disturbance and persecution levels and innerspecies competition. In Germany, intra- and interspecific competition, plumage morph, laying date, precipitation levels and anthropogenic disturbances in the breeding territory, in declining order, were deemed to be the most significant bearers of breeding success. In an accompanying study, it was found that a mere 17% of adult birds of both sexes present in a German study area produced 50% of offspring, so breeding success may be lower than perceived and many adult buzzards for unknown causes may not attempt to breed at all. High breeding success was detected in Argyll, Scotland, due likely to hearty prey populations (rabbits) but also probably a lower local rate of persecution than elsewhere in the British isles. Here, the mean number of fledglings were 1.75 against 0.82–1.41 in other parts of Britain. It was found in the English Midlands that breeding success both by measure of clutch size and mean number of fledglings, was relatively high thanks again to high prey populations. Breeding success was lower farther from significant stands of trees in the Midlands and most nesting failures that could be determined occurred in the incubation stage, possibly in correlation with predation of eggs by corvids. More significant than even prey, late winter-early spring was found to be likely the primary driver of breeding success in buzzards from southern Norway. Here, even in peak vole years, nesting success could be considerably hampered by heavy snow at this crucial stage. In Norway, large clutches of 3+ were expected only in years with minimal snow cover, high vole populations and lighter rains in May–June. In the Italian Alps, the mean number of fledglings per pair was 1.07. 33.4% of nesting attempts were failures per a study in southwestern Germany, with an average of 1.06 of all nesting attempts and 1.61 for all successful attempt. In Germany, weather conditions and rodent populations seemed to be the primary drivers of nesting success. In Murcia part of Spain contrasted with Biscay to the north, higher levels of interspecific competition from booted eagles and northern goshawks did not appear to negatively affect breeding success due to more ample prey populations (rabbits again) in Murcia than in Biscay.",
"title": "Breeding"
},
{
"paragraph_id": 42,
"text": "In the Westphalia area of Germany, it was found that intermediate colour morphs were more productive than those that were darker or lighter. For reasons that are not entirely clear, apparently fewer parasites were found to afflict broods of intermediate plumaged buzzard less so than dark and light phenotypes, in particular higher melanin levels somehow were found to be more inviting to parasitic organism that effect the health of the buzzard's offspring. The composition of habitat and its relation to human disturbance were important variables for the dark and light phenotypes but were less important to intermediate individuals. Thus selection pressures resulting from different factors did not vary much between sexes but varied between the three phenotypes in the population. Breeding success in areas with wild European rabbits was considerably effected by rabbit myxomatosis and rabbit haemorrhagic disease, both of which have heavily depleted wild rabbit population. Breeding success in formerly rabbit-rich areas were recorded to decrease from as much as 2.6 to as little as 0.9 young per pair. Age of first breeding in several radio-tagged buzzards showed only a single male breeding as early as his 2nd summer (at about a year of age). Significantly more buzzards were found to start breeding at the 3 summer but breeding attempts can be individually erratic given the availability of habitat, food and mates. The mean life expectancy was estimated at 6.3 years in the late 1950s, but this was at a time of high persecution when humans were causing 50–80% of buzzard deaths. In a more modern context with regionally reduced persecution rates, the lifespan expected can be higher (possibly in excess of 10 years at times) but is still widely variable due to a wide variety of factors.",
"title": "Breeding"
},
{
"paragraph_id": 43,
"text": "The common buzzard is one of the most numerous birds of prey in its range. Almost certainly, it is the most numerous diurnal bird of prey throughout Europe. Conservative estimates put the total population at no fewer than 700,000 pairs in Europe, which are more than twice the total estimates for the next four birds of prey estimated as most common: the Eurasian sparrowhawk (more than 340,000 pairs), the common kestrel (more than 330,000 pairs) and the northern goshawk (more than 160,000 pairs). Ferguson-Lees et al. roughly estimated that the total population of the common buzzard ranges to nearly 5 million pairs but at time was including the now spilit-off species of eastern and Himalayan buzzards in those numbers. These numbers may be excessive but the total population of common buzzards is certain to total well over seven figures. More recently, the IUCN estimated the common buzzard (sans the Himalayan and eastern subspecies) to number somewhere between 2.1 and 3.7 million birds, which would put this buzzard one of the most numerous of all accipitrid family members (estimates for Eurasian sparrowhawks, red-tailed hawks and northern goshawks also may range over 2 million). In 1991, other than their absence in Iceland, after having been extent as breeder by 1910, buzzards recolonized Ireland sometime in the 1950s and has increased by the 1990s to 26 pairs. Supplemental feeding has reportedly helped the Irish buzzard population to rebound, especially where rabbits have decreased. Most other countries have at least four figures of breeding pairs. As of the 1990s, other countries such as Great Britain, France, Switzerland, Czech Republic, Poland, Sweden, Belarus and Ukraine all numbered pairs well into five figures, while Germany had an estimated 140,000 pairs and European Russian may have held 500,000 pairs. Between 44,000 and 61,000 pairs nested in Great Britain by 2001 with numbers gradually increasing after past persecution, habitat alteration and prey reductions, making it by far the most abundant diurnal raptor there. In Westphalia, Germany, population of Buzzards was shown to nearly triple over the last few decades. The Westphalian buzzards are possibly benefiting from increasingly warmer mean climate, which in turn is increasing vulnerability of voles. However, the rate of increase was significantly greater in males than in females, in part because of reintroduced Eurasian eagle-owls to the region preying on nests (including the brooding mother), which may in turn put undue pressure on the local buzzard population.",
"title": "Status"
},
{
"paragraph_id": 44,
"text": "At least 238 common buzzards killed through persecution were recovered in England from 1975 to 1989, largely through poisoning. Persecution did not significantly differ at any time due this span of years nor did the persecution rates decrease, nor did it when compared to rates of last survey of this in 1981. While some persecution persists in England, it is probably slightly less common today. The buzzard was found to be the most vulnerable raptor to power-line collision fatalities in Spain probably as it is one of the most common largish birds, and together with the common raven, it accounted for nearly a third of recorded electrocutions. Given its relative abundance, the common buzzard is held as an ideal bioindicator, as they are effected by a range of pesticide and metal contamination through pollution like other raptors but are largely resilient to these at the population levels. In turn, this allows biologists to study (and harvest if needed) the buzzards intensively and their environments without affecting their overall population. The lack of affect may be due to the buzzard's adaptability as well as its relatively short, terrestrially-based food chain, which exposes them to less risk of contamination and population depletions than raptors that prey more heavily on water-based prey (such as some large eagles) or other birds (such as falcons). Common buzzards are seldom vulnerable to egg-shell thinning from DDT as are other raptors but egg-shell thinning has been recorded. Other factors that negatively effect raptors have been studied in common buzzards are helminths, avipoxvirus and assorted other viruses.",
"title": "Status"
}
] | The common buzzard is a medium-to-large bird of prey which has a large range. It is a member of the genus Buteo in the family Accipitridae. The species lives in most of Europe and extends its breeding range across much of the Palearctic as far as northwestern China, far western Siberia and northwestern Mongolia. Over much of its range, it is a year-round resident. However, buzzards from the colder parts of the Northern Hemisphere as well as those that breed in the eastern part of their range typically migrate south for the northern winter, many journeying as far as South Africa. The common buzzard is an opportunistic predator that can take a wide variety of prey, but it feeds mostly on small mammals, especially rodents such as voles. It typically hunts from a perch. Like most accipitrid birds of prey, it builds a nest, typically in trees in this species, and is a devoted parent to a relatively small brood of young. The common buzzard appears to be the most common diurnal raptor in Europe, as estimates of its total global population run well into the millions. | 2001-09-15T13:27:04Z | 2023-12-17T12:20:58Z | [
"Template:Wikispecies",
"Template:Webarchive",
"Template:Buteoninae",
"Template:Speciesbox",
"Template:BirdLife",
"Template:Avibase",
"Template:VIREO",
"Template:Xeno-canto species",
"Template:Taxonbar",
"Template:Cite web",
"Template:Cvt",
"Template:Short description",
"Template:Convert",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite book",
"Template:Cite journal",
"Template:ISBN",
"Template:Commons category",
"Template:Use dmy dates",
"Template:Authority control",
"Template:InternetBirdCollection"
] | https://en.wikipedia.org/wiki/Common_buzzard |
4,194 | Bohrium | Bohrium is a synthetic chemical element; it has symbol Bh and atomic number 107. It is named after Danish physicist Niels Bohr. As a synthetic element, it can be created in particle accelerators but is not found in nature. All known isotopes of bohrium are highly radioactive; the most stable known isotope is 270Bh with a half-life of approximately 2.4 minutes, though the unconfirmed 278Bh may have a longer half-life of about 11.5 minutes.
In the periodic table, it is a d-block transactinide element. It is a member of the 7th period and belongs to the group 7 elements as the fifth member of the 6d series of transition metals. Chemistry experiments have confirmed that bohrium behaves as the heavier homologue to rhenium in group 7. The chemical properties of bohrium are characterized only partly, but they compare well with the chemistry of the other group 7 elements.
Superheavy elements, also known as transactinide elements, transactinides, or super-heavy elements, are the chemical elements with atomic number greater than 103. The superheavy elements are those beyond the actinides in the periodic table; the last actinide is lawrencium (atomic number 103). By definition, superheavy elements are also transuranium elements, i.e., having atomic numbers greater than that of uranium (92). Depending on the definition of group 3 adopted by authors, lawrencium may also be included to complete the 6d series.
Glenn T. Seaborg first proposed the actinide concept, which led to the acceptance of the actinide series. He also proposed a transactinide series ranging from element 104 to 121 and a superactinide series approximately spanning elements 122 to 153 (although more recent work suggests the end of the superactinide series to occur at element 157 instead). The transactinide seaborgium was named in his honor.
Superheavy elements are radioactive and have only been obtained synthetically in laboratories. No macroscopic sample of any of these elements has ever been produced. Superheavy elements are all named after physicists and chemists or important locations involved in the synthesis of the elements.
IUPAC defines an element to exist if its lifetime is longer than 10⁻¹⁴ seconds, which is the time it takes for the atom to form an electron cloud.
Two groups claimed discovery of the element. Evidence of bohrium was first reported in 1976 by a Soviet research team led by Yuri Oganessian, in which targets of bismuth-209 and lead-208 were bombarded with accelerated nuclei of chromium-54 and manganese-55 respectively. Two activities, one with a half-life of one to two milliseconds, and the other with an approximately five-second half-life, were seen. Since the ratio of the intensities of these two activities was constant throughout the experiment, it was proposed that the first was from the isotope bohrium-261 and that the second was from its daughter dubnium-257. Later, the dubnium isotope was corrected to dubnium-258, which indeed has a five-second half-life (dubnium-257 has a one-second half-life); however, the half-life observed for its parent is much shorter than the half-lives later observed in the definitive discovery of bohrium at Darmstadt in 1981. The IUPAC/IUPAP Transfermium Working Group (TWG) concluded that while dubnium-258 was probably seen in this experiment, the evidence for the production of its parent bohrium-262 was not convincing enough.
In 1981, a German research team led by Peter Armbruster and Gottfried Münzenberg at the GSI Helmholtz Centre for Heavy Ion Research (GSI Helmholtzzentrum für Schwerionenforschung) in Darmstadt bombarded a target of bismuth-209 with accelerated nuclei of chromium-54 to produce 5 atoms of the isotope bohrium-262: 209Bi + 54Cr → 262Bh + n
This discovery was further substantiated by their detailed measurements of the alpha decay chain of the produced bohrium atoms to previously known isotopes of fermium and californium. The IUPAC/IUPAP Transfermium Working Group (TWG) recognised the GSI collaboration as official discoverers in their 1992 report.
In September 1992, the German group suggested the name nielsbohrium with symbol Ns to honor the Danish physicist Niels Bohr. The Soviet scientists at the Joint Institute for Nuclear Research in Dubna, Russia had suggested this name be given to element 105 (which was finally called dubnium) and the German team wished to recognise both Bohr and the fact that the Dubna team had been the first to propose the cold fusion reaction, and simultaneously help to solve the controversial problem of the naming of element 105. The Dubna team agreed with the German group's naming proposal for element 107.
There was an element naming controversy as to what the elements from 104 to 106 were to be called; the IUPAC adopted unnilseptium (symbol Uns) as a temporary, systematic element name for this element. In 1994 a committee of IUPAC recommended that element 107 be named bohrium, not nielsbohrium, since there was no precedent for using a scientist's complete name in the naming of an element. This was opposed by the discoverers as there was some concern that the name might be confused with boron and in particular the distinguishing of the names of their respective oxyanions, bohrate and borate. The matter was handed to the Danish branch of IUPAC which, despite this, voted in favour of the name bohrium, and thus the name bohrium for element 107 was recognized internationally in 1997; the names of the respective oxyanions of boron and bohrium remain unchanged despite their homophony.
Bohrium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Twelve different isotopes of bohrium have been reported with atomic masses 260–262, 264–267, 270–272, 274, and 278, one of which, bohrium-262, has a known metastable state. All of these but the unconfirmed 278Bh decay only through alpha decay, although some unknown bohrium isotopes are predicted to undergo spontaneous fission.
The lighter isotopes usually have shorter half-lives; half-lives of under 100 ms for 260Bh, 261Bh, 262Bh, and 262mBh were observed. 264Bh, 265Bh, 266Bh, and 271Bh are more stable at around 1 s, and 267Bh and 272Bh have half-lives of about 10 s. The heaviest isotopes are the most stable, with 270Bh and 274Bh having measured half-lives of about 2.4 min and 40 s respectively, and the even heavier unconfirmed isotope 278Bh appearing to have an even longer half-life of about 11.5 minutes.
The most proton-rich isotopes with masses 260, 261, and 262 were directly produced by cold fusion, those with mass 262 and 264 were reported in the decay chains of meitnerium and roentgenium, while the neutron-rich isotopes with masses 265, 266, 267 were created in irradiations of actinide targets. The five most neutron-rich ones with masses 270, 271, 272, 274, and 278 (unconfirmed) appear in the decay chains of 282Nh, 287Mc, 288Mc, 294Ts, and 290Fl respectively. The half-lives of bohrium isotopes range from about ten milliseconds for 262mBh to about one minute for 270Bh and 274Bh, extending to about 11.5 minutes for the unconfirmed 278Bh, which may have one of the longest half-lives among reported superheavy nuclides.
Very few properties of bohrium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that bohrium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, but properties of bohrium metal remain unknown and only predictions are available.
Bohrium is the fifth member of the 6d series of transition metals and the heaviest member of group 7 in the periodic table, below manganese, technetium and rhenium. All the members of the group readily display their group oxidation state of +7, and the state becomes more stable as the group is descended. Thus bohrium is expected to form a stable +7 state. Technetium also shows a stable +4 state whilst rhenium exhibits stable +4 and +3 states. Bohrium may therefore show these lower states as well. The higher +7 oxidation state is more likely to exist in oxyanions, such as perbohrate, BhO4−, analogous to the lighter permanganate, pertechnetate, and perrhenate. Nevertheless, bohrium(VII) is likely to be unstable in aqueous solution, and would probably be easily reduced to the more stable bohrium(IV).
The lighter group 7 elements are known to form volatile heptoxides M2O7 (M = Mn, Tc, Re), so bohrium should also form the volatile oxide Bh2O7. The oxide should dissolve in water to form perbohric acid, HBhO4. Rhenium and technetium form a range of oxyhalides from the halogenation of the oxide. The chlorination of the oxide forms the oxychlorides MO3Cl, so BhO3Cl should be formed in this reaction. Fluorination results in MO3F and MO2F3 for the heavier elements in addition to the rhenium compounds ReOF5 and ReF7. Therefore, oxyfluoride formation for bohrium may help to indicate eka-rhenium properties. Since the oxychlorides are asymmetrical, and they should have increasingly large dipole moments going down the group, they should become less volatile in the order TcO3Cl > ReO3Cl > BhO3Cl: this was experimentally confirmed in 2000 by measuring the enthalpies of adsorption of these three compounds. The values for TcO3Cl and ReO3Cl are −51 kJ/mol and −61 kJ/mol respectively; the experimental value for BhO3Cl is −77.8 kJ/mol, very close to the theoretically expected value of −78.5 kJ/mol.
Bohrium is expected to be a solid under normal conditions and assume a hexagonal close-packed crystal structure (c/a = 1.62), similar to its lighter congener rhenium. Early predictions by Fricke estimated its density at 37.1 g/cm³, but newer calculations predict a somewhat lower value of 26–27 g/cm³.
The atomic radius of bohrium is expected to be around 128 pm. Due to the relativistic stabilization of the 7s orbital and destabilization of the 6d orbital, the Bh⁺ ion is predicted to have an electron configuration of [Rn] 5f¹⁴ 6d⁴ 7s², giving up a 6d electron instead of a 7s electron, which is the opposite of the behavior of its lighter homologues manganese and technetium. Rhenium, on the other hand, follows its heavier congener bohrium in giving up a 5d electron before a 6s electron, as relativistic effects have become significant by the sixth period, where they cause among other things the yellow color of gold and the low melting point of mercury. The Bh²⁺ ion is expected to have an electron configuration of [Rn] 5f¹⁴ 6d³ 7s²; in contrast, the Re²⁺ ion is expected to have a [Xe] 4f¹⁴ 5d⁵ configuration, this time analogous to manganese and technetium. The ionic radius of hexacoordinate heptavalent bohrium is expected to be 58 pm (heptavalent manganese, technetium, and rhenium having values of 46, 57, and 53 pm respectively). Pentavalent bohrium should have a larger ionic radius of 83 pm.
In 1995, the first reported attempt to isolate the element was unsuccessful, prompting new theoretical studies to investigate how best to study bohrium (using its lighter homologs technetium and rhenium for comparison) and to remove unwanted contaminating elements such as the trivalent actinides, the group 5 elements, and polonium.
In 2000, it was confirmed that although relativistic effects are important, bohrium behaves like a typical group 7 element. A team at the Paul Scherrer Institute (PSI) conducted a chemistry experiment using six atoms of 267Bh produced in the reaction between 249Bk and 22Ne ions. The resulting atoms were thermalised and reacted with a HCl/O2 mixture to form a volatile oxychloride. The reaction also produced isotopes of its lighter homologues, technetium and rhenium. The isothermal adsorption curves were measured and gave strong evidence for the formation of a volatile oxychloride with properties similar to that of rhenium oxychloride. This placed bohrium as a typical member of group 7. The adsorption enthalpies of the oxychlorides of technetium, rhenium, and bohrium were measured in this experiment, agreeing very well with the theoretical predictions and implying a sequence of decreasing oxychloride volatility down group 7 of TcO3Cl > ReO3Cl > BhO3Cl.
The longer-lived heavy isotopes of bohrium, produced as the daughters of heavier elements, offer advantages for future radiochemical experiments. Although the heavy isotope 267Bh requires a rare and highly radioactive berkelium target for its production, the isotopes 270Bh, 271Bh, and 272Bh can be readily produced as daughters of more easily produced moscovium and nihonium isotopes. | [
{
"paragraph_id": 0,
"text": "Bohrium is a synthetic chemical element; it has symbol Bh and atomic number 107. It is named after Danish physicist Niels Bohr. As a synthetic element, it can be created in particle accelerators but is not found in nature. All known isotopes of bohrium are highly radioactive; the most stable known isotope is Bh with a half-life of approximately 2.4 minutes, though the unconfirmed Bh may have a longer half-life of about 11.5 minutes.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In the periodic table, it is a d-block transactinide element. It is a member of the 7th period and belongs to the group 7 elements as the fifth member of the 6d series of transition metals. Chemistry experiments have confirmed that bohrium behaves as the heavier homologue to rhenium in group 7. The chemical properties of bohrium are characterized only partly, but they compare well with the chemistry of the other group 7 elements.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Superheavy elements, also known as transactinide elements, transactinides, or super-heavy elements, are the chemical elements with atomic number greater than 103. The superheavy elements are those beyond the actinides in the periodic table; the last actinide is lawrencium (atomic number 103). By definition, superheavy elements are also transuranium elements, i.e., having atomic numbers greater than that of uranium (92). Depending on the definition of group 3 adopted by authors, lawrencium may also be included to complete the 6d series.",
"title": "Introduction"
},
{
"paragraph_id": 3,
"text": "Glenn T. Seaborg first proposed the actinide concept, which led to the acceptance of the actinide series. He also proposed a transactinide series ranging from element 104 to 121 and a superactinide series approximately spanning elements 122 to 153 (although more recent work suggests the end of the superactinide series to occur at element 157 instead). The transactinide seaborgium was named in his honor.",
"title": "Introduction"
},
{
"paragraph_id": 4,
"text": "Superheavy elements are radioactive and have only been obtained synthetically in laboratories. No macroscopic sample of any of these elements have ever been produced. Superheavy elements are all named after physicists and chemists or important locations involved in the synthesis of the elements.",
"title": "Introduction"
},
{
"paragraph_id": 5,
"text": "IUPAC defines an element to exist if its lifetime is longer than 10 second, which is the time it takes for the atom to form an electron cloud.",
"title": "Introduction"
},
{
"paragraph_id": 6,
"text": "Two groups claimed discovery of the element. Evidence of bohrium was first reported in 1976 by a Soviet research team led by Yuri Oganessian, in which targets of bismuth-209 and lead-208 were bombarded with accelerated nuclei of chromium-54 and manganese-55 respectively. Two activities, one with a half-life of one to two milliseconds, and the other with an approximately five-second half-life, were seen. Since the ratio of the intensities of these two activities was constant throughout the experiment, it was proposed that the first was from the isotope bohrium-261 and that the second was from its daughter dubnium-257. Later, the dubnium isotope was corrected to dubnium-258, which indeed has a five-second half-life (dubnium-257 has a one-second half-life); however, the half-life observed for its parent is much shorter than the half-lives later observed in the definitive discovery of bohrium at Darmstadt in 1981. The IUPAC/IUPAP Transfermium Working Group (TWG) concluded that while dubnium-258 was probably seen in this experiment, the evidence for the production of its parent bohrium-262 was not convincing enough.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In 1981, a German research team led by Peter Armbruster and Gottfried Münzenberg at the GSI Helmholtz Centre for Heavy Ion Research (GSI Helmholtzzentrum für Schwerionenforschung) in Darmstadt bombarded a target of bismuth-209 with accelerated nuclei of chromium-54 to produce 5 atoms of the isotope bohrium-262:",
"title": "History"
},
{
"paragraph_id": 8,
"text": "This discovery was further substantiated by their detailed measurements of the alpha decay chain of the produced bohrium atoms to previously known isotopes of fermium and californium. The IUPAC/IUPAP Transfermium Working Group (TWG) recognised the GSI collaboration as official discoverers in their 1992 report.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In September 1992, the German group suggested the name nielsbohrium with symbol Ns to honor the Danish physicist Niels Bohr. The Soviet scientists at the Joint Institute for Nuclear Research in Dubna, Russia had suggested this name be given to element 105 (which was finally called dubnium) and the German team wished to recognise both Bohr and the fact that the Dubna team had been the first to propose the cold fusion reaction, and simultaneously help to solve the controversial problem of the naming of element 105. The Dubna team agreed with the German group's naming proposal for element 107.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "There was an element naming controversy as to what the elements from 104 to 106 were to be called; the IUPAC adopted unnilseptium (symbol Uns) as a temporary, systematic element name for this element. In 1994 a committee of IUPAC recommended that element 107 be named bohrium, not nielsbohrium, since there was no precedent for using a scientist's complete name in the naming of an element. This was opposed by the discoverers as there was some concern that the name might be confused with boron and in particular the distinguishing of the names of their respective oxyanions, bohrate and borate. The matter was handed to the Danish branch of IUPAC which, despite this, voted in favour of the name bohrium, and thus the name bohrium for element 107 was recognized internationally in 1997; the names of the respective oxyanions of boron and bohrium remain unchanged despite their homophony.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Bohrium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Twelve different isotopes of bohrium have been reported with atomic masses 260–262, 264–267, 270–272, 274, and 278, one of which, bohrium-262, has a known metastable state. All of these but the unconfirmed Bh decay only through alpha decay, although some unknown bohrium isotopes are predicted to undergo spontaneous fission.",
"title": "Isotopes"
},
{
"paragraph_id": 12,
"text": "The lighter isotopes usually have shorter half-lives; half-lives of under 100 ms for Bh, Bh, Bh, and Bh were observed. Bh, Bh, Bh, and Bh are more stable at around 1 s, and Bh and Bh have half-lives of about 10 s. The heaviest isotopes are the most stable, with Bh and Bh having measured half-lives of about 2.4 min and 40 s respectively, and the even heavier unconfirmed isotope Bh appearing to have an even longer half-life of about 11.5 minutes.",
"title": "Isotopes"
},
{
"paragraph_id": 13,
"text": "The most proton-rich isotopes with masses 260, 261, and 262 were directly produced by cold fusion, those with mass 262 and 264 were reported in the decay chains of meitnerium and roentgenium, while the neutron-rich isotopes with masses 265, 266, 267 were created in irradiations of actinide targets. The five most neutron-rich ones with masses 270, 271, 272, 274, and 278 (unconfirmed) appear in the decay chains of Nh, Mc, Mc, Ts, and Fl respectively. The half-lives of bohrium isotopes range from about ten milliseconds for Bh to about one minute for Bh and Bh, extending to about 11.5 minutes for the unconfirmed Bh, which may have one of the longest half-lives among reported superheavy nuclides.",
"title": "Isotopes"
},
{
"paragraph_id": 14,
"text": "Very few properties of bohrium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that bohrium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, but properties of bohrium metal remain unknown and only predictions are available.",
"title": "Predicted properties"
},
{
"paragraph_id": 15,
"text": "Bohrium is the fifth member of the 6d series of transition metals and the heaviest member of group 7 in the periodic table, below manganese, technetium and rhenium. All the members of the group readily portray their group oxidation state of +7 and the state becomes more stable as the group is descended. Thus bohrium is expected to form a stable +7 state. Technetium also shows a stable +4 state whilst rhenium exhibits stable +4 and +3 states. Bohrium may therefore show these lower states as well. The higher +7 oxidation state is more likely to exist in oxyanions, such as perbohrate, BhO4, analogous to the lighter permanganate, pertechnetate, and perrhenate. Nevertheless, bohrium(VII) is likely to be unstable in aqueous solution, and would probably be easily reduced to the more stable bohrium(IV).",
"title": "Predicted properties"
},
{
"paragraph_id": 16,
"text": "The lighter group 7 elements are known to form volatile heptoxides M2O7 (M = Mn, Tc, Re), so bohrium should also form the volatile oxide Bh2O7. The oxide should dissolve in water to form perbohric acid, HBhO4. Rhenium and technetium form a range of oxyhalides from the halogenation of the oxide. The chlorination of the oxide forms the oxychlorides MO3Cl, so BhO3Cl should be formed in this reaction. Fluorination results in MO3F and MO2F3 for the heavier elements in addition to the rhenium compounds ReOF5 and ReF7. Therefore, oxyfluoride formation for bohrium may help to indicate eka-rhenium properties. Since the oxychlorides are asymmetrical, and they should have increasingly large dipole moments going down the group, they should become less volatile in the order TcO3Cl > ReO3Cl > BhO3Cl: this was experimentally confirmed in 2000 by measuring the enthalpies of adsorption of these three compounds. The values are for TcO3Cl and ReO3Cl are −51 kJ/mol and −61 kJ/mol respectively; the experimental value for BhO3Cl is −77.8 kJ/mol, very close to the theoretically expected value of −78.5 kJ/mol.",
"title": "Predicted properties"
},
{
"paragraph_id": 17,
"text": "Bohrium is expected to be a solid under normal conditions and assume a hexagonal close-packed crystal structure (/a = 1.62), similar to its lighter congener rhenium. Early predictions by Fricke estimated its density at 37.1 g/cm, but newer calculations predict a somewhat lower value of 26–27 g/cm.",
"title": "Predicted properties"
},
{
"paragraph_id": 18,
"text": "The atomic radius of bohrium is expected to be around 128 pm. Due to the relativistic stabilization of the 7s orbital and destabilization of the 6d orbital, the Bh ion is predicted to have an electron configuration of [Rn] 5f 6d 7s, giving up a 6d electron instead of a 7s electron, which is the opposite of the behavior of its lighter homologues manganese and technetium. Rhenium, on the other hand, follows its heavier congener bohrium in giving up a 5d electron before a 6s electron, as relativistic effects have become significant by the sixth period, where they cause among other things the yellow color of gold and the low melting point of mercury. The Bh ion is expected to have an electron configuration of [Rn] 5f 6d 7s; in contrast, the Re ion is expected to have a [Xe] 4f 5d configuration, this time analogous to manganese and technetium. The ionic radius of hexacoordinate heptavalent bohrium is expected to be 58 pm (heptavalent manganese, technetium, and rhenium having values of 46, 57, and 53 pm respectively). Pentavalent bohrium should have a larger ionic radius of 83 pm.",
"title": "Predicted properties"
},
{
"paragraph_id": 19,
"text": "In 1995, the first report on attempted isolation of the element was unsuccessful, prompting new theoretical studies to investigate how best to investigate bohrium (using its lighter homologs technetium and rhenium for comparison) and removing unwanted contaminating elements such as the trivalent actinides, the group 5 elements, and polonium.",
"title": "Experimental chemistry"
},
{
"paragraph_id": 20,
"text": "In 2000, it was confirmed that although relativistic effects are important, bohrium behaves like a typical group 7 element. A team at the Paul Scherrer Institute (PSI) conducted a chemistry reaction using six atoms of Bh produced in the reaction between Bk and Ne ions. The resulting atoms were thermalised and reacted with a HCl/O2 mixture to form a volatile oxychloride. The reaction also produced isotopes of its lighter homologues, technetium (as Tc) and rhenium (as Re). The isothermal adsorption curves were measured and gave strong evidence for the formation of a volatile oxychloride with properties similar to that of rhenium oxychloride. This placed bohrium as a typical member of group 7. The adsorption enthalpies of the oxychlorides of technetium, rhenium, and bohrium were measured in this experiment, agreeing very well with the theoretical predictions and implying a sequence of decreasing oxychloride volatility down group 7 of TcO3Cl > ReO3Cl > BhO3Cl.",
"title": "Experimental chemistry"
},
{
"paragraph_id": 21,
"text": "The longer-lived heavy isotopes of bohrium, produced as the daughters of heavier elements, offer advantages for future radiochemical experiments. Although the heavy isotope Bh requires a rare and highly radioactive berkelium target for its production, the isotopes Bh, Bh, and Bh can be readily produced as daughters of more easily produced moscovium and nihonium isotopes.",
"title": "Experimental chemistry"
}
] | Bohrium is a synthetic chemical element; it has symbol Bh and atomic number 107. It is named after Danish physicist Niels Bohr. As a synthetic element, it can be created in particle accelerators but is not found in nature. All known isotopes of bohrium are highly radioactive; the most stable known isotope is 270Bh with a half-life of approximately 2.4 minutes, though the unconfirmed 278Bh may have a longer half-life of about 11.5 minutes. In the periodic table, it is a d-block transactinide element. It is a member of the 7th period and belongs to the group 7 elements as the fifth member of the 6d series of transition metals. Chemistry experiments have confirmed that bohrium behaves as the heavier homologue to rhenium in group 7. The chemical properties of bohrium are characterized only partly, but they compare well with the chemistry of the other group 7 elements. | 2001-09-15T13:32:48Z | 2023-12-19T15:20:14Z | [
"Template:Good article",
"Template:Reflist",
"Template:RedBook2005",
"Template:Cite web",
"Template:Doi",
"Template:Distinguish",
"Template:See also",
"Template:Excerpt",
"Template:Main",
"Template:Isotopes summary",
"Template:Periodic table (navbox)",
"Template:Authority control",
"Template:Cite book",
"Template:Nuclide",
"Template:SubatomicParticle",
"Template:Clear",
"Template:Chem",
"Template:Notelist",
"Template:Infobox bohrium",
"Template:Fricke1975",
"Template:Cite journal",
"Template:Commons category-inline"
] | https://en.wikipedia.org/wiki/Bohrium |
4,195 | Barbara Olson | Barbara Kay Olson (née Bracher; December 27, 1955 – September 11, 2001) was an American lawyer and conservative television commentator who worked for CNN, Fox News Channel, and several other outlets. She was a passenger on American Airlines Flight 77 en route to a taping of Bill Maher's television show Politically Incorrect when it was flown into the Pentagon in the September 11 attacks.
Olson was born Barbara Kay Bracher in Houston, Texas, on December 27, 1955. Her older sister, Toni Bracher-Lawrence, was a member of the Houston City Council from 2004 to 2010. She graduated from Waltrip High School.
She married Theodore Olson in 1996, becoming his third wife.
Olson was a frequent critic of the Bill Clinton administration and wrote a book about then First Lady Hillary Clinton, Hell to Pay: The Unfolding Story of Hillary Rodham Clinton (1999). Olson's second book, The Final Days: The Last, Desperate Abuses of Power by the Clinton White House, was published posthumously.
Olson was a passenger on American Airlines Flight 77, on her way to a taping of Politically Incorrect in Los Angeles, when it was flown into the Pentagon in the September 11 attacks.
Her original plan had been to fly to California on September 10, but she waited until the next day so that she could wake up with her husband on his birthday, September 11. At the National September 11 Memorial, Olson's name is located on Panel S-70 of the South Pool, along with those of other passengers of Flight 77.
Three months after the attacks, Olson's remains were identified. She was buried at her family's retreat in Wisconsin. | [
{
"paragraph_id": 0,
"text": "Barbara Kay Olson (née Bracher; December 27, 1955 – September 11, 2001) was an American lawyer and conservative television commentator who worked for CNN, Fox News Channel, and several other outlets. She was a passenger on American Airlines Flight 77 en route to a taping of Bill Maher's television show Politically Incorrect when it was flown into the Pentagon in the September 11 attacks.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Olson was born Barbara Kay Bracher in Houston, Texas, on December 27, 1955. Her older sister, Toni Bracher-Lawrence, was a member of the Houston City Council from 2004 to 2010. She graduated from Waltrip High School.",
"title": "Early life"
},
{
"paragraph_id": 2,
"text": "She married Theodore Olson in 1996, becoming his third wife.",
"title": "Personal life"
},
{
"paragraph_id": 3,
"text": "Olson was a frequent critic of the Bill Clinton administration and wrote a book about then First Lady Hillary Clinton, Hell to Pay: The Unfolding Story of Hillary Rodham Clinton (1999). Olson's second book, The Final Days: The Last, Desperate Abuses of Power by the Clinton White House was published posthumously.",
"title": "Personal life"
},
{
"paragraph_id": 4,
"text": "Olson was a passenger on American Airlines Flight 77, on her way to a taping of Politically Incorrect in Los Angeles, when it was flown into the Pentagon in the September 11 attacks.",
"title": "Death and legacy"
},
{
"paragraph_id": 5,
"text": "Her original plan had been to fly to California on September 10, but she waited until the next day so that she could wake up with her husband on his birthday, September 11. At the National September 11 Memorial, Olson's name is located on Panel S-70 of the South Pool, along with those of other passengers of Flight 77.",
"title": "Death and legacy"
},
{
"paragraph_id": 6,
"text": "Three months after the attacks, Olson's remains were identified. She was buried at her family's retreat in Wisconsin.",
"title": "Death and legacy"
}
] | Barbara Kay Olson was an American lawyer and conservative television commentator who worked for CNN, Fox News Channel, and several other outlets. She was a passenger on American Airlines Flight 77 en route to a taping of Bill Maher's television show Politically Incorrect when it was flown into the Pentagon in the September 11 attacks. | 2001-09-15T17:52:38Z | 2023-12-22T00:57:37Z | [
"Template:Use mdy dates",
"Template:Reflist",
"Template:Commons category",
"Template:Cite news",
"Template:Webarchive",
"Template:Find a Grave",
"Template:Short description",
"Template:Infobox person",
"Template:Spaced ndash",
"Template:Cite book",
"Template:C-SPAN",
"Template:Dead link",
"Template:Authority control",
"Template:More citations needed",
"Template:Cite web",
"Template:Cbignore",
"Template:IMDb name"
] | https://en.wikipedia.org/wiki/Barbara_Olson |
4,196 | Barnard's Star | Barnard's Star is a small red dwarf star in the constellation of Ophiuchus. At a distance of 5.96 light-years (1.83 pc) from Earth, it is the fourth-nearest-known individual star to the Sun after the three components of the Alpha Centauri system, and the closest star in the northern celestial hemisphere. Its stellar mass is about 16% of the Sun's, and it has 19% of the Sun's diameter. Despite its proximity, the star has a dim apparent visual magnitude of +9.5 and is invisible to the unaided eye; it is much brighter in the infrared than in visible light.
The star is named after E. E. Barnard, an American astronomer who in 1916 measured its proper motion as 10.3 arcseconds per year relative to the Sun, the highest known for any star. The star had previously appeared on Harvard University photographic plates in 1888 and 1890.
Barnard's Star is among the most studied red dwarfs because of its proximity and favorable location for observation near the celestial equator. Historically, research on Barnard's Star has focused on measuring its stellar characteristics, its astrometry, and also refining the limits of possible extrasolar planets. Although Barnard's Star is ancient, it still experiences stellar flare events, one being observed in 1998.
Barnard's Star has been subject to multiple claims of planets that were later disproven. From the early 1960s to the early 1970s, Peter van de Kamp argued that planets orbited Barnard's Star. His specific claims of large gas giants were refuted in the mid-1970s after much debate. In November 2018, a candidate super-Earth planetary companion known as Barnard's Star b was reported to orbit Barnard's Star. It was believed to have a minimum mass of 3.2 MEarth and orbit at 0.4 AU. However, work presented in July 2021 refuted the existence of this planet.
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Barnard's Star for this star on 1 February 2017 and it is now included in the List of IAU-approved Star Names.
Barnard's Star is a red dwarf of the dim spectral type M4, and it is too faint to see without a telescope; its apparent magnitude is 9.5.
At 7–12 billion years of age, Barnard's Star is considerably older than the Sun, which is 4.5 billion years old, and it might be among the oldest stars in the Milky Way galaxy. Barnard's Star has lost a great deal of rotational energy, and the periodic slight changes in its brightness indicate that it rotates once in 130 days (the Sun rotates in 25). Given its age, Barnard's Star was long assumed to be quiescent in terms of stellar activity. In 1998, astronomers observed an intense stellar flare, showing that Barnard's Star is a flare star. Barnard's Star has the variable star designation V2500 Ophiuchi. In 2003, Barnard's Star presented the first detectable change in the radial velocity of a star caused by its motion. Further variability in the radial velocity of Barnard's Star was attributed to its stellar activity.
The proper motion of Barnard's Star corresponds to a relative lateral speed of 90 km/s. The 10.3 arcseconds it travels in a year amount to a quarter of a degree in a human lifetime, roughly half the angular diameter of the full Moon.
The radial velocity of Barnard's Star is −110 km/s, as measured from the blueshift due to its motion toward the Sun. Combined with its proper motion and distance, this gives a "space velocity" (actual speed relative to the Sun) of 142.6±0.2 km/s. Barnard's Star will make its closest approach to the Sun around 11,800 CE, when it will approach to within about 3.75 light-years.
Proxima Centauri is the closest star to the Sun at a position currently 4.24 light-years distant from it. However, despite Barnard's Star's even closer pass to the Sun in 11,800 CE, it will still not then be the nearest star, since by that time Proxima Centauri will have moved to a yet-nearer proximity to the Sun. At the time of the star's closest pass by the Sun, Barnard's Star will still be too dim to be seen with the naked eye, since its apparent magnitude will only have increased by one magnitude to about 8.5 by then, still being 2.5 magnitudes short of visibility to the naked eye.
Barnard's Star has a mass of about 0.16 solar masses (M☉), and a radius about 0.2 times that of the Sun. Thus, although Barnard's Star has roughly 150 times the mass of Jupiter (MJ), its radius is only roughly 2 times larger, due to its much higher density. Its effective temperature is about 3,220 kelvin, and it has a luminosity of only 0.0034 solar luminosities. Barnard's Star is so faint that if it were at the same distance from Earth as the Sun is, it would appear only 100 times brighter than a full moon, comparable to the brightness of the Sun at 80 astronomical units.
Barnard's Star has 10–32% of the solar metallicity. Metallicity is the proportion of stellar mass made up of elements heavier than helium and helps classify stars relative to the galactic population. Barnard's Star seems to be typical of the old, red dwarf population II stars, yet these are also generally metal-poor halo stars. While sub-solar, Barnard's Star's metallicity is higher than that of a halo star and is in keeping with the low end of the metal-rich disk star range; this, plus its high space motion, has led to the designation "intermediate population II star", between a halo and disk star. However, some recently published scientific papers have given much higher estimates for the metallicity of the star, very close to the Sun's level, between 75 and 125% of the solar metallicity.
For a decade from 1963 to about 1973, a substantial number of astronomers accepted a claim by Peter van de Kamp that he had detected, by using astrometry, a perturbation in the proper motion of Barnard's Star consistent with its having one or more planets comparable in mass with Jupiter. Van de Kamp had been observing the star from 1938, attempting, with colleagues at the Sproul Observatory at Swarthmore College, to find minuscule variations of one micrometre in its position on photographic plates consistent with orbital perturbations that would indicate a planetary companion; this involved as many as ten people averaging their results in looking at plates, to avoid systematic individual errors. Van de Kamp's initial suggestion was a planet having about 1.6 MJ at a distance of 4.4 AU in a slightly eccentric orbit, and these measurements were apparently refined in a 1969 paper. Later that year, Van de Kamp suggested that there were two planets of 1.1 and 0.8 MJ.
Other astronomers subsequently repeated Van de Kamp's measurements, and two papers in 1973 undermined the claim of a planet or planets. George Gatewood and Heinrich Eichhorn, at a different observatory and using newer plate measuring techniques, failed to verify the planetary companion. Another paper published by John L. Hershey four months earlier, also using the Swarthmore observatory, found that changes in the astrometric field of various stars correlated to the timing of adjustments and modifications that had been carried out on the refractor telescope's objective lens; the claimed planet was attributed to an artifact of maintenance and upgrade work. The affair has been discussed as part of a broader scientific review.
Van de Kamp never acknowledged any error and published a further claim of two planets' existence as late as 1982; he died in 1995. Wulff Heintz, Van de Kamp's successor at Swarthmore and an expert on double stars, questioned his findings and began publishing criticisms from 1976 onwards. The two men were reported to have become estranged because of this.
In November 2018, an international team of astronomers announced the detection by radial velocity of a candidate super-Earth orbiting in relatively close proximity to Barnard's Star. Led by Ignasi Ribas of Spain, their work, conducted over two decades of observation, provided strong evidence of the planet's existence. However, the existence of the planet was refuted in 2021, because the radial velocity signal was found to originate from a stellar activity cycle, and a study in 2022 confirmed this result.
Dubbed Barnard's Star b, the planet was thought to be near the stellar system's snow line, which is an ideal spot for the icy accretion of proto-planetary material. It was thought to orbit at 0.4 AU every 233 days and had a proposed minimum mass of 3.2 MEarth. The planet would have most likely been frigid, with an estimated surface temperature of about −170 °C (−274 °F), and lie outside Barnard's Star's presumed habitable zone. Direct imaging of the planet and its tell-tale light signature would have been possible in the decade after its discovery. Further faint and unaccounted-for perturbations in the system suggested there may be a second planetary companion even farther out.
For the more than four decades between van de Kamp's rejected claim and the eventual announcement of a planet candidate, Barnard's Star was carefully studied and the mass and orbital boundaries for possible planets were slowly tightened. M dwarfs such as Barnard's Star are more easily studied than larger stars in this regard because their lower masses render perturbations more obvious.
Null results for planetary companions continued throughout the 1980s and 1990s, including interferometric work with the Hubble Space Telescope in 1999. Gatewood was able to show in 1995 that planets with 10 MJ were impossible around Barnard's Star, in a paper which helped refine the negative certainty regarding planetary objects in general. In 1999, the Hubble work further excluded planetary companions of 0.8 MJ with an orbital period of less than 1,000 days (Jupiter's orbital period is 4,332 days), while Kuerster determined in 2003 that within the habitable zone around Barnard's Star, planets are not possible with an "M sin i" value greater than 7.5 times the mass of the Earth (MEarth), or with a mass greater than 3.1 times the mass of Neptune (much lower than van de Kamp's smallest suggested value).
In 2013, a research paper was published that further refined planet mass boundaries for the star. Using radial velocity measurements, taken over a period of 25 years, from the Lick and Keck Observatories and applying Monte Carlo analysis for both circular and eccentric orbits, upper masses for planets out to 1,000-day orbits were determined. Planets above two Earth masses in orbits of less than 10 days were excluded, and planets of more than ten Earth masses out to a two-year orbit were also confidently ruled out. It was also discovered that the habitable zone of the star seemed to be devoid of roughly Earth-mass planets or larger, save for face-on orbits.
Even though this research greatly restricted the possible properties of planets around Barnard's Star, it did not rule them out completely, as terrestrial planets were always going to be difficult to detect. NASA's Space Interferometry Mission, which was to begin searching for extrasolar Earth-like planets, was reported to have chosen Barnard's Star as an early search target; however, the mission was shut down in 2010. ESA's similar Darwin interferometry mission had the same goal, but was stripped of funding in 2007.
The analysis of radial velocities that eventually led to discovery of the candidate super-Earth orbiting Barnard's Star was also used to set more precise upper mass limits for possible planets, up to and within the habitable zone: a maximum of 0.7 MEarth up to the inner edge and 1.2 MEarth on the outer edge of the optimistic habitable zone, corresponding to orbital periods of up to 10 and 40 days respectively. Therefore, it appears that Barnard's Star indeed does not host Earth-mass planets or larger, in hot and temperate orbits, unlike other M-dwarf stars that commonly have these types of planets in close-in orbits.
In 1998 a stellar flare on Barnard's Star was detected based on changes in the spectral emissions on 17 July during an unrelated search for variations in the proper motion. Four years passed before the flare was fully analyzed, at which point it was suggested that the flare's temperature was 8,000 K, more than twice the normal temperature of the star. Given the essentially random nature of flares, Diane Paulson, one of the authors of that study, noted that "the star would be fantastic for amateurs to observe".
The flare was surprising because intense stellar activity is not expected in stars of such age. Flares are not completely understood, but are believed to be caused by strong magnetic fields, which suppress plasma convection and lead to sudden outbursts: strong magnetic fields occur in rapidly rotating stars, while old stars tend to rotate slowly. For Barnard's Star to undergo an event of such magnitude is thus presumed to be a rarity. Research on the star's periodicity, or changes in stellar activity over a given timescale, also suggest it ought to be quiescent; 1998 research showed weak evidence for periodic variation in the star's brightness, noting only one possible starspot over 130 days.
Stellar activity of this sort has created interest in using Barnard's Star as a proxy to understand similar stars. It is hoped that photometric studies of its X-ray and UV emissions will shed light on the large population of old M dwarfs in the galaxy. Such research has astrobiological implications: given that the habitable zones of M dwarfs are close to the star, any planet located therein would be strongly affected by stellar flares, stellar winds, and plasma ejection events.
In 2019, two additional ultraviolet stellar flares were detected, each with far-ultraviolet energy of 3×10 joules, together with one X-ray stellar flare with energy 1.6×10 joules. The flare rate observed to date is enough to cause loss of 87 Earth atmospheres per billion years through thermal processes and ≈3 Earth atmospheres per billion years through ion loss processes on Barnard's Star b.
Barnard's Star shares much the same neighborhood as the Sun. The neighbors of Barnard's Star are generally of red dwarf size, the smallest and most common star type. Its closest neighbor is currently the red dwarf Ross 154, at a distance of 1.66 parsecs (5.41 light-years). The Sun and Alpha Centauri are, respectively, the next closest systems. From Barnard's Star, the Sun would appear on the diametrically opposite side of the sky at coordinates RA=5h 57m 48.5s, Dec=−04° 41′ 36″, in the westernmost part of the constellation Monoceros. The absolute magnitude of the Sun is 4.83, and at a distance of 1.834 parsecs, it would be a first-magnitude star, as Pollux is from the Earth.
Barnard's Star was studied as part of Project Daedalus. Undertaken between 1973 and 1978, the study suggested that rapid, uncrewed travel to another star system was possible with existing or near-future technology. Barnard's Star was chosen as a target partly because it was believed to have planets.
The theoretical model suggested that a nuclear pulse rocket employing nuclear fusion (specifically, electron bombardment of deuterium and helium-3) and accelerating for four years could achieve a velocity of 12% of the speed of light. The star could then be reached in 50 years, within a human lifetime. Along with detailed investigation of the star and any companions, the interstellar medium would be examined and baseline astrometric readings performed.
The initial Project Daedalus model sparked further theoretical research. In 1980, Robert Freitas suggested a more ambitious plan: a self-replicating spacecraft intended to search for and make contact with extraterrestrial life. Built and launched in Jupiter's orbit, it would reach Barnard's Star in 47 years under parameters similar to those of the original Project Daedalus. Once at the star, it would begin automated self-replication, constructing a factory, initially to manufacture exploratory probes and eventually to create a copy of the original spacecraft after 1,000 years. | [
{
"paragraph_id": 0,
"text": "Barnard's Star is a small red dwarf star in the constellation of Ophiuchus. At a distance of 5.96 light-years (1.83 pc) from Earth, it is the fourth-nearest-known individual star to the Sun after the three components of the Alpha Centauri system, and the closest star in the northern celestial hemisphere. Its stellar mass is about 16% of the Sun's, and it has 19% of the Sun's diameter. Despite its proximity, the star has a dim apparent visual magnitude of +9.5 and is invisible to the unaided eye; it is much brighter in the infrared than in visible light.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The star is named after E. E. Barnard, an American astronomer who in 1916 measured its proper motion as 10.3 arcseconds per year relative to the Sun, the highest known for any star. The star had previously appeared on Harvard University photographic plates in 1888 and 1890.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Barnard's Star is among the most studied red dwarfs because of its proximity and favorable location for observation near the celestial equator. Historically, research on Barnard's Star has focused on measuring its stellar characteristics, its astrometry, and also refining the limits of possible extrasolar planets. Although Barnard's Star is ancient, it still experiences stellar flare events, one being observed in 1998.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Barnard's Star has been subject to multiple claims of planets that were later disproven. From the early 1960s to the early 1970s, Peter van de Kamp argued that planets orbited Barnard's Star. His specific claims of large gas giants were refuted in the mid-1970s after much debate. In November 2018, a candidate super-Earth planetary companion known as Barnard's Star b was reported to orbit Barnard's Star. It was believed to have a minimum mass of 3.2 MEarth and orbit at 0.4 AU. However, work presented in July 2021 refuted the existence of this planet.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Barnard's Star for this star on 1 February 2017 and it is now included in the List of IAU-approved Star Names.",
"title": "Naming"
},
{
"paragraph_id": 5,
"text": "Barnard's Star is a red dwarf of the dim spectral type M4, and it is too faint to see without a telescope; Its apparent magnitude is 9.5.",
"title": "Description"
},
{
"paragraph_id": 6,
"text": "At 7–12 billion years of age, Barnard's Star is considerably older than the Sun, which is 4.5 billion years old, and it might be among the oldest stars in the Milky Way galaxy. Barnard's Star has lost a great deal of rotational energy, and the periodic slight changes in its brightness indicate that it rotates once in 130 days (the Sun rotates in 25). Given its age, Barnard's Star was long assumed to be quiescent in terms of stellar activity. In 1998, astronomers observed an intense stellar flare, showing that Barnard's Star is a flare star. Barnard's Star has the variable star designation V2500 Ophiuchi. In 2003, Barnard's Star presented the first detectable change in the radial velocity of a star caused by its motion. Further variability in the radial velocity of Barnard's Star was attributed to its stellar activity.",
"title": "Description"
},
{
"paragraph_id": 7,
"text": "The proper motion of Barnard's Star corresponds to a relative lateral speed of 90 km/s. The 10.3 arcseconds it travels in a year amount to a quarter of a degree in a human lifetime, roughly half the angular diameter of the full Moon.",
"title": "Description"
},
{
"paragraph_id": 8,
"text": "The radial velocity of Barnard's Star is −110 km/s, as measured from the blueshift due to its motion toward the Sun. Combined with its proper motion and distance, this gives a \"space velocity\" (actual speed relative to the Sun) of 142.6±0.2 km/s. Barnard's Star will make its closest approach to the Sun around 11,800 CE, when it will approach to within about 3.75 light-years.",
"title": "Description"
},
{
"paragraph_id": 9,
"text": "Proxima Centauri is the closest star to the Sun at a position currently 4.24 light-years distant from it. However, despite Barnard's Star's even closer pass to the Sun in 11,800 CE, it will still not then be the nearest star, since by that time Proxima Centauri will have moved to a yet-nearer proximity to the Sun. At the time of the star's closest pass by the Sun, Barnard's Star will still be too dim to be seen with the naked eye, since its apparent magnitude will only have increased by one magnitude to about 8.5 by then, still being 2.5 magnitudes short of visibility to the naked eye.",
"title": "Description"
},
{
"paragraph_id": 10,
"text": "Barnard's Star has a mass of about 0.16 solar masses (M☉), and a radius about 0.2 times that of the Sun. Thus, although Barnard's Star has roughly 150 times the mass of Jupiter (MJ), its radius is only roughly 2 times larger, due to its much higher density. Its effective temperature is about 3,220 kelvin, and it has a luminosity of only 0.0034 solar luminosities. Barnard's Star is so faint that if it were at the same distance from Earth as the Sun is, it would appear only 100 times brighter than a full moon, comparable to the brightness of the Sun at 80 astronomical units.",
"title": "Description"
},
{
"paragraph_id": 11,
"text": "Barnard's Star has 10–32% of the solar metallicity. Metallicity is the proportion of stellar mass made up of elements heavier than helium and helps classify stars relative to the galactic population. Barnard's Star seems to be typical of the old, red dwarf population II stars, yet these are also generally metal-poor halo stars. While sub-solar, Barnard's Star's metallicity is higher than that of a halo star and is in keeping with the low end of the metal-rich disk star range; this, plus its high space motion, have led to the designation \"intermediate population II star\", between a halo and disk star. Although some recently published scientific papers have given much higher estimates for the metallicity of the star, very close to the Sun's level, between 75 and 125% of the solar metallicity.",
"title": "Description"
},
{
"paragraph_id": 12,
"text": "For a decade from 1963 to about 1973, a substantial number of astronomers accepted a claim by Peter van de Kamp that he had detected, by using astrometry, a perturbation in the proper motion of Barnard's Star consistent with its having one or more planets comparable in mass with Jupiter. Van de Kamp had been observing the star from 1938, attempting, with colleagues at the Sproul Observatory at Swarthmore College, to find minuscule variations of one micrometre in its position on photographic plates consistent with orbital perturbations that would indicate a planetary companion; this involved as many as ten people averaging their results in looking at plates, to avoid systemic individual errors. Van de Kamp's initial suggestion was a planet having about 1.6 MJ at a distance of 4.4 AU in a slightly eccentric orbit, and these measurements were apparently refined in a 1969 paper. Later that year, Van de Kamp suggested that there were two planets of 1.1 and 0.8 MJ.",
"title": "Search for planets"
},
{
"paragraph_id": 13,
"text": "Other astronomers subsequently repeated Van de Kamp's measurements, and two papers in 1973 undermined the claim of a planet or planets. George Gatewood and Heinrich Eichhorn, at a different observatory and using newer plate measuring techniques, failed to verify the planetary companion. Another paper published by John L. Hershey four months earlier, also using the Swarthmore observatory, found that changes in the astrometric field of various stars correlated to the timing of adjustments and modifications that had been carried out on the refractor telescope's objective lens; the claimed planet was attributed to an artifact of maintenance and upgrade work. The affair has been discussed as part of a broader scientific review.",
"title": "Search for planets"
},
{
"paragraph_id": 14,
"text": "Van de Kamp never acknowledged any error and published a further claim of two planets' existence as late as 1982; he died in 1995. Wulff Heintz, Van de Kamp's successor at Swarthmore and an expert on double stars, questioned his findings and began publishing criticisms from 1976 onwards. The two men were reported to have become estranged because of this.",
"title": "Search for planets"
},
{
"paragraph_id": 15,
"text": "In November 2018, an international team of astronomers announced the detection by radial velocity of a candidate super-Earth orbiting in relatively close proximity to Barnard's Star. Led by Ignasi Ribas of Spain their work, conducted over two decades of observation, provided strong evidence of the planet's existence. However, the existence of the planet was refuted in 2021, because the radial velocity signal was found to originate from a stellar activity cycle, and a study in 2022 confirmed this result.",
"title": "Search for planets"
},
{
"paragraph_id": 16,
"text": "Dubbed Barnard's Star b, the planet was thought to be near the stellar system's snow line, which is an ideal spot for the icy accretion of proto-planetary material. It was thought to orbit at 0.4 AU every 233 days and had a proposed minimum mass of 3.2 MEarth. The planet would have most likely been frigid, with an estimated surface temperature of about −170 °C (−274 °F), and lie outside Barnard Star's presumed habitable zone. Direct imaging of the planet and its tell-tale light signature would have been possible in the decade after its discovery. Further faint and unaccounted-for perturbations in the system suggested there may be a second planetary companion even farther out.",
"title": "Search for planets"
},
{
"paragraph_id": 17,
"text": "For the more than four decades between van de Kamp's rejected claim and the eventual announcement of a planet candidate, Barnard's Star was carefully studied and the mass and orbital boundaries for possible planets were slowly tightened. M dwarfs such as Barnard's Star are more easily studied than larger stars in this regard because their lower masses render perturbations more obvious.",
"title": "Search for planets"
},
{
"paragraph_id": 18,
"text": "Null results for planetary companions continued throughout the 1980s and 1990s, including interferometric work with the Hubble Space Telescope in 1999. Gatewood was able to show in 1995 that planets with 10 MJ were impossible around Barnard's Star, in a paper which helped refine the negative certainty regarding planetary objects in general. In 1999, the Hubble work further excluded planetary companions of 0.8 MJ with an orbital period of less than 1,000 days (Jupiter's orbital period is 4,332 days), while Kuerster determined in 2003 that within the habitable zone around Barnard's Star, planets are not possible with an \"M sin i\" value greater than 7.5 times the mass of the Earth (MEarth), or with a mass greater than 3.1 times the mass of Neptune (much lower than van de Kamp's smallest suggested value).",
"title": "Search for planets"
},
{
"paragraph_id": 19,
"text": "In 2013, a research paper was published that further refined planet mass boundaries for the star. Using radial velocity measurements, taken over a period of 25 years, from the Lick and Keck Observatories and applying Monte Carlo analysis for both circular and eccentric orbits, upper masses for planets out to 1,000-day orbits were determined. Planets above two Earth masses in orbits of less than 10 days were excluded, and planets of more than ten Earth masses out to a two-year orbit were also confidently ruled out. It was also discovered that the habitable zone of the star seemed to be devoid of roughly Earth-mass planets or larger, save for face-on orbits.",
"title": "Search for planets"
},
{
"paragraph_id": 20,
"text": "Even though this research greatly restricted the possible properties of planets around Barnard's Star, it did not rule them out completely as terrestrial planets were always going to be difficult to detect. NASA's Space Interferometry Mission, which was to begin searching for extrasolar Earth-like planets, was reported to have chosen Barnard's Star as an early search target, however the mission was shut down in 2010. ESA's similar Darwin interferometry mission had the same goal, but was stripped of funding in 2007.",
"title": "Search for planets"
},
{
"paragraph_id": 21,
"text": "The analysis of radial velocities that eventually led to discovery of the candidate super-Earth orbiting Barnard's Star was also used to set more precise upper mass limits for possible planets, up to and within the habitable zone: a maximum of 0.7 MEarth up to the inner edge and 1.2 MEarth on the outer edge of the optimistic habitable zone, corresponding to orbital periods of up to 10 and 40 days respectively. Therefore, it appears that Barnard's Star indeed does not host Earth-mass planets or larger, in hot and temperate orbits, unlike other M-dwarf stars that commonly have these types of planets in close-in orbits.",
"title": "Search for planets"
},
{
"paragraph_id": 22,
"text": "In 1998 a stellar flare on Barnard's Star was detected based on changes in the spectral emissions on 17 July during an unrelated search for variations in the proper motion. Four years passed before the flare was fully analyzed, at which point it was suggested that the flare's temperature was 8,000 K, more than twice the normal temperature of the star. Given the essentially random nature of flares, Diane Paulson, one of the authors of that study, noted that \"the star would be fantastic for amateurs to observe\".",
"title": "Stellar flares"
},
{
"paragraph_id": 23,
"text": "The flare was surprising because intense stellar activity is not expected in stars of such age. Flares are not completely understood, but are believed to be caused by strong magnetic fields, which suppress plasma convection and lead to sudden outbursts: strong magnetic fields occur in rapidly rotating stars, while old stars tend to rotate slowly. For Barnard's Star to undergo an event of such magnitude is thus presumed to be a rarity. Research on the star's periodicity, or changes in stellar activity over a given timescale, also suggest it ought to be quiescent; 1998 research showed weak evidence for periodic variation in the star's brightness, noting only one possible starspot over 130 days.",
"title": "Stellar flares"
},
{
"paragraph_id": 24,
"text": "Stellar activity of this sort has created interest in using Barnard's Star as a proxy to understand similar stars. It is hoped that photometric studies of its X-ray and UV emissions will shed light on the large population of old M dwarfs in the galaxy. Such research has astrobiological implications: given that the habitable zones of M dwarfs are close to the star, any planet located therein would be strongly affected by solar flares, stellar winds, and plasma ejection events.",
"title": "Stellar flares"
},
{
"paragraph_id": 25,
"text": "In 2019, two additional ultraviolet stellar flares were detected, each with far-ultraviolet energy of 3×10 joules, together with one X-ray stellar flare with energy 1.6×10 joules. The flare rate observed to date is enough to cause loss of 87 Earth atmospheres per billion years through thermal processes and ≈3 Earth atmospheres per billion years through ion loss processes on Barnard's Star b.",
"title": "Stellar flares"
},
{
"paragraph_id": 26,
"text": "Barnard's Star shares much the same neighborhood as the Sun. The neighbors of Barnard's Star are generally of red dwarf size, the smallest and most common star type. Its closest neighbor is currently the red dwarf Ross 154, at a distance of 1.66 parsecs (5.41 light-years). The Sun and Alpha Centauri are, respectively, the next closest systems. From Barnard's Star, the Sun would appear on the diametrically opposite side of the sky at coordinates RA=5 57 48.5, Dec=−04° 41′ 36″, in the westernmost part of the constellation Monoceros. The absolute magnitude of the Sun is 4.83, and at a distance of 1.834 parsecs, it would be a first-magnitude star, as Pollux is from the Earth.",
"title": "Environment"
},
{
"paragraph_id": 27,
"text": "Barnard's Star was studied as part of Project Daedalus. Undertaken between 1973 and 1978, the study suggested that rapid, uncrewed travel to another star system was possible with existing or near-future technology. Barnard's Star was chosen as a target partly because it was believed to have planets.",
"title": "Proposed exploration"
},
{
"paragraph_id": 28,
"text": "The theoretical model suggested that a nuclear pulse rocket employing nuclear fusion (specifically, electron bombardment of deuterium and helium-3) and accelerating for four years could achieve a velocity of 12% of the speed of light. The star could then be reached in 50 years, within a human lifetime. Along with detailed investigation of the star and any companions, the interstellar medium would be examined and baseline astrometric readings performed.",
"title": "Proposed exploration"
},
{
"paragraph_id": 29,
"text": "The initial Project Daedalus model sparked further theoretical research. In 1980, Robert Freitas suggested a more ambitious plan: a self-replicating spacecraft intended to search for and make contact with extraterrestrial life. Built and launched in Jupiter's orbit, it would reach Barnard's Star in 47 years under parameters similar to those of the original Project Daedalus. Once at the star, it would begin automated self-replication, constructing a factory, initially to manufacture exploratory probes and eventually to create a copy of the original spacecraft after 1,000 years.",
"title": "Proposed exploration"
}
] | Barnard's Star is a small red dwarf star in the constellation of Ophiuchus. At a distance of 5.96 light-years (1.83 pc) from Earth, it is the fourth-nearest-known individual star to the Sun after the three components of the Alpha Centauri system, and the closest star in the northern celestial hemisphere. Its stellar mass is about 16% of the Sun's, and it has 19% of the Sun's diameter. Despite its proximity, the star has a dim apparent visual magnitude of +9.5 and is invisible to the unaided eye; it is much brighter in the infrared than in visible light. The star is named after E. E. Barnard, an American astronomer who in 1916 measured its proper motion as 10.3 arcseconds per year relative to the Sun, the highest known for any star. The star had previously appeared on Harvard University photographic plates in 1888 and 1890. Barnard's Star is among the most studied red dwarfs because of its proximity and favorable location for observation near the celestial equator. Historically, research on Barnard's Star has focused on measuring its stellar characteristics, its astrometry, and also refining the limits of possible extrasolar planets. Although Barnard's Star is ancient, it still experiences stellar flare events, one being observed in 1998. Barnard's Star has been subject to multiple claims of planets that were later disproven. From the early 1960s to the early 1970s, Peter van de Kamp argued that planets orbited Barnard's Star. His specific claims of large gas giants were refuted in the mid-1970s after much debate. In November 2018, a candidate super-Earth planetary companion known as Barnard's Star b was reported to orbit Barnard's Star. It was believed to have a minimum mass of 3.2 MEarth and orbit at 0.4 AU. However, work presented in July 2021 refuted the existence of this planet. | 2001-09-15T22:23:04Z | 2023-12-02T02:00:57Z | [
"Template:Use dmy dates",
"Template:Earth mass",
"Template:Nbsp",
"Template:Starbox begin",
"Template:Starbox observe",
"Template:Cite web",
"Template:Portal bar",
"Template:Starbox character",
"Template:Starbox catalog",
"Template:Starbox reference",
"Template:Starbox astrometry",
"Template:Solar mass",
"Template:Jupiter mass",
"Template:Main",
"Template:Reflist",
"Template:Cite journal",
"Template:Stars of Ophiuchus",
"Template:Starbox end",
"Template:RA",
"Template:Cite news",
"Template:Starbox image",
"Template:Annotated link",
"Template:Featured article",
"Template:Convert",
"Template:DEC",
"Template:Nearest systems",
"Template:Authority control",
"Template:Short description",
"Template:Starbox detail",
"Template:Val",
"Template:Cite encyclopedia",
"Template:Commons category",
"Template:Sky"
] | https://en.wikipedia.org/wiki/Barnard%27s_Star |
4,199 | Bayer designation | A Bayer designation is a stellar designation in which a specific star is identified by a Greek or Latin letter followed by the genitive form of its parent constellation's Latin name. The original list of Bayer designations contained 1,564 stars. The brighter stars were assigned their first systematic names by the German astronomer Johann Bayer in 1603, in his star atlas Uranometria. Bayer catalogued only a few stars too far south to be seen from Germany, but later astronomers (including Nicolas-Louis de Lacaille and Benjamin Apthorp Gould) supplemented Bayer's catalog with entries for southern constellations.
Bayer assigned a lowercase Greek letter (alpha (α), beta (β), gamma (γ), etc.) or a Latin letter (A, b, c, etc.) to each star he catalogued, combined with the Latin name of the star's parent constellation in genitive (possessive) form. The constellation name is frequently abbreviated to a standard three-letter form. For example, Aldebaran in the constellation Taurus (the Bull) is designated α Tauri (abbreviated α Tau, pronounced Alpha Tauri), which means "Alpha of the Bull".
Bayer used Greek letters for the brighter stars, but the Greek alphabet has only twenty-four letters, while a single constellation may contain fifty or more stars visible to the naked eye. When the Greek letters ran out, Bayer continued with Latin letters: uppercase A, followed by lowercase b through z (omitting j and v, but o was included), for a total of another 24 letters.
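The resulting 48-letter sequence can be written out explicitly. A small sketch (the lists are reconstructed from the description above, not taken from Bayer's charts themselves):

    GREEK = ["α", "β", "γ", "δ", "ε", "ζ", "η", "θ", "ι", "κ", "λ", "μ",
             "ν", "ξ", "ο", "π", "ρ", "σ", "τ", "υ", "φ", "χ", "ψ", "ω"]
    # After omega, Bayer continued with uppercase A, then lowercase b..z,
    # omitting j and v but keeping o
    LATIN = ["A"] + list("bcdefghiklmnopqrstuwxyz")
    assert len(GREEK) == 24 and len(LATIN) == 24  # 48 letters in all

    def bayer_sequence(genitive):
        # e.g. bayer_sequence("Tauri")[0] -> "α Tauri"
        return [f"{letter} {genitive}" for letter in GREEK + LATIN]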
Bayer did not label "permanent" stars with uppercase letters (except for A, which he used instead of a to avoid confusion with α). However, a number of stars in southern constellations have uppercase letter designations, like B Centauri and G Scorpii. These letters were assigned by later astronomers, notably Lacaille in his Coelum Australe Stelliferum and Gould in his Uranometria Argentina. Lacaille followed Bayer's use of Greek letters, but this was insufficient for many constellations. He used first the lowercase letters, starting with a, and if needed the uppercase letters, starting with A, thus deviating somewhat from Bayer's practice. Lacaille used the Latin alphabet three times over in the large constellation Argo Navis, once for each of the three areas that are now the constellations of Carina, Puppis and Vela. That was still insufficient for the number of stars, so he also used uppercase Latin letters such as N Velorum and Q Puppis. Lacaille assigned uppercase letters between R and Z in several constellations, but these have either been dropped to allow the assignment of those letters to variable stars or have actually turned out to be variable.
In most constellations, Bayer assigned Greek and Latin letters to stars within a constellation in rough order of apparent brightness, from brightest to dimmest. The order is not necessarily a precise labeling from brightest to dimmest: in Bayer's day stellar brightness could not be measured precisely. Instead, stars were traditionally assigned to one of six magnitude classes (the brightest to first magnitude, the dimmest to sixth), and Bayer typically ordered stars within a constellation by class: all the first-magnitude stars (in some order), followed by all the second-magnitude stars, and so on. Within each magnitude class, Bayer made no attempt to arrange stars by relative brightness. As a result, the brightest star in each class did not always get listed first in Bayer's order—and the brightest star overall did not necessarily get the designation "Alpha". A good example is the constellation Gemini, where Pollux is Beta Geminorum and the slightly dimmer Castor is Alpha Geminorum.
In addition, Bayer did not always follow the magnitude class rule; he sometimes assigned letters to stars according to their location within a constellation, or the order of their rising, or to historical or mythological details. Occasionally the order looks quite arbitrary.
Of the 88 modern constellations, there are at least 30 in which "Alpha" is not the brightest star, and four of those lack a star labeled "Alpha" altogether. The constellations with no alpha-designated star include Vela and Puppis—both formerly part of Argo Navis, whose Greek-letter stars were split between three constellations. The former α Argus is Canopus, now α Carinae in the modern constellation Carina.
In Orion, Bayer first designated Betelgeuse and Rigel, the two 1st-magnitude stars (those of magnitude 1.5 or less), as Alpha and Beta from north to south, with Betelgeuse (the shoulder) coming ahead of Rigel (the foot), even though the latter is usually the brighter. (Betelgeuse is a variable star and can at its maximum occasionally outshine Rigel.) Bayer then repeated the procedure for the stars of the 2nd magnitude, labeling them from gamma through zeta in "top-down" (north-to-south) order. Letters as far as Latin p were used for stars of the sixth magnitude.
Although Bayer did not use uppercase Latin letters (except A) for "fixed stars", he did use them to label other items shown on his charts, such as neighboring constellations, "temporary stars", miscellaneous astronomical objects, or reference lines like the Tropic of Cancer. In Cygnus, for example, Bayer's fixed stars run through g, and on this chart Bayer employs H through P as miscellaneous labels, mostly for neighboring constellations. Bayer did not intend such labels as catalog designations, but some have survived to refer to astronomical objects: P Cygni for example is still used as a designation for Nova Cyg 1600. Tycho's Star (SN 1572), another "temporary star", appears as B Cassiopeiae. In charts for constellations that did not exhaust the Greek letters, Bayer sometimes used the leftover Greek letters for miscellaneous labels as well.
Ptolemy designated four stars as "border stars", each shared by two constellations: Alpheratz (in Andromeda and Pegasus), Elnath (in Taurus and Auriga), Nu Boötis (Nu¹ and Nu²) (in Boötes and Hercules) and Fomalhaut (in Piscis Austrinus and Aquarius). Bayer assigned the first three of these stars a Greek letter from both constellations: Alpha Andromedae = Delta Pegasi, Beta Tauri = Gamma Aurigae, and Nu Boötis = Psi Herculis. (He catalogued Fomalhaut only once, as Alpha Piscis Austrini.) When the International Astronomical Union (IAU) assigned definite boundaries to the constellations in 1930, it declared that stars and other celestial objects can belong to only one constellation. Consequently, the redundant second designation in each pair above has dropped out of use.
Bayer assigned two stars duplicate names by mistake: Xi Arietis (duplicated as Psi Ceti) and Kappa Ceti (Kappa¹ and Kappa²) (duplicated as g Tauri). He corrected these in a later atlas, and the duplicate names were no longer used.
Other cases of multiple Bayer designations arose when stars named by Bayer in one constellation were transferred by later astronomers to a different constellation. Bayer's Gamma and Omicron Scorpii, for example, were later reassigned from Scorpius to Libra and given the new names Sigma and Upsilon Librae. (To add to the confusion, the star now known as Omicron Scorpii was not named by Bayer but was assigned the designation o Scorpii (Latin lowercase 'o') by Lacaille—which later astronomers misinterpreted as omicron once Bayer's omicron had been reassigned to Libra.)
A few stars no longer lie (according to the modern constellation boundaries) within the constellation for which they are named. The proper motion of Rho Aquilae, for example, carried it across the boundary into Delphinus in 1992.
A further complication is the use of numeric superscripts to distinguish neighboring stars that Bayer (or a later astronomer) labeled with a common letter. Usually these are double stars (mostly optical doubles rather than true binary stars), but there are some exceptions such as the chain of stars π¹, π², π³, π⁴, π⁵ and π⁶ Orionis. Psi Aurigae has the most stars sharing a single Bayer letter, with ten (ψ¹ through ψ¹⁰). | [
{
"paragraph_id": 0,
"text": "A Bayer designation is a stellar designation in which a specific star is identified by a Greek or Latin letter followed by the genitive form of its parent constellation's Latin name. The original list of Bayer designations contained 1,564 stars. The brighter stars were assigned their first systematic names by the German astronomer Johann Bayer in 1603, in his star atlas Uranometria. Bayer catalogued only a few stars too far south to be seen from Germany, but later astronomers (including Nicolas-Louis de Lacaille and Benjamin Apthorp Gould) supplemented Bayer's catalog with entries for southern constellations.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Bayer assigned a lowercase Greek letter (alpha (α), beta (β), gamma (γ), etc.) or a Latin letter (A, b, c, etc.) to each star he catalogued, combined with the Latin name of the star's parent constellation in genitive (possessive) form. The constellation name is frequently abbreviated to a standard three-letter form. For example, Aldebaran in the constellation Taurus (the Bull) is designated α Tauri (abbreviated α Tau, pronounced Alpha Tauri), which means \"Alpha of the Bull\".",
"title": "Scheme"
},
{
"paragraph_id": 2,
"text": "Bayer used Greek letters for the brighter stars, but the Greek alphabet has only twenty-four letters, while a single constellation may contain fifty or more stars visible to the naked eye. When the Greek letters ran out, Bayer continued with Latin letters: uppercase A, followed by lowercase b through z (omitting j and v, but o was included), for a total of another 24 letters.",
"title": "Scheme"
},
{
"paragraph_id": 3,
"text": "Bayer did not label \"permanent\" stars with uppercase letters (except for A, which he used instead of a to avoid confusion with α). However, a number of stars in southern constellations have uppercase letter designations, like B Centauri and G Scorpii. These letters were assigned by later astronomers, notably Lacaille in his Coelum Australe Stelliferum and Gould in his Uranometria Argentina. Lacaille followed Bayer's use of Greek letters, but this was insufficient for many constellations. He used first the lowercase letters, starting with a, and if needed the uppercase letters, starting with A, thus deviating somewhat from Bayer's practice. Lacaille used the Latin alphabet three times over in the large constellation Argo Navis, once for each of the three areas that are now the constellations of Carina, Puppis and Vela. That was still insufficient for the number of stars, so he also used uppercase Latin letters such as N Velorum and Q Puppis. Lacaille assigned uppercase letters between R and Z in several constellations, but these have either been dropped to allow the assignment of those letters to variable stars or have actually turned out to be variable.",
"title": "Scheme"
},
{
"paragraph_id": 4,
"text": "In most constellations, Bayer assigned Greek and Latin letters to stars within a constellation in rough order of apparent brightness, from brightest to dimmest. The order is not necessarily a precise labeling from brightest to dimmest: in Bayer's day stellar brightness could not be measured precisely. Instead, stars were traditionally assigned to one of six magnitude classes (the brightest to first magnitude, the dimmest to sixth), and Bayer typically ordered stars within a constellation by class: all the first-magnitude stars (in some order), followed by all the second-magnitude stars, and so on. Within each magnitude class, Bayer made no attempt to arrange stars by relative brightness. As a result, the brightest star in each class did not always get listed first in Bayer's order—and the brightest star overall did not necessarily get the designation \"Alpha\". A good example is the constellation Gemini, where Pollux is Beta Geminorum and the slightly dimmer Castor is Alpha Geminorum.",
"title": "Order by magnitude class"
},
{
"paragraph_id": 5,
"text": "In addition, Bayer did not always follow the magnitude class rule; he sometimes assigned letters to stars according to their location within a constellation, or the order of their rising, or to historical or mythological details. Occasionally the order looks quite arbitrary.",
"title": "Order by magnitude class"
},
{
"paragraph_id": 6,
"text": "Of the 88 modern constellations, there are at least 30 in which \"Alpha\" is not the brightest star, and four of those lack a star labeled \"Alpha\" altogether. The constellations with no alpha-designated star include Vela and Puppis—both formerly part of Argo Navis, whose Greek-letter stars were split between three constellations. The former α Argus is Canopus, now α Carinae in the modern constellation Carina.",
"title": "Order by magnitude class"
},
{
"paragraph_id": 7,
"text": "In Orion, Bayer first designated Betelgeuse and Rigel, the two 1st-magnitude stars (those of magnitude 1.5 or less), as Alpha and Beta from north to south, with Betelgeuse (the shoulder) coming ahead of Rigel (the foot), even though the latter is usually the brighter. (Betelgeuse is a variable star and can at its maximum occasionally outshine Rigel.) Bayer then repeated the procedure for the stars of the 2nd magnitude, labeling them from gamma through zeta in \"top-down\" (north-to-south) order. Letters as far as Latin p were used for stars of the sixth magnitude.",
"title": "Orion as an example"
},
{
"paragraph_id": 8,
"text": "Although Bayer did not use uppercase Latin letters (except A) for \"fixed stars\", he did use them to label other items shown on his charts, such as neighboring constellations, \"temporary stars\", miscellaneous astronomical objects, or reference lines like the Tropic of Cancer. In Cygnus, for example, Bayer's fixed stars run through g, and on this chart Bayer employs H through P as miscellaneous labels, mostly for neighboring constellations. Bayer did not intend such labels as catalog designations, but some have survived to refer to astronomical objects: P Cygni for example is still used as a designation for Nova Cyg 1600. Tycho's Star (SN 1572), another \"temporary star\", appears as B Cassiopeiae. In charts for constellations that did not exhaust the Greek letters, Bayer sometimes used the leftover Greek letters for miscellaneous labels as well.",
"title": "Bayer's miscellaneous labels"
},
{
"paragraph_id": 9,
"text": "Ptolemy designated four stars as \"border stars\", each shared by two constellations: Alpheratz (in Andromeda and Pegasus), Elnath (in Taurus and Auriga), Nu Boötis (Nu and Nu)(in Boötes and Hercules) and Fomalhaut (in Piscis Austrinus and Aquarius). Bayer assigned the first three of these stars a Greek letter from both constellations: Alpha Andromedae = Delta Pegasi, Beta Tauri = Gamma Aurigae, and Nu Boötis = Psi Herculis. (He catalogued Fomalhaut only once, as Alpha Piscis Austrini.) When the International Astronomical Union (IAU) assigned definite boundaries to the constellations in 1930, it declared that stars and other celestial objects can belong to only one constellation. Consequently, the redundant second designation in each pair above has dropped out of use.",
"title": "Revised designations"
},
{
"paragraph_id": 10,
"text": "Bayer assigned two stars duplicate names by mistake: Xi Arietis (duplicated as Psi Ceti) and Kappa Ceti (Kappa and Kappa) (duplicated as g Tauri). He corrected these in a later atlas, and the duplicate names were no longer used.",
"title": "Revised designations"
},
{
"paragraph_id": 11,
"text": "Other cases of multiple Bayer designations arose when stars named by Bayer in one constellation were transferred by later astronomers to a different constellation. Bayer's Gamma and Omicron Scorpii, for example, were later reassigned from Scorpius to Libra and given the new names Sigma and Upsilon Librae. (To add to the confusion, the star now known as Omicron Scorpii was not named by Bayer but was assigned the designation o Scorpii (Latin lowercase 'o') by Lacaille—which later astronomers misinterpreted as omicron once Bayer's omicron had been reassigned to Libra.)",
"title": "Revised designations"
},
{
"paragraph_id": 12,
"text": "A few stars no longer lie (according to the modern constellation boundaries) within the constellation for which they are named. The proper motion of Rho Aquilae, for example, carried it across the boundary into Delphinus in 1992.",
"title": "Revised designations"
},
{
"paragraph_id": 13,
"text": "A further complication is the use of numeric superscripts to distinguish neighboring stars that Bayer (or a later astronomer) labeled with a common letter. Usually these are double stars (mostly optical doubles rather than true binary stars), but there are some exceptions such as the chain of stars π, π, π, π, π and π Orionis. The most stars given the same Bayer designation but with an extra number attached to it is Psi Aurigae. (ψ, ψ, ψ, ψ, ψ, ψ, ψ, ψ, ψ, ψ)",
"title": "Revised designations"
}
] | A Bayer designation is a stellar designation in which a specific star is identified by a Greek or Latin letter followed by the genitive form of its parent constellation's Latin name. The original list of Bayer designations contained 1,564 stars. The brighter stars were assigned their first systematic names by the German astronomer Johann Bayer in 1603, in his star atlas Uranometria. Bayer catalogued only a few stars too far south to be seen from Germany, but later astronomers supplemented Bayer's catalog with entries for southern constellations. | 2001-09-19T14:18:32Z | 2023-12-07T06:20:17Z | [
"Template:Short description",
"Template:Efn",
"Template:Nowrap",
"Template:Rp",
"Template:Reflist",
"Template:Cite book",
"Template:Cite journal",
"Template:Notelist"
] | https://en.wikipedia.org/wiki/Bayer_designation |
4,200 | Boötes | Boötes (/boʊˈoʊtiːz/ boh-OH-teez) is a constellation in the northern sky, located between 0° and +60° declination, and 13 and 16 hours of right ascension on the celestial sphere. The name comes from Latin: Boōtēs, which comes from Greek: Βοώτης, translit. Boṓtēs 'herdsman' or 'plowman' (literally, 'ox-driver'; from βοῦς boûs 'cow').
One of the 48 constellations described by the 2nd-century astronomer Ptolemy, Boötes is now one of the 88 modern constellations. It contains the fourth-brightest star in the night sky, the orange giant Arcturus. Epsilon Boötis, or Izar, is a colourful multiple star popular with amateur astronomers. Boötes is home to many other bright stars, including eight above the fourth magnitude and an additional 21 above the fifth magnitude, making a total of 29 stars easily visible to the naked eye.
In ancient Babylon, the stars of Boötes were known as SHU.PA. They were apparently depicted as the god Enlil, who was the leader of the Babylonian pantheon and special patron of farmers. Boötes may have been represented by the animal foreleg constellation in ancient Egypt, resembling that of an ox sufficiently to have been originally proposed as the "foreleg of ox" by Berio.
Homer mentions Boötes in the Odyssey as a celestial reference for navigation, describing it as "late-setting" or "slow to set". Exactly whom Boötes is supposed to represent in Greek mythology is not clear. According to one version, he was a son of Demeter, Philomenus, twin brother of Plutus, a plowman who drove the oxen in the constellation Ursa Major. This agrees with the constellation's name. The ancient Greeks saw the asterism now called the "Big Dipper" or "Plough" as a cart with oxen. Some myths say that Boötes invented the plow and was memorialized for his ingenuity as a constellation.
Another myth associated with Boötes by Hyginus is that of Icarius, who was schooled as a grape farmer and winemaker by Dionysus. Icarius made wine so strong that those who drank it appeared poisoned, which caused shepherds to avenge their supposedly poisoned friends by killing Icarius. Maera, Icarius' dog, brought his daughter Erigone to her father's body, whereupon both she and the dog committed suicide. Zeus then chose to honor all three by placing them in the sky as constellations: Icarius as Boötes, Erigone as Virgo, and Maera as Canis Major or Canis Minor.
Following another reading, the constellation is identified with Arcas and also referred to as Arcas and Arcturus, son of Zeus and Callisto. Arcas was brought up by his maternal grandfather Lycaon, to whom one day Zeus went and had a meal. To verify that the guest was really the king of the gods, Lycaon killed his grandson and prepared a meal made from his flesh. Zeus noticed and became very angry, transforming Lycaon into a wolf and giving life back to his son. In the meantime Callisto had been transformed into a she-bear by Zeus's wife Hera, who was angry at Zeus's infidelity. This is corroborated by the Greek name for Boötes, Arctophylax, which means "Bear Watcher".
Callisto, in the form of a bear, was almost killed by her son, who was out hunting. Zeus rescued her, taking her into the sky where she became Ursa Major, "the Great Bear". Arcturus, the name of the constellation's brightest star, comes from the Greek word meaning "guardian of the bear". Sometimes Arcturus is depicted as leading the hunting dogs of nearby Canes Venatici and driving the bears of Ursa Major and Ursa Minor.
Several former constellations were formed from stars now included in Boötes. Quadrans Muralis, the Quadrant, was a constellation created near Beta Boötis from faint stars. It was designated in 1795 by Jérôme Lalande, an astronomer who used a quadrant to perform detailed astrometric measurements. Lalande worked with Nicole-Reine Lepaute and others to predict the 1758 return of Halley's Comet. Quadrans Muralis was formed from the stars of eastern Boötes, western Hercules and Draco. It was originally called Le Mural by Jean Fortin in his 1795 Atlas Céleste; it was not given the name Quadrans Muralis until Johann Bode's 1801 Uranographia. The constellation was quite faint, with its brightest stars reaching the 5th magnitude. Mons Maenalus, representing the Maenalus mountains, was created by Johannes Hevelius in 1687 at the foot of the constellation's figure. The mountain was named for the son of Lycaon, Maenalus. The mountain, one of Diana's hunting grounds, was also holy to Pan.
The stars of Boötes were incorporated into many different Chinese constellations. Arcturus was part of the most prominent of these, variously designated as the celestial king's throne (Tian Wang) or the Blue Dragon's horn (Daijiao); the name Daijiao, meaning "great horn", is more common. Arcturus was given such importance in Chinese celestial mythology because of its status marking the beginning of the lunar calendar, as well as its status as the brightest star in the northern night sky.
Two constellations flanked Daijiao: Yousheti to the right and Zuosheti to the left; they represented companions that orchestrated the seasons. Zuosheti was formed from modern Zeta, Omicron and Pi Boötis, while Yousheti was formed from modern Eta, Tau and Upsilon Boötis. Dixi, the Emperor's ceremonial banquet mat, was north of Arcturus, consisting of the stars 12, 11 and 9 Boötis. Another northern constellation was Qigong, the Seven Dukes, which mostly straddled the Boötes-Hercules border. It included either Delta Boötis or Beta Boötis as its terminus.
The other Chinese constellations made up of the stars of Boötes existed in the modern constellation's north; they are all representations of weapons. Tianqiang, the spear, was formed from Iota, Kappa and Theta Boötis; Genghe, variously representing a lance or shield, was formed from Epsilon, Rho and Sigma Boötis.
There were also two weapons made up of a single star. Xuange, the halberd, was represented by Lambda Boötis, and Zhaoyao, either the sword or the spear, was represented by Gamma Boötis.
Two Chinese constellations have an uncertain placement in Boötes. Kangchi, the lake, was placed south of Arcturus, though its specific location is disputed. It may have been placed entirely in Boötes, on either side of the Boötes-Virgo border, or on either side of the Virgo-Libra border. The constellation Zhouding, a bronze tripod-mounted container used for food, was sometimes cited as the stars 1, 2 and 6 Boötis. However, it has also been associated with three stars in Coma Berenices.
Boötes is also known to Native American cultures. In Yup'ik language, Boötes is Taluyaq, literally "fish trap," and the funnel-shaped part of the fish trap is known as Ilulirat.
Boötes is a constellation bordered by Virgo to the south, Coma Berenices and Canes Venatici to the west, Ursa Major to the northwest, Draco to the northeast, and Hercules, Corona Borealis and Serpens Caput to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Boo". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 16 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 13h 36.1m and 15h 49.3m, while the declination coordinates stretch from +7.36° to +55.1°. Covering 907 square degrees, Boötes culminates at midnight around 2 May and ranks 13th in area.
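Since 24 hours of right ascension span the full 360°, one hour corresponds to 15°. A sketch converting the quoted borders to degrees (function name ours, for illustration):

    def ra_to_degrees(hours, minutes=0.0):
        # 24 h of right ascension = 360 deg, so 1 h = 15 deg
        return (hours + minutes / 60) * 15

    west = ra_to_degrees(13, 36.1)  # -> ~204.0 deg
    east = ra_to_degrees(15, 49.3)  # -> ~237.3 deg
    print(east - west)              # -> ~33.3 deg of right ascension spanned by Boötes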
Colloquially, its pattern of stars has been likened to a kite or ice cream cone. However, depictions of Boötes have varied historically. Aratus described him circling the north pole, herding the two bears. Later ancient Greek depictions, described by Ptolemy, have him holding the reins of his hunting dogs (Canes Venatici) in his left hand, with a spear, club, or staff in his right hand. After Hevelius introduced Mons Maenalus in 1681, Boötes was often depicted standing on the Peloponnese mountain. By 1801, when Johann Bode published his Uranographia, Boötes had acquired a sickle, which was also held in his left hand.
The placement of Arcturus has also been mutable through the centuries. Traditionally, Arcturus lay between his thighs, as Ptolemy depicted him. However, Germanicus Caesar deviated from this tradition by placing Arcturus "where his garment is fastened by a knot".
In his Uranometria, Johann Bayer used the Greek letters alpha through to omega and then A to k to label what he saw as the most prominent 35 stars in the constellation, with subsequent astronomers splitting Kappa, Mu, Nu and Pi into two stars each. Nu is also the same star as Psi Herculis. John Flamsteed numbered 54 stars for the constellation.
Located 36.7 light-years from Earth, Arcturus, or Alpha Boötis, is the brightest star in Boötes and the fourth-brightest star in the sky at an apparent magnitude of −0.05; it is also the brightest star north of the celestial equator, just shading out Vega and Capella. Its name comes from the Greek for "bear-keeper". An orange giant of spectral class K1.5III, Arcturus is an ageing star that has exhausted its core supply of hydrogen and cooled and expanded to a diameter of 27 solar diameters, equivalent to approximately 32 million kilometers. Though its mass is approximately one solar mass (M☉), Arcturus shines with 133 times the luminosity of the Sun (L☉).
Bayer located Arcturus above the Herdman's left knee in his Uranometria. Nearby Eta Boötis, or Muphrid, is the uppermost star denoting the left leg. It is a 2.68-magnitude star 37 light-years distant with a spectral class of G0IV, indicating it has just exhausted its core hydrogen and is beginning to expand and cool. It is 9 times as luminous as the Sun and has 2.7 times its diameter. Analysis of its spectrum reveals that it is a spectroscopic binary. Muphrid and Arcturus lie only 3.3 light-years away from each other. Viewed from Arcturus, Muphrid would have a visual magnitude of −2½, while Arcturus would be around visual magnitude −4½ when seen from Muphrid.
Marking the herdsman's head is Beta Boötis, or Nekkar, a yellow giant of magnitude 3.5 and spectral type G8IIIa. Like Arcturus, it has expanded and cooled off the main sequence—likely to have lived most of its stellar life as a blue-white B-type main sequence star. Its common name comes from the Arabic phrase for "ox-driver". It is 219 light-years away and has a luminosity of 58 L☉.
Located 86 light-years distant, Gamma Boötis, or Seginus, is a white giant star of spectral class A7III, with a luminosity 34 times and diameter 3.5 times that of the Sun. It is a Delta Scuti variable, ranging between magnitudes 3.02 and 3.07 every 7 hours. These stars are short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study asteroseismology.
Delta Boötis is a wide double star with a primary of magnitude 3.5 and a secondary of magnitude 7.8. The primary is a yellow giant that has cooled and expanded to 10.4 times the diameter of the Sun. Of spectral class G8IV, it is around 121 light-years away, while the secondary is a yellow main sequence star of spectral type G0V. The two are thought to take 120,000 years to orbit each other.
Mu Boötis, known as Alkalurops, is a triple star popular with amateur astronomers. It has an overall magnitude of 4.3 and is 121 light-years away. Its name is from the Arabic phrase for "club" or "staff". The primary appears to be of magnitude 4.3 and is blue-white. The secondary appears to be of magnitude 6.5, but is actually a close double star itself with a primary of magnitude 7.0 and a secondary of magnitude 7.6. The secondary and tertiary stars have an orbital period of 260 years. The primary has an absolute magnitude of 2.6 and is of spectral class F0. The secondary and tertiary stars are separated by 2 arcseconds; the primary and secondary are separated by 109.1 arcseconds at an angle of 171 degrees.
Nu Boötis is an optical double star. The primary is an orange giant of magnitude 5.0 and the secondary is a white star of magnitude 5.0. The primary is 870 light-years away and the secondary is 430 light-years.
Epsilon Boötis, also known as Izar or Pulcherrima, is a close triple star popular with amateur astronomers and the most prominent binary star in Boötes. The primary is a yellow- or orange-hued magnitude 2.5 giant star, the secondary is a magnitude 4.6 blue-hued main-sequence star, and the tertiary is a magnitude 12.0 star. The system is 210 light-years away. The name "Izar" comes from the Arabic word for "girdle" or "loincloth", referring to its location in the constellation. The name "Pulcherrima" comes from the Latin phrase for "most beautiful", referring to its contrasting colors in a telescope. The primary and secondary stars are separated by 2.9 arcseconds at an angle of 341 degrees; the primary's spectral class is K0 and it has a luminosity of 200 L☉. To the naked eye, Izar has a magnitude of 2.37.
Nearby Rho and Sigma Boötis denote the herdsman's waist. Rho is an orange giant of spectral type K3III located around 160 light-years from Earth. It is ever so slightly variable, wavering by 0.003 of a magnitude from its average of 3.57. Sigma, a yellow-white main-sequence star of spectral type F3V, is suspected of varying in brightness from 4.45 to 4.49. It is around 52 light-years distant.
Traditionally known as Aulād al Dhiʼbah (أولاد الضباع – aulād al dhiʼb), "the Whelps of the Hyenas", Theta, Iota, Kappa and Lambda Boötis (or Xuange) are a small group of stars in the far north of the constellation. The magnitude 4.05 Theta Boötis has a spectral type of F7 and an absolute magnitude of 3.8. Iota Boötis is a triple star with a primary of magnitude 4.8 and spectral class of A7, a secondary of magnitude 7.5, and a tertiary of magnitude 12.6. The primary is 97 light-years away. The primary and secondary stars are separated by 38.5 arcseconds, at an angle of 33 degrees. The primary and tertiary stars are separated by 86.7 arcseconds at an angle of 194 degrees. Both the primary and tertiary appear white in a telescope, but the secondary appears yellow-hued.
Kappa Boötis is another wide double star. The primary is 155 light-years away and has a magnitude of 4.5. The secondary is 196 light-years away and has a magnitude of 6.6. The two components are separated by 13.4 arcseconds, at an angle of 236 degrees. The primary, with spectral class A7, appears white and the secondary appears bluish.
An apparent magnitude 4.18 type A0p star, Lambda Boötis is the prototype of a class of chemically peculiar stars, only some of which pulsate as Delta Scuti-type stars. The distinction between the Lambda Boötis stars as a class of stars with peculiar spectra, and the Delta Scuti stars whose class describes pulsation in low-overtone pressure modes, is an important one. While many Lambda Boötis stars pulsate and are Delta Scuti stars, not many Delta Scuti stars have Lambda Boötis peculiarities, since the Lambda Boötis stars are a much rarer class whose members can be found both inside and outside the Delta Scuti instability strip. Lambda Boötis stars are dwarf stars that can be either spectral class A or F. Like BL Boötis-type stars, they are metal-poor. Scientists have had difficulty explaining the characteristics of Lambda Boötis stars, partly because only around 60 confirmed members exist, but also due to heterogeneity in the literature. Lambda has an absolute magnitude of 1.8.
There are two dimmer F-type stars, magnitude 4.83 12 Boötis, class F8; and magnitude 4.93 45 Boötis, class F5. Xi Boötis is a G8 yellow dwarf of magnitude 4.55 and absolute magnitude 5.5. Two dimmer G-type stars are magnitude 4.86 31 Boötis, class G8, and magnitude 4.76 44 Boötis, class G0.
Of apparent magnitude 4.06, Upsilon Boötis has a spectral class of K5 and an absolute magnitude of −0.3. Dimmer than Upsilon Boötis is magnitude 4.54 Phi Boötis, with a spectral class of K2 and an absolute magnitude of −0.1. Just slightly dimmer than Phi at magnitude 4.60 is O Boötis, which, like Izar, has a spectral class of K0. O Boötis has an absolute magnitude of 0.2. The other four dim stars are magnitude 4.91 6 Boötis, class K4; magnitude 4.86 20 Boötis, class K3; magnitude 4.81 Omega Boötis, class K4; and magnitude 4.83 A Boötis, class K1.
There is one bright B-class star in Boötes; magnitude 4.93 Pi Boötis, also called Alazal. It has a spectral class of B9 and is 40 parsecs from Earth. There is also one M-type star, magnitude 4.81 34 Boötis. It is of class gM0.
Besides Pulcherrima and Alkalurops, there are several other binary stars in Boötes:
44 Boötis (i Boötis) is a double variable star 42 light-years away. It has an overall magnitude of 4.8 and appears yellow to the naked eye. The primary is of magnitude 5.3 and the secondary is of magnitude 6.1; their orbital period is 220 years. The secondary is itself an eclipsing variable star with a range of 0.6 magnitudes; its orbital period is 6.4 hours. It is a W Ursae Majoris variable that ranges in magnitude from a minimum of 7.1 to a maximum of 6.5 every 0.27 days. Both stars are G-type stars. Another eclipsing binary star is ZZ Boötis, which has two F2-type components of almost equal mass, and ranges in magnitude from a minimum of 6.79 to a maximum of 7.44 over a period of 5.0 days.
Two of the brighter Mira-type variable stars in the constellation are R and S Boötis. Both are red giants that range greatly in magnitude—from 6.2 to 13.1 over 223.4 days, and 7.8 to 13.8 over a period of 270.7 days, respectively. Also red giants, V and W Boötis are semi-regular variable stars that range in magnitude from 7.0 to 12.0 over a period of 258 days, and magnitude 4.7 to 5.4 over 450 days, respectively.
BL Boötis is the prototype of its class of pulsating variable stars, the anomalous Cepheids. These stars are somewhat similar to Cepheid variables, but they do not have the same relationship between their period and luminosity. Their periods are similar to RRAB variables; however, they are far brighter than these stars. BL Boötis is a member of the cluster NGC 5466. Anomalous Cepheids are metal poor and have masses not much larger than the Sun's, on average, 1.5 M☉. BL Boötis type stars are a subtype of RR Lyrae variables.
T Boötis was a nova observed in April 1860 at a magnitude of 9.7. It has never been observed since, but that does not preclude the possibility of it being a highly irregular variable star or a recurrent nova.
Extrasolar planets have been discovered encircling ten stars in Boötes as of 2012. Tau Boötis is orbited by a large planet, discovered in 1999. The host star itself is a magnitude 4.5 star of type F7V, 15.6 parsecs from Earth. It has a mass of 1.3 M☉ and a radius of 1.331 solar radii (R☉); a companion, GJ527B, orbits at a distance of 240 AU. Tau Boötis b, the sole planet discovered in the system, orbits at a distance of 0.046 AU every 3.31 days. Discovered through radial velocity measurements, it has a mass of 5.95 Jupiter masses (MJ). This makes it a hot Jupiter. The host star and planet are tidally locked, meaning that the planet's orbit and the star's particularly high rotation are synchronized. Furthermore, a slight variability in the host star's light may be caused by magnetic interactions with the planet. Carbon monoxide is present in the planet's atmosphere. Tau Boötis b does not transit its star; rather, its orbit is inclined 46 degrees.
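The quoted separations and periods are mutually consistent under Kepler's third law. A rough check in solar-system units (P in years, a in AU, stellar mass in solar masses); the small discrepancies reflect rounding in the quoted values:

    def orbital_period_days(a_au, star_mass_msun):
        # Kepler's third law: P[yr]^2 = a[AU]^3 / M[Msun]
        return (a_au**3 / star_mass_msun) ** 0.5 * 365.25

    print(orbital_period_days(0.046, 1.3))   # Tau Bootis b: ~3.2 days vs the quoted 3.31
    print(orbital_period_days(2.62, 1.01))   # HD 132563B b (described below): ~1540 days vs 1544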
Like Tau Boötis b, HAT-P-4 b is also a hot Jupiter, noted for orbiting a particularly metal-rich host star and for its low density. Discovered in 2007, HAT-P-4 b has a mass of 0.68 MJ and a radius of 1.27 RJ. It orbits every 3.05 days at a distance of 0.04 AU. HAT-P-4, the host star, is an F-type star of magnitude 11.2, 310 parsecs from Earth. It is larger than the Sun, with a mass of 1.26 M☉ and a radius of 1.59 R☉.
Boötes is also home to multiple-planet systems. HD 128311 is the host star for a two-planet system, consisting of HD 128311 b and HD 128311 c, discovered in 2002 and 2005, respectively. HD 128311 b is the smaller planet, with a mass of 2.18 MJ; it was discovered through radial velocity observations. It orbits at almost the same distance as Earth, at 1.099 AU; however, its orbital period is significantly longer at 448.6 days.
The larger of the two, HD 128311 c, has a mass of 3.21 MJ and was discovered in the same manner. It orbits every 919 days inclined at 50°, and is 1.76 AU from the host star. The host star, HD 128311, is a K0V-type star located 16.6 parsecs from Earth. It is smaller than the Sun, with a mass of 0.84 M☉ and a radius of 0.73 R☉; it also appears below the threshold of naked-eye visibility at an apparent magnitude of 7.51.
There are several single-planet systems in Boötes. HD 132406 is a Sun-like star of spectral type G0V with an apparent magnitude of 8.45, 231.5 light-years from Earth. It has a mass of 1.09 M☉ and a radius of 1 R☉. The star is orbited by a gas giant, HD 132406 b, discovered in 2007 by the radial velocity method. HD 132406 b orbits 1.98 AU from its host star with a period of 974 days and has a mass of 5.61 MJ.
WASP-23 is a star with one orbiting planet, WASP-23 b. The planet, discovered by the transit method in 2010, orbits every 2.944 days very close to its star, at 0.0376 AU. It is smaller than Jupiter, at 0.884 MJ and 0.962 RJ. Its star is a K1V-type star of apparent magnitude 12.7, far below naked-eye visibility, and smaller than the Sun at 0.78 M☉ and 0.765 R☉.
HD 131496 is also encircled by one planet, HD 131496 b. The star is of type K0 and is located 110 parsecs from Earth; it appears at a visual magnitude of 7.96. It is significantly larger than the Sun, with a mass of 1.61 M☉ and a radius of 4.6 solar radii. Its one planet, discovered in 2011 by the radial velocity method, has a mass of 2.2 MJ; its radius is as yet undetermined. HD 131496 b orbits at a distance of 2.09 AU with a period of 883 days.
Another single-planet system in Boötes is HD 132563, a triple star system. The parent star, technically HD 132563B, is a star of magnitude 9.47, 96 parsecs from Earth. It is almost exactly the size of the Sun, with the same radius and a mass only 1% greater. Its planet, HD 132563B b, was discovered in 2011 by the radial velocity method. At 1.49 MJ, it orbits 2.62 AU from its star with a period of 1544 days. Its orbit is somewhat elliptical, with an eccentricity of 0.22. HD 132563B b is one of very few planets found in triple star systems; it orbits the isolated member of the system, which is separated from the other components, a spectroscopic binary, by 400 AU.
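With an eccentricity of 0.22, the star–planet separation swings noticeably over each 1544-day orbit. A quick sketch of the extremes from the quoted elements (illustrative arithmetic, not from the source):

    a, e = 2.62, 0.22        # semi-major axis (AU) and eccentricity from the text
    print(a * (1 - e))       # periastron: about 2.04 AU
    print(a * (1 + e))       # apastron:  about 3.20 AU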
Also discovered through the radial velocity method, albeit a year earlier, is HD 136418 b, a two-Jupiter-mass planet that orbits the star HD 136418 at a distance of 1.32 AU with a period of 464.3 days. Its host star is a magnitude 7.88 G5-type star, 98.2 parsecs from Earth. It has a radius of 3.4 R☉ and a mass of 1.33 M☉.
WASP-14 b is one of the most massive and dense exoplanets known, with a mass of 7.341 MJ and a radius of 1.281 RJ, giving a bulk density of 4.6 grams per cubic centimeter. Discovered via the transit method, it orbits 0.036 AU from its host star with a period of 2.24 days. Its host star, WASP-14, is an F5V-type star of magnitude 9.75, 160 parsecs from Earth, with a radius of 1.306 R☉ and a mass of 1.211 M☉; it also shows an unusually high lithium abundance.
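That density figure follows directly from the quoted mass and radius. A minimal sketch, using standard values for Jupiter's mass and mean radius:

    import math

    M_JUP_KG = 1.898e27      # Jupiter's mass in kilograms
    R_JUP_M = 6.9911e7       # Jupiter's mean radius in meters

    def density_g_cm3(mass_mj, radius_rj):
        """Bulk density in g/cm^3 from mass and radius in Jupiter units."""
        mass = mass_mj * M_JUP_KG
        volume = (4.0 / 3.0) * math.pi * (radius_rj * R_JUP_M) ** 3
        return mass / volume / 1000.0   # convert kg/m^3 to g/cm^3

    print(density_g_cm3(7.341, 1.281))  # about 4.6, matching the stated value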
Boötes is in a part of the celestial sphere facing away from the plane of our home Milky Way galaxy, and so does not have open clusters or nebulae. Instead, it has one bright globular cluster and many faint galaxies. The globular cluster NGC 5466 has an overall magnitude of 9.1 and a diameter of 11 arcminutes. It is a very loose globular cluster with fairly few stars and may appear as a rich, concentrated open cluster in a telescope. NGC 5466 is classified as a Shapley–Sawyer Concentration Class 12 cluster, reflecting its sparsity. Its fairly large diameter means that it has a low surface brightness, so it appears far dimmer than the catalogued magnitude of 9.1 and requires a large amateur telescope to view. Only approximately 12 stars are resolved by an amateur instrument.
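The cluster's low surface brightness can be quantified with the usual relation S = m + 2.5 log10(A), where A is the angular area in square arcseconds. A rough sketch, approximating NGC 5466 as a uniform 11-arcminute disc:

    import math

    m, diameter_arcmin = 9.1, 11.0
    area_arcsec2 = math.pi * (diameter_arcmin * 60 / 2) ** 2
    print(m + 2.5 * math.log10(area_arcsec2))  # about 22.9 mag per square arcsecond

A mean surface brightness near 23 mag/arcsec² is close to the level of the dark night-sky background, which is why the cluster appears far dimmer than its integrated magnitude of 9.1 suggests.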
Boötes has two bright galaxies. NGC 5248 (Caldwell 45) is a type Sc galaxy (a variety of spiral galaxy) of magnitude 10.2. It measures 6.5 by 4.9 arcminutes. Fifty million light-years from Earth, NGC 5248 is a member of the Virgo Cluster of galaxies; it has dim outer arms and obvious H II regions, dust lanes and young star clusters. NGC 5676 is another type Sc galaxy of magnitude 10.9. It measures 3.9 by 2.0 arcminutes. Other galaxies include NGC 5008, a type Sc emission-line galaxy, NGC 5548, a type S Seyfert galaxy, NGC 5653, a type S HII galaxy, NGC 5778 (also classified as NGC 5825), a type E galaxy that is the brightest of its cluster, NGC 5886, and NGC 5888, a type SBb galaxy. NGC 5698 is a barred spiral galaxy, notable for being the host of the 2005 supernova SN 2005bc, which peaked at magnitude 15.3.
Further away lies the 250-million-light-year-diameter Boötes void, a huge space largely empty of galaxies. Discovered by Robert Kirshner and colleagues in 1981, it is roughly 700 million light-years from Earth. Beyond it and within the bounds of the constellation, lie two superclusters at around 830 million and 1 billion light-years distant.
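Those figures imply that the void covers a remarkably large patch of sky. A quick small-angle estimate from the quoted diameter and distance (ignoring cosmological corrections, an assumption that is harmless at this precision):

    import math

    diameter_ly, distance_ly = 250e6, 700e6     # figures from the text
    angle_deg = 2 * math.degrees(math.atan(diameter_ly / 2 / distance_ly))
    print(angle_deg)  # roughly 20 degrees across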
The Hercules–Corona Borealis Great Wall, the largest-known structure in the Universe, covers a significant part of Boötes.
Boötes is home to the Quadrantid meteor shower, the most prolific annual meteor shower. It was discovered in January 1835 and named in 1864 by Alexander Herschel. The radiant is located in northern Boötes near Kappa Boötis, in its namesake former constellation of Quadrans Muralis. Quadrantid meteors are dim but reach a peak visible rate of approximately 100 meteors per hour on January 3–4; the zenithal hourly rate at peak is approximately 130 meteors per hour. It is also a very narrow shower.
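The gap between the roughly 100 meteors per hour actually seen and the zenithal rate of about 130 is mostly geometry: the zenithal hourly rate assumes a radiant at the zenith under ideal dark skies, and the observable rate scales with the sine of the radiant's altitude, with a further correction for limiting magnitude. A minimal sketch of the standard correction; the population index r = 2.1 is an assumed, typical value, not taken from the text:

    import math

    def expected_rate(zhr, radiant_alt_deg, lim_mag=6.5, r=2.1):
        # Observed hourly rate from ZHR, radiant altitude, and sky limiting
        # magnitude; r is the shower's population index (assumed value).
        return zhr * math.sin(math.radians(radiant_alt_deg)) / r ** (6.5 - lim_mag)

    print(expected_rate(130, 50))  # about 100 per hour with the radiant 50 degrees up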
The Quadrantids are notoriously difficult to observe because of a low radiant and often inclement weather. The parent body of the meteor shower has been disputed for decades; however, Peter Jenniskens has proposed 2003 EH1, a minor planet, as the parent. 2003 EH1 may be linked to C/1490 Y1, a comet previously thought to be a potential parent body for the Quadrantids.
2003 EH1 is a short-period comet of the Jupiter family; 500 years ago, it experienced a catastrophic breakup event. It is now dormant. The Quadrantids had notable displays in 1982, 1985 and 2004. Meteors from this shower often appear to have a blue hue and travel at a moderate speed of 41.5–43 kilometers per second.
On April 28, 1984, a remarkable outburst of the normally placid Alpha Bootids was observed by visual observer Frank Witte from 00:00 to 2:30 UTC. In a 6 cm telescope, he observed 433 meteors in a field of view near Arcturus with a diameter of less than 1°. Peter Jenniskens comments that this outburst resembled a "typical dust trail crossing". The Alpha Bootids normally begin on April 14, peak on April 27 and 28, and finish on May 12. Their meteors are slow-moving, with a velocity of 20.9 kilometers per second. They may be related to Comet 73P/Schwassmann–Wachmann 3, but this connection is only theorized.
The June Bootids, also known as the Iota Draconids, are a meteor shower associated with the comet 7P/Pons–Winnecke, first recognized on May 27, 1916, by William F. Denning. The shower, with its slow meteors, was not observed prior to 1916 because Earth did not cross the comet's dust trail until Jupiter perturbed Pons–Winnecke's orbit, bringing it within 0.03 AU (4.5 million km; 2.8 million mi) of Earth's orbit in the first year the June Bootids were observed.
In 1982, E. A. Reznikov discovered that the 1916 outburst was caused by material released from the comet in 1819. Another outburst of the June Bootids was not observed until 1998, because Comet Pons–Winnecke's orbit was not in a favorable position. However, on June 27, 1998, an outburst of meteors radiating from Boötes, later confirmed to be associated with Pons-Winnecke, was observed. They were incredibly long-lived, with trails of the brightest meteors lasting several seconds at times. Many fireballs, green-hued trails, and even some meteors that cast shadows were observed throughout the outburst, which had a maximum zenithal hourly rate of 200–300 meteors per hour.
Two Russian astronomers determined in 2002 that material ejected from the comet in 1825 was responsible for the 1998 outburst. Ejecta from the comet dating to 1819, 1825 and 1830 was predicted to enter Earth's atmosphere on June 23, 2004. The predictions of a shower less spectacular than the 1998 showing were borne out in a display that had a maximum zenithal hourly rate of 16–20 meteors per hour that night. The June Bootids are not expected to have another outburst in the next 50 years.
Typically, only 1–2 dim, very slow meteors are visible per hour; the average June Bootid has a magnitude of 5.0. The shower is related to the Alpha Draconids and the Bootids-Draconids. It lasts from June 27 to July 5, with a peak on the night of June 28. The June Bootids are classified as a class III (variable) shower and have an average entry velocity of 18 kilometers per second. The radiant is located 7 degrees north of Beta Boötis.
The Beta Bootids are a weak shower that begins on January 5, peaks on January 16, and ends on January 18; its meteors travel at 43 km/s. The January Bootids are a short, young meteor shower that begins on January 9, peaks from January 16 to January 18, and ends on January 18.
The Phi Bootids are another weak shower radiating from Boötes. Discovered in 2006, the shower begins on April 16, peaks on April 30 and May 1, and ends on May 12. Its meteors are slow-moving, with a velocity of 15.1 km/s, and the peak hourly rate can reach six meteors per hour. Though named for a star in Boötes, the Phi Bootid radiant has since moved into Hercules. The meteor stream is associated with three different asteroids: 1620 Geographos, 2062 Aten and 1978 CA.
The Lambda Bootids, part of the Bootid-Coronae Borealid Complex, are a weak annual shower with moderately fast meteors (41.75 km/s). The complex includes the Lambda Bootids as well as the Theta Coronae Borealids and Xi Coronae Borealids. All of the Bootid-Coronae Borealid showers are Jupiter-family comet showers; the streams in the complex have highly inclined orbits.
There are several minor showers in Boötes, some of which have yet to be verified. The Rho Bootids radiate from near their namesake star and were hypothesized in 2010. The average Rho Bootid has an entry velocity of 43 km/s; the shower peaks in November and lasts for three days.
The Rho Bootid shower is part of the SMA complex, a group of meteor showers related to the Taurids, which is in turn linked to the comet 2P/Encke. However, the link to the Taurid shower remains unconfirmed and may be a chance correlation. Another such shower is the Gamma Bootids, which were hypothesized in 2006. Gamma Bootids have an entry velocity of 50.3 km/s. The Nu Bootids, hypothesized in 2012, have faster meteors, with an entry velocity of 62.8 km/s.
Bernardino Ochino

Bernardino Ochino (1487–1564) was an Italian who was raised a Roman Catholic, later converted to Protestantism, and became a Protestant reformer.
Bernardino Ochino was born in Siena, the son of the barber Domenico Ochino, and at the age of 7 or 8, in around 1504, was entrusted to the order of Franciscan Friars. From 1510 he studied medicine at Perugia.
At the age of 38, Ochino transferred in 1534 to the newly founded Order of Friars Minor Capuchin. By then he was a close friend of Juan de Valdés, Pietro Bembo, Vittoria Colonna, Pietro Martire, and Carnesecchi. In 1538 he was elected vicar-general of his order. In 1539, urged by Bembo, he visited Venice and delivered a course of sermons showing a sympathy with justification by faith, which appeared more clearly in his Dialogues published the same year. He was suspected and denounced, but nothing ensued until the establishment of the Inquisition in Rome in June 1542, at the instigation of Cardinal Giovanni Pietro Carafa. Ochino received a citation to Rome, and set out to obey it about the middle of August. According to his own statement, he was deterred from presenting himself at Rome by the warnings of Cardinal Contarini, whom he found at Bologna, dying of poison administered by the reactionary party.
Ochino turned aside to Florence, and after some hesitation went across the Alps to Geneva. He was cordially received by John Calvin, and published within two years several volumes of Prediche, controversial tracts rationalizing his change of religion. He also addressed replies to marchioness Vittoria Colonna, Claudio Tolomei, and other Italian sympathizers who were reluctant to go to the same length as himself. His own breach with the Roman Catholic Church was final.
In 1545 Ochino became minister of the Italian Protestant congregation at Augsburg. From this time dates his contact with Caspar Schwenckfeld. He was compelled to flee when, in January 1547, the city was occupied by the imperial forces for the Diet of Augsburg.
Ochino found asylum in England, where he was made a prebendary of Canterbury Cathedral, received a pension from Edward VI's privy purse, and composed his major work, the Tragoedie or Dialoge of the unjuste usurped primacie of the Bishop of Rome. This text, originally written in Latin, is extant only in the 1549 translation of Bishop John Ponet. The form is a series of dialogues. Lucifer, enraged at the spread of Jesus's kingdom, convokes the fiends in council, and resolves to set up the pope as antichrist. The state, represented by the emperor Phocas, is persuaded to connive at the pope's assumption of spiritual authority; the other churches are intimidated into acquiescence; Lucifer's projects seem fully accomplished, when Heaven raises up Henry VIII of England and his son for their overthrow.
Several of Ochino's Prediche were translated into English by Anna Cooke; and he published numerous controversial treatises on the Continent. Ochino's Che Cosa è Christo was translated into Latin and English by the future Queen Elizabeth I of England in 1547.
In 1553 the accession of Mary I drove Ochino from England. He went to Basel, where Lelio Sozzini and the lawyer Martino Muralto were sent to secure him as pastor of the Italian church at Zürich, a post Ochino accepted. The Italian congregation there was composed mainly of refugees from Locarno. There for 10 years Ochino wrote books which gave increasing evidence of his alienation from the orthodoxy around him. The most important of these was the Labyrinth, a discussion of the freedom of the will, covertly undermining the Calvinistic doctrine of predestination.
In 1563 a long-simmering storm burst on Ochino with the publication of his Thirty Dialogues, in one of which his adversaries maintained that he had justified polygamy under the disguise of a pretended refutation. His dialogues on divorce and against the Trinity were also considered heretical.
Ochino was not given opportunity to defend himself, and was banished from Zürich. After being refused admission by other Protestant cities, he directed his steps towards Poland, at that time the most tolerant state in Europe. He had not resided there long when an edict appeared (August 8, 1564) banishing all foreign dissidents. Fleeing the country, he encountered the plague at Pińczów; three of his four children were carried off; and he himself, worn out by misfortune, died in solitude and obscurity at Slavkov in Moravia, about the end of 1564.
Ochino's reputation among Protestants was low. He was charged by Thomas Browne in 1643 with the authorship of the legendary-apocryphal heretical treatise De tribus Impostoribus, as well as with having carried his alleged approval of polygamy into practice.
His biographer Karl Benrath justified him, representing him as a fervent evangelist and at the same time as a speculative thinker with a passion for free inquiry. The picture is of Ochino always learning and unlearning and arguing out difficult questions with himself in his dialogues, frequently without attaining to any absolute conviction.
This article incorporates text from the 1902 Encyclopædia Britannica, which is in the public domain.
Bay of Quinte

The Bay of Quinte (/ˈkwɪnti/) is a long, narrow bay shaped like the letter "Z" on the northern shore of Lake Ontario in the province of Ontario, Canada. It is just west of the head of the Saint Lawrence River that drains the Great Lakes into the Gulf of Saint Lawrence. It is located about 200 kilometres (120 mi) east of Toronto and 350 kilometres (220 mi) west of Montreal.
The name "Quinte" is derived from "Kenté" or Kentio, an Iroquoian village located near the south shore of the Bay. Later on, an early French Catholic mission was built at Kenté, located on the north shore of what is now Prince Edward County, leading to the Bay being named after the Mission. Officially, in the Mohawk language, the community is called Kenhtèːke, which means "the place of the bay". The Cayuga name is Tayędaːneːgęˀ or Detgayęːdaːnegęˀ, "land of two logs."
The Bay, as it is known locally, provides some of the best trophy walleye angling in North America, as well as most sport fish common to the Great Lakes. The bay is subject to algal blooms in late summer. Zebra mussels, along with the other invasive species found in the Great Lakes, are present.
The Quinte area played a vital role in bootlegging during Prohibition in the United States: large volumes of liquor were produced in the area and shipped by boat across the bay to Lake Ontario, finally arriving in New York State, where the liquor was distributed. Illegal sales of liquor accounted for many fortunes in and around Belleville.
Tourism in the area is significant, especially in the summer months due to the Bay of Quinte and its fishing, local golf courses, provincial parks, and wineries.
The northern side of the bay is defined by Ontario's mainland, while the southern side follows the shore of the Prince Edward County headland. Beginning in the east with the outlet to Lake Ontario, the bay runs west-southwest for 25 kilometres (16 mi) to Picton (although this section is also called Adolphus Reach), where it turns north-northwest for another 20 kilometres (12 mi) as far as Deseronto. From there it turns south-southwest again for another 40 kilometres (25 mi), running past Big Island on the south and Belleville on the north. The width of the bay rarely exceeds two kilometres (1.2 mi). The bay ends at Trenton (Quinte West) and the Trent River, both also on the north side. The Murray Canal has been cut through the "Carrying Place", the few kilometres separating the end of the bay and Lake Ontario on the west side. The Trent River is part of the Trent-Severn Waterway, a canal connecting Lake Ontario to Lake Simcoe and then Georgian Bay on Lake Huron.
There are several sub-bays off the Bay of Quinte, including Hay Bay, Big Bay, and Muscote Bay.
Quinte is also a region comprising several communities situated along the Bay of Quinte, including Quinte West, Brighton and the City of Belleville, which is the largest city in the Quinte Region, and represents a midpoint between Montreal, Ottawa, and Toronto.
The Greater Bay of Quinte area includes the municipalities of Brighton, Quinte West, Belleville, Prince Edward County, and Greater Napanee as well as the Native Tyendinaga Mohawk Territory. Overall population of the area exceeds 200,000.
The Mohawks of the Bay of Quinte (Kenhtè:ke Kanyen'kehá:ka) live on traditional Tyendinaga Mohawk Territory. Their reserve (Band number 244), their current land base, covers 73 km² (18,000 acres) on the Bay of Quinte in southeastern Ontario, east of Belleville and immediately west of Deseronto.
The community takes its name from a variant spelling of Mohawk leader Joseph Brant's traditional Mohawk name, Thayendanegea (standardized spelling Thayentiné:ken), which means 'two pieces of fire wood beside each other'. Officially, in the Mohawk language, the community is called "Kenhtè:ke" (Tyendinaga), which means "on the bay", and was the birthplace of Tekanawí:ta. The Cayuga name is Tyendinaga, Tayęda:ne:gęˀ or Detgayę:da:negęˀ, "land of two logs."
The Quinte Region, specifically the City of Belleville, is home to Loyalist College of Applied Arts and Technology. Other post-secondary schools in the region include Maxwell College of Advanced Technology, CDI College, and Quinte Literacy. Secondary schools in the region include Albert College (private school) and Sir James Whitney (a school for the deaf and severely hearing-impaired).
The Bay of Quinte region is a hub for industry in eastern Ontario. The region is home to a diverse cluster of domestic and multinational manufacturing and logistics companies. Sectors include food processing, auto parts, plastics and packaging, consumer goods, and more. The region's proximity to North American markets, strong labour force, and competitive start-up and operating costs have attracted attention and new investment from companies around the globe. Industry in the Bay of Quinte region is supported by a workforce of over 11,000.
Investment attraction and industrial retention are supported regionally by the Quinte Economic Development Commission.
Just a few of over 350 industries located in the Bay of Quinte Region include:
44°09′N 77°15′W / 44.150°N 77.250°W / 44.150; -77.250
Bassoon

The bassoon is a musical instrument in the woodwind family, which plays in the tenor and bass ranges. It is composed of six pieces and is usually made of wood. It is known for its distinctive tone color, wide range, versatility, and virtuosity. It is a non-transposing instrument, and its music is typically written in the bass and tenor clefs, and sometimes in the treble. There are two forms of modern bassoon: the Buffet (or French) and Heckel (or German) systems. It is typically played while sitting using a seat strap, but can be played while standing if the player has a harness to hold the instrument. Sound is produced by rolling both lips over the reed and blowing air directly through it to make the reed vibrate. Its fingering system can be quite complex compared with those of other instruments. Appearing in its modern form in the 19th century, the bassoon figures prominently in orchestral, concert band, and chamber music literature, and is occasionally heard in pop, rock, and jazz settings as well. One who plays a bassoon is called a bassoonist.
The word bassoon comes from French basson and from Italian bassone (basso with the augmentative suffix -one). However, the Italian name for the same instrument is fagotto; in Spanish, Dutch, Czech, Polish and Romanian it is fagot, and in German Fagott. Fagot is an Old French word meaning a bundle of sticks, and the dulcian came to be known as fagotto in Italy. The usual etymology equating fagotto with "bundle of sticks" is nonetheless somewhat misleading, as that sense did not come into general use until later, although an early English variant, "faget", was used as early as 1450 to refer to firewood, a hundred years before the earliest recorded use of the dulcian (1550). Further citation is needed to establish whether the meaning "bundle of sticks" is in fact related to fagotto (Italian) or its variants. Some think that the instrument may resemble the Roman fasces, a standard of bound sticks with an axe. A further discrepancy lies in the fact that the dulcian was carved out of a single block of wood: in other words, a single "stick" and not a bundle.
The range of the bassoon begins at B♭1 (the first one below the bass staff) and extends upward over three octaves, roughly to the G above the treble staff (G5). However, writing for the bassoon rarely calls for notes above C5 or D5; even Stravinsky's opening solo in The Rite of Spring ascends only to D5. Notes higher than this are possible, but seldom written, as they are difficult to produce (often requiring specific reed design features to ensure reliability), and in any case are quite similar in timbre to the same pitches on the cor anglais, which can produce them with relative ease. The French bassoon has greater facility in the extreme high register, so repertoire written for it is somewhat likelier to include very high notes, although repertoire for the French system can be executed on the German system without alteration, and vice versa.
The extensive high register of the bassoon and its frequent role as a lyric tenor have meant that tenor clef is very commonly employed in its literature after the Baroque, partly to avoid excessive ledger lines, and, beginning in the 20th century, treble clef is also seen for similar reasons.
As with other woodwind instruments, the lowest note is fixed, but A1 is possible with a special extension to the instrument (see "Extended techniques" below).
Although the primary tone holes are pitched a perfect 5th lower than those of other non-transposing Western woodwinds (effectively an octave beneath the English horn), the bassoon is non-transposing, meaning that the notes sounded match the written pitch.
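Because the instrument is non-transposing, the written range maps directly onto concert-pitch frequencies. As a rough illustration, the following Python sketch converts the endpoints of the range; the conventions it relies on (scientific pitch notation, MIDI note numbers, and the A4 = 440 Hz reference) are general standards rather than anything specified in this article.

# Illustrative sketch: concert-pitch frequencies for the bassoon's range,
# assuming equal temperament and the standard A4 = 440 Hz reference.
NOTE_OFFSETS = {"C": 0, "Db": 1, "D": 2, "Eb": 3, "E": 4, "F": 5,
                "Gb": 6, "G": 7, "Ab": 8, "A": 9, "Bb": 10, "B": 11}

def frequency(note: str, octave: int, a4: float = 440.0) -> float:
    """Frequency of a note given in scientific pitch notation."""
    midi = 12 * (octave + 1) + NOTE_OFFSETS[note]  # C4 (middle C) = 60, A4 = 69
    return a4 * 2 ** ((midi - 69) / 12)

print(f"Bb1 (lowest standard note): {frequency('Bb', 1):.1f} Hz")   # ~58.3 Hz
print(f"G5 (top of the usual range): {frequency('G', 5):.1f} Hz")   # ~784.0 Hz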
The bassoon disassembles into six main pieces, including the reed: the bell (6), extending upward; the bass joint (or long joint) (5), connecting the bell and the boot; the boot (or butt) (4), at the bottom of the instrument and folding over on itself; the wing joint (or tenor joint) (3), which extends from boot to bocal; and the bocal (or crook) (2), a crooked metal tube that attaches the wing joint to the reed (1).
The bore of the bassoon is conical, like that of the oboe and the saxophone, and the two adjoining bores of the boot joint are connected at the bottom of the instrument with a U-shaped metal connector. Both bore and tone holes are precision-machined, and each instrument is finished by hand for proper tuning. The walls of the bassoon are thicker at various points along the bore; here, the tone holes are drilled at an angle to the axis of the bore, which reduces the distance between the holes on the exterior. This ensures coverage by the fingers of the average adult hand. Playing is facilitated by closing the distance between the widely spaced holes with a complex system of key work, which extends throughout nearly the entire length of the instrument. The bassoon stands 1.34 m (4 ft 5 in) tall, but its total sounding length is 2.54 m (8 ft 4 in), as the tube doubles back on itself. There are also short-reach bassoons made for the benefit of young or petite players.
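The quoted sounding length also accounts, to first order, for how so compact an instrument reaches so low a pitch. A back-of-the-envelope sketch follows; the formula is standard textbook acoustics for an idealized complete cone, not a claim from this article, and real bassoons deviate because of the truncated cone, reed, bocal, and bell.

# Idealized estimate only: a complete conical bore has resonances roughly
# like an open cylinder of the same length, with fundamental f = v / (2 * L).
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 °C (assumed)
SOUNDING_LENGTH = 2.54   # m, the figure given in the text

fundamental = SPEED_OF_SOUND / (2 * SOUNDING_LENGTH)
print(f"Idealized fundamental: {fundamental:.1f} Hz")  # ~67.5 Hz, in the
# neighbourhood of the actual lowest note, Bb1 (~58.3 Hz); the gap reflects
# everything the idealized model ignores.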
A modern beginner's bassoon is generally made of maple, with medium-hardness types such as sycamore maple and sugar maple preferred. Less-expensive models are also made of materials such as polypropylene and ebonite, primarily for student and outdoor use. Metal bassoons were made in the past but have not been produced by any major manufacturer since 1889.
The art of reed-making has been practiced for several hundred years, some of the earliest known reeds having been made for the dulcian, a predecessor of the bassoon. Reed-making today follows a common set of basic steps; however, individual bassoonists' playing styles vary greatly, and reeds must therefore be customized to best suit their respective bassoonist. Advanced players usually make their own reeds to this end. As for commercially made reeds, many companies and individuals offer pre-made reeds for sale, but players often find that such reeds still require adjustment to suit their particular playing style.
Modern bassoon reeds, made of Arundo donax cane, are often made by the players themselves, although beginner bassoonists tend to buy their reeds from professional reed makers or use reeds made by their teachers. Reeds begin with a length of tube cane that is split into three or four pieces using a tool called a cane splitter. The cane is then trimmed and gouged to the desired thickness, leaving the bark attached. After soaking, the gouged cane is cut to the proper shape and milled to the desired thickness, or profiled, by removing material from the bark side. This can be done by hand with a file; more frequently it is done with a machine or tool designed for the purpose. After the profiled cane has soaked once again, it is folded over in the middle. Prior to soaking, the reed maker will have lightly scored the bark with a knife in parallel lines; this ensures that the cane will assume a cylindrical shape during the forming stage.
On the bark portion, the reed maker binds on one, two, or three coils or loops of brass wire to aid in the final forming process. The exact placement of these loops can vary somewhat depending on the reed maker. The bound reed blank is then wrapped with thick cotton or linen thread to protect it, and a conical steel mandrel (which sometimes has been heated in a flame) is quickly inserted in between the blades. Using a special pair of pliers, the reed maker presses down the cane, making it conform to the shape of the mandrel. (The steam generated by the heated mandrel causes the cane to permanently assume the shape of the mandrel.) The upper portion of the cavity thus created is called the "throat", and its shape has an influence on the final playing characteristics of the reed. The lower, mostly cylindrical portion will be reamed out with a special tool called a reamer, allowing the reed to fit on the bocal.
After the reed has dried, the wires are tightened around the reed, which has shrunk after drying, or replaced completely. The lower part is sealed (a nitrocellulose-based cement such as Duco may be used) and then wrapped with thread to ensure both that no air leaks out through the bottom of the reed and that the reed maintains its shape. The wrapping itself is often sealed with Duco or clear nail varnish (polish). Electrical tape can also be used as a wrapping for amateur reed makers. The bulge in the wrapping is sometimes referred to as the "Turk's head"—it serves as a convenient handle when inserting the reed on the bocal. Alternatively, hot glue, epoxy, or heat shrink wrap may be used to seal the tube of the reed. The thread wrapping (commonly known as a "Turban" due to the criss-crossing fabric) is still more common in commercially sold reeds.
To finish the reed, the end of the reed blank, originally at the center of the unfolded piece of cane, is cut off, creating an opening. The blades above the first wire are now roughly 27–30 mm (1.1–1.2 in) long. For the reed to play, a slight bevel must be created at the tip with a knife, although there is also a machine that can perform this function. Other adjustments with the reed knife may be necessary, depending on the hardness, the profile of the cane, and the requirements of the player. The reed opening may also need to be adjusted by squeezing either the first or second wire with the pliers. Additional material may be removed from the sides (the "channels") or tip to balance the reed. Additionally, if the E in the bass clef staff is sagging in pitch, it may be necessary to "clip" the reed by removing 1–2 mm (0.039–0.079 in) from its length using a pair of very sharp scissors or the equivalent.
Music historians generally consider the dulcian to be the forerunner of the modern bassoon, as the two instruments share many characteristics: a double reed fitted to a metal crook, obliquely drilled tone holes and a conical bore that doubles back on itself. The origins of the dulcian are obscure, but by the mid-16th century it was available in as many as eight different sizes, from soprano to great bass. A full consort of dulcians was a rarity; its primary function seems to have been to provide the bass in the typical wind band of the time, either loud (shawms) or soft (recorders), indicating a remarkable ability to vary dynamics to suit the need. Otherwise, dulcian technique was rather primitive, with eight finger holes and two keys, indicating that it could play in only a limited number of key signatures.
Circumstantial evidence indicates that the baroque bassoon was a newly invented instrument, rather than a simple modification of the old dulcian. The dulcian was not immediately supplanted, but continued to be used well into the 18th century by Bach and others; and, presumably for reasons of interchangeability, repertoire from this time seldom goes beyond the smaller compass of the dulcian. The man most likely responsible for developing the true bassoon was Martin Hotteterre (d. 1712), who may also have invented the three-piece flûte traversière (transverse flute) and the hautbois (baroque oboe). Some historians believe that sometime in the 1650s, Hotteterre conceived the bassoon in four sections (bell, bass joint, boot and wing joint), an arrangement that allowed greater accuracy in machining the bore compared to the one-piece dulcian. He also extended the compass down to B♭ by adding two keys. An alternate view maintains that Hotteterre was one of several craftsmen responsible for the development of the early bassoon. These may have included additional members of the Hotteterre family, as well as other French makers active around the same time. No original French bassoon from this period survives, but if one did, it would most likely resemble the earliest extant bassoons of Johann Christoph Denner and Richard Haka from the 1680s. Sometime around 1700, a fourth key (G♯) was added, and it was for this type of instrument that composers such as Antonio Vivaldi, Bach, and Georg Philipp Telemann wrote their demanding music. A fifth key, for the low E♭, was added during the first half of the 18th century. Notable makers of the 4-key and 5-key baroque bassoon include J.H. Eichentopf (c. 1678–1769), J. Poerschmann (1680–1757), Thomas Stanesby, Jr. (1668–1734), G.H. Scherer (1703–1778), and Prudent Thieriot (1732–1786).
Increasing demands on capabilities of instruments and players in the 19th century—particularly larger concert halls requiring greater volume and the rise of virtuoso composer-performers—spurred further refinement. Increased sophistication, both in manufacturing techniques and acoustical knowledge, made possible great improvements in the instrument's playability.
The modern bassoon exists in two distinct primary forms, the Buffet (or "French") system and the Heckel ("German") system. Most of the world plays the Heckel system, while the Buffet system is primarily played in France, Belgium, and parts of Latin America. A number of other types of bassoons have been constructed by various instrument makers, such as the rare Galandronome. Owing to the ubiquity of the Heckel system in English-speaking countries, references in English to the contemporary bassoon always mean the Heckel system, with the Buffet system being explicitly qualified where it appears.
The design of the modern bassoon owes a great deal to the performer, teacher, and composer Carl Almenräder. Assisted by the German acoustic researcher Gottfried Weber, he developed the 17-key bassoon with a range spanning four octaves. Almenräder's improvements to the bassoon began with an 1823 treatise describing ways of improving intonation, response, and technical ease of playing by augmenting and rearranging the keywork. Subsequent articles further developed his ideas. His employment at Schott gave him the freedom to construct and test instruments according to these new designs, and he published the results in Caecilia, Schott's house journal. Almenräder continued publishing and building instruments until his death in 1846, and Ludwig van Beethoven himself requested one of the newly made instruments after hearing of the papers. In 1831, Almenräder left Schott to start his own factory with a partner, Johann Adam Heckel.
Heckel and two generations of descendants continued to refine the bassoon, and their instruments became the standard, with other makers following. Because of their superior singing tone quality (an improvement upon one of the main drawbacks of the Almenräder instruments), the Heckel instruments competed for prominence with the reformed Wiener system, a Boehm-style bassoon, and a completely keyed instrument devised by Charles-Joseph Sax, father of Adolphe Sax. F.W. Kruspe implemented a latecomer attempt in 1893 to reform the fingering system, but it failed to catch on. Other attempts to improve the instrument included a 24-keyed model and a single-reed mouthpiece, but both these had adverse effects on tone and were abandoned.
Coming into the 20th century, the Heckel-style German model of bassoon dominated the field. Heckel himself had made over 1,100 instruments by the turn of the 20th century (serial numbers begin at 3,000), and the British makers' instruments were no longer desirable for the changing pitch requirements of the symphony orchestra, remaining primarily in military band use.
Except for a brief 1940s wartime conversion to ball bearing manufacture, the Heckel concern has produced instruments continuously to the present day. Heckel bassoons are considered by many to be the best, although a range of Heckel-style instruments is available from several other manufacturers, all with slightly different playing characteristics.
Because its mechanism is primitive compared to most modern woodwinds, makers have occasionally attempted to "reinvent" the bassoon. In the 1960s, Giles Brindley began to develop what he called the "logical bassoon", which aimed to improve intonation and evenness of tone through use of an electrically activated mechanism, making possible key combinations too complex for the human hand to manage. Brindley's logical bassoon was never marketed.
The Buffet system bassoon achieved its basic acoustical properties somewhat earlier than the Heckel. Thereafter, it continued to develop in a more conservative manner. While the early history of the Heckel bassoon included a complete overhaul of the instrument in both acoustics and key work, the development of the Buffet system consisted primarily of incremental improvements to the key work. This minimalist approach deprived the Buffet of the improved consistency of intonation, ease of operation, and increased power found in Heckel bassoons, but the Buffet is considered by some to have a more vocal and expressive quality. The conductor John Foulds lamented in 1934 the dominance of the Heckel-style bassoon, considering it too homogeneous in sound with the horn. The modern Buffet system has 22 keys, and its range is the same as the Heckel's, although Buffet instruments have greater facility in the upper registers, reaching E5 and F5 with far greater ease and less air resistance.
Compared to the Heckel bassoon, Buffet system bassoons have a narrower bore and simpler mechanism, requiring different, and often more complex fingerings for many notes. Switching between Heckel and Buffet, or vice versa, requires extensive retraining. French woodwind instruments' tone in general exhibits a certain amount of "edge", with more of a vocal quality than is usual elsewhere, and the Buffet bassoon is no exception. This sound has been utilised effectively in writing for Buffet bassoon, but is less inclined to blend than the tone of the Heckel bassoon. As with all bassoons, the tone varies considerably, depending on individual instrument, reed, and performer. In the hands of a lesser player, the Heckel bassoon can sound flat and woody, but good players succeed in producing a vibrant, singing tone. Conversely, a poorly played Buffet can sound buzzy and nasal, but good players succeed in producing a warm, expressive sound.
Though the United Kingdom once favored the French system, Buffet-system instruments are no longer made there and the last prominent British player of the French system retired in the 1980s. However, with continued use in some regions and its distinctive tone, the Buffet continues to have a place in modern bassoon playing, particularly in France, where it originated. Buffet-model bassoons are currently made in Paris by Buffet Crampon and the atelier Ducasse (Romainville, France). The Selmer Company stopped fabrication of French system bassoons around the year 2012. Some players, for example the late Gerald Corey in Canada, have learned to play both types and will alternate between them depending on the repertoire.
Before 1760, the bassoon's early ancestor, the dulcian, was used to reinforce the bass line in wind ensembles called consorts. Its use in concert orchestras, however, was sporadic until the late 17th century, when double reeds began to make their way into standard instrumentation. Increasing use of the dulcian as a basso continuo instrument meant that it began to be included in opera orchestras, in works such as those by Reinhard Keiser and Jean-Baptiste Lully. Meanwhile, as the dulcian advanced technologically and was able to achieve more virtuosity, composers such as Joseph Bodin de Boismortier, Johann Ernst Galliard, Johann Friedrich Fasch and Georg Philipp Telemann wrote demanding solo and ensemble music for the instrument. Antonio Vivaldi brought it to prominence by featuring it in thirty-nine concerti.
While the bassoon was still often used to give clarity to the bass line thanks to its sonorous low register, the capabilities of wind instruments grew as technology advanced during the Classical era, allowing the instrument to play in more keys than the dulcian. Joseph Haydn took advantage of this in his Symphony No. 45 ("Farewell Symphony"), in which the bassoon plays in F-sharp minor. Following these advances, composers also began to exploit the bassoon for its unique color, flexibility, and virtuosic ability, rather than merely for doubling the bass line. Those who did so include Ludwig van Beethoven in his three Duos for Clarinet and Bassoon (WoO 27) and Niccolò Paganini in his duets for violin and bassoon. In his Bassoon Concerto in B-flat major, K. 191, W. A. Mozart utilized all aspects of the bassoon's expressiveness, with its contrasts in register, staccato playing, and expressive sound; the work was especially noted for its singing quality in the second movement. This concerto is often considered one of the most important works in all of the bassoon's repertoire, even today.
The bassoon's similarity to the human voice, in addition to its newfound virtuosic ability, was another quality many composers took advantage of during the Classical era. After 1730, the German bassoon's range extended up to B♭4, and much higher on the French instrument. Technological advances also made the bassoon's tenor register more resonant, and playing in this register grew in popularity, especially in the Austro-Germanic musical world. Pedagogues such as Josef Frohlich instructed students to practice scales, thirds, and fourths as vocal students would. In 1829, he wrote that the bassoon was capable of expressing "the worthy, the virile, the solemn, the great, the sublime, composure, mildness, intimacy, emotion, longing, heartfulness, reverence, and soulful ardour." In G.F. Brandt's performance of Carl Maria von Weber's Concerto for Bassoon in F Major, Op. 75 (J. 127), it was also likened to the human voice. In France, Pierre Cugnier described the bassoon's role as encompassing not only the bass part, but also accompanying the voice and harp, playing in pairs with clarinets and horns in Harmonie, and appearing in "nearly all types of music," including concerti, which were much more common than the sonatas of the previous era. Both Cugnier and Étienne Ozi emphasized the importance of the bassoon's similarity to the singing voice.
The role of the bassoon in the orchestra varied depending on the country. In the Viennese orchestra, the instrument offered a three-dimensional sound to the ensemble by doubling other instruments such as violins, as heard in Mozart's overture to The Marriage of Figaro, K. 492, where it plays a rather technical part alongside the strings. Mozart also wrote for the bassoon to change its timbre depending on which instrument it was paired with: warmer with clarinets, hollow with flutes, and dark and dignified with violins. In Germany and Scandinavian countries, orchestras typically featured only two bassoons, but in France, orchestras increased the number to four in the latter half of the nineteenth century. In England, the bassoonist's role varied depending on the ensemble. Johann Christian Bach wrote two concertos for solo bassoon, and it also appeared in more supportive roles, such as accompanying church choirs after the Puritan revolution destroyed most church organs. In the American colonies, the bassoon was typically seen in a chamber setting. After the Revolutionary War, bassoonists were found in wind bands that gave public performances. By 1800, there was at least one bassoon in the United States Marine Band. In South America, the bassoon also appeared in small orchestras, bands, and military musique (similar to Harmonie ensembles).
The role of the bassoon during the Romantic era varied between that of a supportive bass instrument and that of a virtuosic, expressive solo instrument. In fact, it was considered an instrument that could be used in almost any circumstance. The comparison of the bassoon's sound to the human voice continued during this time, as much of the pedagogy centered on emulating this sound. Giuseppe Verdi used the instrument's lyrical, singing voice to evoke emotion in pieces such as his Messa da Requiem. Eugène Jancourt compared the use of vibrato on the bassoon to that of singers, and Luigi Orselli wrote that the bassoon blended well with the human voice. Orselli also noted the function of the bassoon in the French orchestra at the time, which was to support the sound of the viola, reinforce staccato passages, and double the bass, clarinet, flute, and oboe. Emphasis also began to be placed on the unique sound of the bassoon's staccato, which might be described as quite short and aggressive, as in the fifth movement of Hector Berlioz's Symphonie fantastique, Op. 14. Paul Dukas utilized the staccato to depict the image of two brooms coming to life in The Sorcerer's Apprentice.
It was common for German orchestras to have only two bassoons. Austrian and British military bands likewise carried only two bassoons, which were mainly used for accompaniment and offbeat playing. In France, Hector Berlioz made it fashionable to use more than two bassoons; he often scored for three or four, and at times wrote for up to eight, as in his L'Impériale.
At this point, composers expected bassoons to be as virtuosic as the other wind instruments, and often wrote solos challenging the range and technique of the instrument. Examples include Nikolai Rimsky-Korsakov's bassoon solo and cadenza following the clarinet in Sheherazade, Op. 35, and Richard Wagner's Tannhäuser, which requires the bassoonist to triple-tongue and to play up to the top of the range, an E5. Wagner also used the bassoon for its staccato ability, and often wrote his three bassoon parts in thirds to evoke a darker sound with noticeable tone color. In Modest Mussorgsky's Night on Bald Mountain, the bassoons play fortissimo alongside other bass instruments in order to evoke "the voice of the Devil."
By the twentieth century, the development of the bassoon had slowed. Rather than large leaps in technology, improvements consisted of correcting tiny imperfections in the instrument's function. The instrument became quite versatile over the century: it could play three octaves and a variety of trills while maintaining stable intonation across all registers and dynamic levels. The pedagogy among bassoonists varied among different countries, and so the instrument played a variety of roles. As in previous eras, the bassoon was valued by composers for its unique voice, and it was written for ever higher in pitch. A famous example of this is Igor Stravinsky's Rite of Spring, in which the bassoon must play in its highest register in order to mimic the Russian dudka. Composers also wrote for the bassoon's middle register, such as in Stravinsky's "Berceuse" in The Firebird and Jean Sibelius's Symphony No. 5 in E-flat major, Op. 82. They also continued to highlight the staccato sound of the bassoon, as heard in Sergei Prokofiev's Humorous Scherzo. In Prokofiev's Peter and the Wolf, the part of the grandfather is played by the bassoon.
In orchestral settings, most orchestras from the beginning of the twentieth century to the present have three or four bassoonists, with the fourth typically covering contrabassoon as well. Greater emphasis on the use of timbre, vibrato, and phrasing began to appear in bassoon pedagogy, and many followed Marcel Tabuteau's philosophy on musical phrasing. Vibrato began to be used in ensemble playing, depending on the phrasing of the music. The bassoon was, and currently is, expected to be fluent with other woodwinds in terms of virtuosity and technique. Examples of this include the cadenza for bassoons in Maurice Ravel's Rapsodie espagnole and the multi-finger trills used in Stravinsky's Octet.
In the twentieth century, the bassoon appeared less often as a concerto soloist, and when it did, the accompanying ensemble was made lighter and quieter. In addition, it was no longer used in marching bands, though it remained in concert bands, which typically include one or two. Orchestral repertoire remained rooted in the Austro-Germanic tradition throughout most Western countries, and the instrument mostly appeared in solo, chamber, and symphonic settings. By the mid-1900s, broadcasting and recording grew in popularity, allowing new opportunities for bassoonists and leading to a slow decline in live performances. Much of the new music for bassoon in the late twentieth and early twenty-first centuries included extended techniques and was written for solo or chamber settings. One piece that included extended techniques was Luciano Berio's Sequenza XII, which calls for microtonal fingerings, glissandos, and timbral trills. Double and triple tonguing, flutter tonguing, multiphonics, quarter-tones, and singing are all utilized in Bruno Bartolozzi's Concertazioni. A variety of concerti and pieces for bassoon and piano were also written, such as John Williams's Five Sacred Trees and André Previn's Sonata for bassoon and piano. There were also "performance" pieces such as Peter Schickele's Sonata Abassoonata, which requires the bassoonist to be both a musician and an actor. The bassoon quartet became prominent at this time, with pieces such as Daniel Dorff's It Takes Four to Tango.
The bassoon is infrequently used as a jazz instrument and rarely seen in a jazz ensemble. It first began appearing in the 1920s, when Garvin Bushell began incorporating the bassoon in his performances. Specific calls for its use occurred in Paul Whiteman's group, the unusual octets of Alec Wilder, and a few other session appearances. The next few decades saw the instrument used only sporadically, as symphonic jazz fell out of favor, but the 1960s saw artists such as Yusef Lateef and Chick Corea incorporate bassoon into their recordings. Lateef's diverse and eclectic instrumentation saw the bassoon as a natural addition (see, e.g., The Centaur and the Phoenix (1960) which features bassoon as part of a 6-man horn section, including a few solos) while Corea employed the bassoon in combination with flautist Hubert Laws.
More recently, Illinois Jacquet, Ray Pizzi, Frank Tiberi, and Marshall Allen have all doubled on bassoon in addition to their saxophone performances. Bassoonist Karen Borca, a performer of free jazz, is one of the few jazz musicians to play only bassoon; Michael Rabinowitz, the Spanish bassoonist Javier Abad, and James Lassen, an American resident in Bergen, Norway, are others. Katherine Young plays the bassoon in the ensembles of Anthony Braxton. Lindsay Cooper, Paul Hanson, the Brazilian bassoonist Alexandre Silvério, Trent Jacobs and Daniel Smith are also currently using the bassoon in jazz. French bassoonists Jean-Jacques Decreux and Alexandre Ouzounoff have both recorded jazz, exploiting the flexibility of the Buffet system instrument to good effect.
In conjunction with the use of electronic pickups and amplification, the instrument began to be used somewhat more in jazz and rock settings. However, the bassoon is still quite rare as a regular member of rock bands. Several 1960s pop music hits feature the bassoon, including "The Tears of a Clown" by Smokey Robinson and the Miracles (the bassoonist was Charles R. Sirard), "Jennifer Juniper" by Donovan, "59th Street Bridge Song" by Harpers Bizarre, and the oompah bassoon underlying The New Vaudeville Band's "Winchester Cathedral". From 1974 to 1978, the bassoon was played by Lindsay Cooper in the British avant-garde band Henry Cow. The Leonard Nimoy song "The Ballad of Bilbo Baggins" features the bassoon. In the 1970s it was played in the British medieval/progressive rock band Gryphon by Brian Gulland, as well as by the American band Ambrosia, where it was played by drummer Burleigh Drummond. The Belgian Rock in Opposition band Univers Zero is also known for its use of the bassoon.
More recently, These New Puritans' 2010 album Hidden makes heavy use of the instrument throughout; their principal songwriter, Jack Barnett, claimed repeatedly to be "writing a lot of music for bassoon" in the run-up to its recording. The rock band Better Than Ezra took their name from a passage in Ernest Hemingway's A Moveable Feast in which the author comments that listening to an annoyingly talkative person is still "better than Ezra learning how to play the bassoon", referring to Ezra Pound.
British psychedelic/progressive rock band Knifeworld features the bassoon playing of Chloe Herrington, who also plays for experimental chamber rock orchestra Chrome Hoof.
Fiona Apple featured the bassoon in the opening track of her 2004 album Extraordinary Machine.
In 2016, the bassoon was featured on the album Gang Signs and Prayers by UK "grime" artist Stormzy. Played by UK bassoonist Louise Watson, the bassoon is heard in the tracks "Cold" and "Mr Skeng" as a complement to the electronic synthesizer bass lines typically found in this genre.
The Cartoon Network animated series Over the Garden Wall features a bassoon in episode 6 entitled "Lullaby in Frogland", where the main character is encouraged to play the bassoon to impress a group of frogs.
The character Jan Bellows in the Hulu series Only Murders in the Building is a professional bassoonist.
The bassoon is held diagonally in front of the player, but unlike the flute, oboe and clarinet, it cannot be easily supported by the player's hands alone. Some means of additional support is usually required; the most common ones are a seat strap attached to the base of the boot joint, which is laid across the chair seat prior to sitting down, or a neck strap or shoulder harness attached to the top of the boot joint. Occasionally a spike similar to those used for the cello or the bass clarinet is attached to the bottom of the boot joint and rests on the floor. It is possible to play while standing up if the player uses a neck strap or similar harness, or if the seat strap is tied to the belt. Sometimes a device called a balance hanger is used when playing in a standing position. This is installed between the instrument and the neck strap, and shifts the point of support closer to the center of gravity, adjusting the distribution of weight between the two hands.
The bassoon is played with both hands in a stationary position, the left above the right, with five main finger holes on the front of the instrument (nearest the audience) plus a sixth that is activated by an open-standing key. Five additional keys on the front are controlled by the little fingers of each hand. The back of the instrument (nearest the player) has twelve or more keys to be controlled by the thumbs, the exact number varying depending on model.
To stabilize the right hand, many bassoonists use an adjustable comma-shaped apparatus called a "crutch", or a hand rest, which mounts to the boot joint. The crutch is secured with a thumb screw, which also allows the distance that it protrudes from the bassoon to be adjusted. Players rest the curve of the right hand where the thumb joins the palm against the crutch. The crutch also keeps the right hand from tiring and enables the player to keep the finger pads flat on the finger holes and keys.
An aspect of bassoon technique not found on any other woodwind is called flicking. It involves the left-hand thumb momentarily pressing, or "flicking", the high A, C and D keys at the beginning of certain notes in the middle octave to achieve a clean slur from a lower note. This eliminates the cracking, or brief multiphonics, that otherwise occurs at the start of the note. A similar alternative method, called "venting", requires that the register key be used as part of the full fingering, as opposed to being opened momentarily at the start of the note. This is sometimes called the "European style"; venting raises the intonation of the notes slightly, which can be advantageous when tuning to higher frequencies. Some bassoonists flick A and B♭ when tongued, for clarity of articulation, but flicking (or venting) is practically ubiquitous for slurs.
While flicking is used to slur up to higher notes, the whisper key is used for lower notes. From the A♭ right below middle C downward, the whisper key is pressed with the left thumb and held for the duration of the note. This prevents cracking, as low notes can sometimes crack into a higher octave. Both flicking and using the whisper key are especially important to ensure that notes speak properly when slurring between high and low registers.
While bassoons are usually critically tuned at the factory, the player nonetheless has a great degree of pitch control through the use of breath support, embouchure, and reed profile. Players can also use alternate fingerings to adjust the pitch of many notes. As on other woodwind instruments, the length of the bassoon can be increased to lower the pitch or decreased to raise it. On the bassoon, this is preferably done by changing the bocal to one of a different length (lengths are denoted by a number on the bocal, usually starting at 0 for the shortest length and 3 for the longest, though some manufacturers use other numbers), but it is possible to push the bocal in or out slightly to grossly adjust the pitch.
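To get a feel for how small these bocal-length adjustments are relative to the whole instrument, here is a rough first-order model. It is my own illustration, built on the assumption that pitch scales inversely with total sounding length; the 5 mm step between bocal sizes is a hypothetical figure chosen for illustration, not a manufacturer's specification.

import math

# First-order sketch: if pitch varies inversely with total sounding length,
# a slightly longer bocal flattens every note by the same number of cents.
TOTAL_LENGTH_MM = 2540.0  # about 2.54 m sounding length, per the text
BOCAL_STEP_MM = 5.0       # hypothetical length difference between bocal sizes

ratio = TOTAL_LENGTH_MM / (TOTAL_LENGTH_MM + BOCAL_STEP_MM)
cents = 1200 * math.log2(ratio)
print(f"Approximate pitch change: {cents:.1f} cents")  # about -3.4 cents (flatter)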
The bassoon embouchure is a very important aspect of producing a full, round, and rich sound on the instrument. The lips are both rolled over the teeth, often with the upper lip further along in an "overbite". The lips provide micromuscular pressure on the entire circumference of the reed, which grossly controls intonation and harmonic excitement, and thus must be constantly modulated with every change of note. How far along the reed the lips are placed affects both tone (with less reed in the mouth making the sound more edged or "reedy", and more reed making it smooth and less projectile) and the way the reed will respond to pressure.
The musculature employed in a bassoon embouchure is primarily around the lips, which pressure the reed into the shapes needed for the desired sound. The jaw is raised or lowered to adjust the oral cavity for better reed control, but the jaw muscles are used much less for upward vertical pressure than with single reeds, being substantially employed only in the very high register. Double-reed students often "bite" the reed with these muscles, however, because the control and tone of the labial and other muscles are still developing; this generally makes the sound sharp and "choked", as it contracts the aperture of the reed and stifles the vibration of its blades.
Apart from the embouchure proper, students must also develop substantial muscle tone and control in the diaphragm, throat, neck and upper chest, which are all employed to increase and direct air pressure. Air pressure is a very important aspect of the tone, intonation and projection of double reed instruments, affecting these qualities as much, or more than the embouchure does.
Attacking a note on the bassoon with imprecise amounts of muscle or air pressure for the desired pitch will result in poor intonation, cracking or multiphonics, accidental production of an incorrect partial, or the reed not speaking at all. These problems are compounded by the individual qualities of reeds, which are categorically inconsistent in behaviour for both inherent and extrinsic reasons.
The muscle requirements and variability of reeds mean it takes some time for bassoonists (and oboists) to develop an embouchure that exhibits consistent control across all reeds, dynamics and playing environments.
The fingering technique of the bassoon varies more between players, by a wide margin, than that of any other orchestral woodwind. The complex mechanism and acoustics mean the bassoon lacks simple fingerings of good sound quality or intonation for some notes (especially in the higher range), but, conversely, there is a great variety of superior, but generally more complicated, fingerings for them. Typically, the simpler fingerings for such notes are used as alternate or trill fingerings, and the bassoonist will use as "full fingering" one or several of the more complex executions possible, for optimal sound quality. The fingerings used are at the discretion of the bassoonist, and, for particular passages, he or she may experiment to find new alternate fingerings that are thus idiomatic to the player.
These elements have resulted in both "full" and alternate fingerings differing extensively between bassoonists, with the differences further shaped by factors such as cultural preferences in sound, reed-making practice, and regional variation in tuning frequencies (necessitating sharper or flatter fingerings). Regional enclaves of bassoonists tend towards some uniformity of technique, but on a global scale technique differs such that two given bassoonists may share no fingerings for certain notes. Owing to these factors, a universal bassoon technique can only be partially notated.
The left thumb operates nine keys: B♭1, B1, C2, D2, D5, C5 (also B4), two keys that combine to create A4, and the whisper key. The whisper key should be held down for notes between and including F2 and G♯3, and for certain other notes; it can be omitted, but the pitch will destabilise. Additional notes can be created with the left thumb keys: the D2 key and the bottom key above the whisper key on the tenor joint (the C♯ key) together create both C♯3 and C♯4. The same bottom tenor-joint key is also used, with additional fingering, to create E5 and F5. D5 and C5 together create C♯5. When the two tenor-joint keys that create A4 are used with slightly altered fingering on the boot joint, B♭4 is created. The whisper key may also be used at certain points throughout the instrument's high register, along with other fingerings, to alter sound quality as desired.
The right thumb operates four keys. The uppermost key is used to produce B♭2 and B♭3, and may be used in B4, F♯4, C5, D5, F5, and E♭5. The large circular key, known as the "pancake key", is held down for all the lowest notes from E2 down to B♭1. It is also used, like the whisper key, in additional fingerings for muting the sound. For example, in Ravel's "Boléro" the bassoon is asked to play the ostinato on G4. This is easy to perform with the normal fingering for G4, but Ravel directs the player also to depress the E2 (pancake) key to mute the sound (this was written with the Buffet system in mind, on which the G fingering involves the B♭ key; the fingering is sometimes called the "French" G on the Heckel system). The next key operated by the right thumb is known as the "spatula key": its primary use is to produce F♯2 and F♯3. The lowermost key is used less often: it produces A♭2 (G♯2) and A♭3 (G♯3) in a manner that avoids sliding the right fourth finger from another note.
The four fingers of the left hand can each be used in two different positions. The key normally operated by the index finger is primarily used for E5, and also serves for trills in the lower register. The finger's main assignment is the upper tone hole, which can be closed fully, or partially by rolling down the finger; this half-holing technique is used to overblow F♯3, G3, and G♯3. The middle finger typically stays on the centre hole on the tenor joint, but can also move to a lever used for E♭5, also a trill key. The ring finger operates, on most models, one key. Some bassoons have an alternate E♭ key above the tone hole, predominantly for trills, but many do not. The smallest finger operates two side keys on the bass joint. The lower key is typically used for C♯2, but can also be used for muting or flattening notes in the tenor register. The upper key is used for E♭2, E4, F4, F♯4, A4, B♭4, B4, C5, C♯5, and D5; it flattens G3, and is the standard fingering for that note in many places that tune to lower pitch standards such as A4 = 440 Hz.
The four fingers of the right hand have at least one assignment each. The index finger stays over one hole, except that when E♭5 is played a side key at the top of the boot is used (this key also provides a C♯3 trill, albeit sharp on D). The middle finger remains stationary over the hole with a ring around it; this ring and other pads are lifted when the smallest finger of the right hand pushes a lever. The ring finger typically remains stationary on the lower ring-finger key. However, the upper ring-finger key can be used, typically for B♭2 and B♭3, in place of the top thumb key on the front of the boot joint; this key derives from the oboe, and some bassoons omit it because the thumb fingering is practically universal. The smallest finger operates three keys. The backmost one, closest to the bassoonist, is held down throughout most of the bass register. F♯4 may be created with this key, as may G4, B♭4, B4, and C5 (the latter three using it alone to flatten and stabilise the pitch). The lowest key for the smallest finger of the right hand is primarily used for A♭2 (G♯2) and A♭3 (G♯3), but can be used to improve D5, E♭5, and F5. The frontmost key is used, in addition to the thumb key, to create G♭2 and G♭3; on many bassoons this key operates a different tone hole from the thumb key and produces a slightly flatter F♯ ("duplicated F♯"). Some techniques use one as standard for both octaves and the other for utility, while others use the thumb key for the lower octave and the fourth finger for the higher.
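The thumb and finger assignments above amount to a mapping from notes to combinations of named keys. The sketch below shows one way such a chart might be tabulated in software; the key labels and the combinations themselves are illustrative placeholders rather than an authoritative chart, since, as noted earlier, fingerings vary widely between players, and a real chart would hold several entries per note (full, alternate, and trill fingerings).

```python
# Partial, simplified fingering table. Key names ("whisper", "pancake",
# "L1"..."R3" for the main finger holes) and the combinations shown are
# placeholders for illustration, not a definitive chart.
FINGERINGS: dict[str, frozenset[str]] = {
    "Bb1": frozenset({"whisper", "pancake", "L1", "L2", "L3",
                      "R1", "R2", "R3", "Bb_thumb"}),
    "F2": frozenset({"whisper", "L1", "L2", "L3", "R1", "R2", "R3"}),
    "G4": frozenset({"L2", "L3"}),
}

def keys_for(note: str) -> frozenset[str]:
    """Look up the recorded key combination for a note name."""
    try:
        return FINGERINGS[note]
    except KeyError:
        raise ValueError(f"no fingering recorded for {note}") from None
```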
Many extended techniques can be performed on the bassoon, such as multiphonics, flutter-tonguing, circular breathing, double tonguing, and harmonics. On the bassoon, flutter-tonguing may be accomplished by "gargling" in the back of the throat as well as by the conventional method of rolling Rs. Multiphonics are plentiful, and can be achieved by using particular alternative fingerings, though they are heavily influenced by embouchure position. Certain fingerings can also produce notes below the instrument's nominal range; these tend to sound very gravelly and out of tune, but they do sound below the low B♭.
The bassoonist may also produce notes lower than the bottom B♭ by extending the length of the bell. This is usually achieved by inserting a specially made "low A extension" into the bell, but it can also be done with a small paper or rubber tube or a clarinet or cor anglais bell seated inside the bassoon bell (although the note may tend sharp). The effect is to convert the low B♭ into a lower note, almost always A natural; this broadly lowers the pitch of the instrument (most noticeably in the lower register) and will often accordingly turn the lowest B into B♭ (and render the neighbouring C very flat). The use of the low A originated with Richard Wagner, who wanted to extend the range of the bassoon downward. Many passages in his later operas require the low A as well as the B♭ immediately above it; this is possible on a normal bassoon using an extension that also flattens the low B to B♭, but all extensions to the bell significantly affect intonation and sound quality in the bottom register, and such passages are more often realised, with comparative ease, by the contrabassoon.
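In twelve-tone equal temperament at A4 = 440 Hz, the semitone added by the extension can be quantified directly. A short worked sketch follows, using MIDI note numbers (where 69 = A4) purely for convenience:

```python
def et_freq(midi_note: int, a4_hz: float = 440.0) -> float:
    """Frequency in 12-tone equal temperament; MIDI note 69 is A4."""
    return a4_hz * 2.0 ** ((midi_note - 69) / 12.0)

# Standard bottom note Bb1 (MIDI 34) versus the extension's A1 (MIDI 33):
print(f"Bb1 = {et_freq(34):.2f} Hz")  # ~58.27 Hz
print(f"A1  = {et_freq(33):.2f} Hz")  # exactly 55.00 Hz
```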
Some bassoons have been specially made to allow bassoonists to realise such passages. These instruments have a "Wagner bell", an extended bell with keys for both the low A and the low B♭, but they are not widespread: bassoons with Wagner bells suffer intonation problems similar to those of a bassoon with an ordinary A extension, and an instrument must be constructed specifically to accommodate one, which makes the removable extension the far less complicated option. Extending the bassoon's range below the A, though possible, would have even stronger effects on pitch and would make the instrument effectively unusable.
Despite the logistical difficulties of the note, Wagner was not the only composer to write the low A. Gustav Mahler also requires the bassoon to be chromatic down to low A, and Richard Strauss calls for the note in his opera Intermezzo. Some works make it optional, as in Carl Nielsen's Wind Quintet, Op. 43, which includes an optional low A at the final cadence.
The complex fingering system and the expense and lack of access to quality bassoon reeds can make the bassoon more of a challenge to learn than some of the other woodwind instruments. Cost is another factor in a person's decision to pursue the bassoon. Prices may range from US$7,000 to over $45,000 for a high-quality instrument. In North America, schoolchildren may take up bassoon only after starting on another reed instrument, such as clarinet or saxophone.
Students in America often begin studying bassoon performance and technique in the middle years of their music education, frequently in association with their school band program. They are often provided with a school instrument and encouraged to pursue lessons with private instructors, and typically receive instruction in proper posture, hand position, embouchure, repertoire, and tone production.
"title": "Technique"
},
{
"paragraph_id": 67,
"text": "Students in America often begin to pursue the study of bassoon performance and technique in the middle years of their music education, often in association with their school band program. Students are often provided with a school instrument and encouraged to pursue lessons with private instructors. Students typically receive instruction in proper posture, hand position, embouchure, repertoire, and tone production.",
"title": "Technique"
}
] | The bassoon is a musical instrument in the woodwind family, which plays in the tenor and bass ranges. It is composed of six pieces, and is usually made of wood. It is known for its distinctive tone color, wide range, versatility, and virtuosity. It is a non-transposing instrument and typically its music is written in the bass and tenor clefs, and sometimes in the treble. There are two forms of modern bassoon: the Buffet and Heckel systems. It is typically played while sitting using a seat strap, but can be played while standing if the player has a harness to hold the instrument. Sound is produced by rolling both lips over the reed and blowing direct air pressure to cause the reed to vibrate. Its fingering system can be quite complex when compared to those of other instruments. Appearing in its modern form in the 19th century, the bassoon figures prominently in orchestral, concert band, and chamber music literature, and is occasionally heard in pop, rock, and jazz settings as well. One who plays a bassoon is called a bassoonist. | 2001-09-18T12:30:09Z | 2023-12-08T21:07:21Z | [
"Template:Short description",
"Template:Music",
"Template:ISSN",
"Template:Sfn",
"Template:ISBN",
"Template:Cite journal",
"Template:Full citation needed",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Woodwinds",
"Template:Listen",
"Template:Convert",
"Template:Cite book",
"Template:Wikisource1911Enc",
"Template:Bass (sound)",
"Template:More citations needed",
"Template:Infobox Instrument",
"Template:Lang",
"Template:Cite web",
"Template:Commons category",
"Template:Double reed",
"Template:Audio",
"Template:Reflist",
"Template:Cite dictionary"
] | https://en.wikipedia.org/wiki/Bassoon |
4,210 | Bipedalism | Bipedalism is a form of terrestrial locomotion where a tetrapod moves by means of its two rear (or lower) limbs or legs. An animal or machine that usually moves in a bipedal manner is known as a biped /ˈbaɪpɛd/, meaning 'two feet' (from Latin bis 'double' and pes 'foot'). Types of bipedal movement include walking or running (a bipedal gait) and hopping.
Several groups of modern species are habitual bipeds whose normal method of locomotion is two-legged. In the Triassic period some groups of archosaurs (a group that includes crocodiles and dinosaurs) developed bipedalism; among the dinosaurs, all the early forms and many later groups were habitual or exclusive bipeds; the birds are members of a clade of exclusively bipedal dinosaurs, the theropods. Within mammals, habitual bipedalism has evolved multiple times, with the macropods, kangaroo rats and mice, springhare, hopping mice, pangolins and hominin apes (australopithecines, including humans) as well as various other extinct groups evolving the trait independently. A larger number of modern species intermittently or briefly use a bipedal gait. Several lizard species move bipedally when running, usually to escape from threats. Many primate and bear species will adopt a bipedal gait in order to reach food or explore their environment, though there are a few cases where they walk on their hind limbs only. Several arboreal primate species, such as gibbons and indriids, exclusively walk on two legs during the brief periods they spend on the ground. Many animals rear up on their hind legs while fighting or copulating. Some animals commonly stand on their hind legs to reach food, keep watch, threaten a competitor or predator, or pose in courtship, but do not move bipedally.
The word is derived from the Latin words bi(s) 'two' and ped- 'foot', as contrasted with quadruped 'four feet'.
Limited and exclusive bipedalism can offer a species several advantages. Bipedalism raises the head; this allows a greater field of vision with improved detection of distant dangers or resources, access to deeper water for wading animals and allows the animals to reach higher food sources with their mouths. While upright, non-locomotory limbs become free for other uses, including manipulation (in primates and rodents), flight (in birds), digging (in the giant pangolin), combat (in bears, great apes and the large monitor lizard) or camouflage.
The maximum bipedal speed appears slower than the maximum speed of quadrupedal movement with a flexible backbone – both the ostrich and the red kangaroo can reach speeds of 70 km/h (43 mph), while the cheetah can exceed 100 km/h (62 mph). Although bipedalism is slower over short distances, it has allowed humans to outrun most other animals over long distances, according to the endurance running hypothesis. Bipedality in kangaroo rats has been hypothesized to improve locomotor performance, which could aid in escaping from predators.
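As a quick arithmetic check on the parenthetical figures (using 1 km ≈ 0.6214 mi):

$$70\ \text{km/h} \times 0.6214 \approx 43.5\ \text{mph}, \qquad 100\ \text{km/h} \times 0.6214 \approx 62.1\ \text{mph},$$

matching the rounded values quoted above.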
Zoologists often label behaviors, including bipedalism, as "facultative" (i.e. optional) or "obligate" (the animal has no reasonable alternative). Even this distinction is not completely clear-cut — for example, humans other than infants normally walk and run in biped fashion, but almost all can crawl on hands and knees when necessary. There are even reports of humans who normally walk on all fours with their feet but not their knees on the ground, but these cases are a result of conditions such as Uner Tan syndrome — very rare genetic neurological disorders rather than normal behavior. Even if one ignores exceptions caused by some kind of injury or illness, there are many unclear cases, including the fact that "normal" humans can crawl on hands and knees. This article therefore avoids the terms "facultative" and "obligate", and focuses on the range of styles of locomotion normally used by various groups of animals. Normal humans may be considered "obligate" bipeds because the alternatives are very uncomfortable and usually only resorted to when walking is impossible.
There are a number of states of movement commonly associated with bipedalism.
The great majority of living terrestrial vertebrates are quadrupeds, with bipedalism exhibited by only a handful of living groups. Humans, gibbons and large birds walk by raising one foot at a time. On the other hand, most macropods, smaller birds, lemurs and bipedal rodents move by hopping on both legs simultaneously. Tree kangaroos are able to walk or hop, most commonly alternating feet when moving arboreally and hopping on both feet simultaneously when on the ground.
Many species of lizards become bipedal during high-speed, sprint locomotion, including the world's fastest lizard, the spiny-tailed iguana (genus Ctenosaura).
The first known biped is the bolosaurid Eudibamus, whose fossils date from 290 million years ago. Its long hind legs, short forelegs, and distinctive joints all suggest bipedalism. The species became extinct in the early Permian.
All birds are bipeds, as is the case for all theropod dinosaurs. However, hoatzin chicks have claws on their wings which they use for climbing.
Bipedalism evolved more than once in archosaurs, the group that includes both dinosaurs and crocodilians. All dinosaurs are thought to be descended from a fully bipedal ancestor, perhaps similar to Eoraptor.
Dinosaurs diverged from their archosaur ancestors approximately 230 million years ago during the Middle to Late Triassic period, roughly 20 million years after the Permian-Triassic extinction event wiped out an estimated 95 percent of all life on Earth. Radiometric dating of fossils from the early dinosaur genus Eoraptor establishes its presence in the fossil record at this time. Paleontologists suspect Eoraptor resembles the common ancestor of all dinosaurs; if this is true, its traits suggest that the first dinosaurs were small, bipedal predators. The discovery of primitive, dinosaur-like ornithodirans such as Marasuchus and Lagerpeton in Argentinian Middle Triassic strata supports this view; analysis of recovered fossils suggests that these animals were indeed small, bipedal predators.
Bipedal movement also re-evolved in a number of other dinosaur lineages such as the iguanodonts. Some extinct members of Pseudosuchia, a sister group to the avemetatarsalians (the group including dinosaurs and relatives), also evolved bipedal forms – a poposauroid from the Triassic, Effigia okeeffeae, is thought to have been bipedal. Pterosaurs were previously thought to have been bipedal, but recent trackways have all shown quadrupedal locomotion.
A number of mammal groups, living and extinct, have independently evolved bipedalism as their main form of locomotion – for example humans, giant pangolins, the extinct giant ground sloths, numerous species of jumping rodents, and macropods. Because human bipedalism has been extensively studied, it is documented in the next section. Macropods are believed to have evolved bipedal hopping only once in their evolution, at some time no later than 45 million years ago.
Bipedal movement is less common among mammals, most of which are quadrupedal. All primates possess some bipedal ability, though most species primarily use quadrupedal locomotion on land. Primates aside, the macropods (kangaroos, wallabies and their relatives), kangaroo rats and mice, hopping mice and springhare move bipedally by hopping. Very few non-primate mammals commonly move bipedally with an alternating leg gait. Exceptions are the ground pangolin and in some circumstances the tree kangaroo. One black bear, Pedals, became famous locally and on the internet for having a frequent bipedal gait, although this is attributed to injuries on the bear's front paws. A two-legged fox was filmed in a Derbyshire garden in 2023, most likely having been born that way.
Most bipedal animals move with their backs close to horizontal, using a long tail to balance the weight of their bodies. The primate version of bipedalism is unusual because the back is close to upright (completely upright in humans), and the tail may be absent entirely. Many primates can stand upright on their hind legs without any support. Chimpanzees, bonobos, gorillas, gibbons and baboons exhibit forms of bipedalism. On the ground, sifakas move like all indrids, with bipedal sideways hopping movements of the hind legs, holding their forelimbs up for balance. Geladas, although usually quadrupedal, will sometimes move between adjacent feeding patches with a squatting, shuffling bipedal form of locomotion. However, they can only do so for brief periods, as their bodies are not adapted for constant bipedal locomotion.
Humans are the only primates that are habitually bipedal, due to an extra curve in the spine which stabilizes the upright position, as well as arms that are shorter relative to the legs than is the case for the nonhuman great apes. The evolution of human bipedalism began in primates about four million years ago, or as early as seven million years ago with Sahelanthropus, or about 12 million years ago with Danuvius guggenmosi. One hypothesis for human bipedalism is that it evolved as a result of differentially successful survival from carrying food to share with group members, although there are alternative hypotheses.
Injured chimpanzees and bonobos have been capable of sustained bipedalism.
Three captive primates, the macaque Natasha and the chimpanzees Oliver and Poko, were found to move bipedally. Natasha switched to exclusive bipedalism after an illness, while Poko was discovered in captivity in a tall, narrow cage. Oliver reverted to knuckle-walking after developing arthritis. Non-human primates often use bipedal locomotion when carrying food or while moving through shallow water.
Other mammals engage in limited, non-locomotory bipedalism. A number of other animals, such as rats, raccoons, and beavers, will squat on their hind legs to manipulate objects but revert to four limbs when moving (though the beaver will move bipedally when transporting wood for its dam, as will the raccoon when holding food). Bears will fight in a bipedal stance to use their forelegs as weapons. A number of mammals will adopt a bipedal stance in specific situations such as feeding or fighting. Ground squirrels and meerkats will stand on their hind legs to survey their surroundings, but will not walk bipedally. Dogs (e.g. Faith) can stand or move on two legs if trained, or if a birth defect or injury precludes quadrupedalism. The gerenuk antelope stands on its hind legs while eating from trees, as did the extinct giant ground sloth and the chalicotheres. The spotted skunk will walk on its front legs when threatened, rearing up while facing the attacker so that its anal glands, capable of spraying an offensive oil, face its attacker.
Bipedalism is unknown among the amphibians. Among the non-archosaur reptiles bipedalism is rare, but it is found in the "reared-up" running of lizards such as agamids and monitor lizards. Many reptile species will also temporarily adopt bipedalism while fighting. One genus of basilisk lizard can run bipedally across the surface of water for some distance. Among arthropods, cockroaches are known to move bipedally at high speeds. Bipedalism is rarely found outside terrestrial animals, though at least two types of octopus walk bipedally on the sea floor using two of their arms, allowing the remaining arms to be used to camouflage the octopus as a mat of algae or a floating coconut.
There are at least twelve distinct hypotheses as to how and why bipedalism evolved in humans, and also some debate as to when. Bipedalism evolved well before the large human brain or the development of stone tools. Bipedal specializations are found in Australopithecus fossils from 4.2 to 3.9 million years ago, and recent studies have suggested that obligate bipedal hominid species were present as early as 7 million years ago. Nonetheless, the evolution of bipedalism was accompanied by significant changes to the spine, including the forward movement of the foramen magnum, where the spinal cord leaves the cranium. Recent evidence regarding modern human sexual dimorphism (physical differences between male and female) in the lumbar spine has been seen in pre-modern primates such as Australopithecus africanus. This dimorphism has been seen as an evolutionary adaptation of females to bear lumbar load better during pregnancy, an adaptation that non-bipedal primates would not need to make. Adapting to bipedalism would have required less shoulder stability, which allowed the shoulder and other limbs to become more independent of each other and to adapt for specific suspensory behaviors. In addition to the change in shoulder stability, changing locomotion would have increased the demand for shoulder mobility, which would have propelled the evolution of bipedalism forward. The different hypotheses are not necessarily mutually exclusive, and a number of selective forces may have acted together to lead to human bipedalism. It is important to distinguish between adaptations for bipedalism and adaptations for running, which came later still.
The form and function of the modern human upper body appear to have evolved from living in a more forested setting, in which the ability to travel arboreally would have been advantageous. Although different to human walking, bipedal locomotion in trees is thought to have been advantageous. It has also been proposed that, like some modern apes, early hominins underwent a knuckle-walking stage prior to adapting the hind limbs for bipedality while retaining forearms capable of grasping. Proposed causes for the evolution of human bipedalism include freeing the hands for carrying and using tools, sexual dimorphism in provisioning, changes in climate and environment (from jungle to savanna) that favored a more elevated eye position, and reducing the amount of skin exposed to the tropical sun. It is possible that bipedalism provided a variety of benefits to the hominin species, and scientists have suggested multiple reasons for the evolution of human bipedalism. There is also not only the question of why the earliest hominins were partially bipedal but also why hominins became more bipedal over time. For example, the postural feeding hypothesis describes how the earliest hominins became bipedal to reach food in trees, while the savanna-based theory describes how the later hominins that started to settle on the ground became increasingly bipedal.
Napier (1963) argued that it is unlikely that a single factor drove the evolution of bipedalism. He stated "It seems unlikely that any single factor was responsible for such a dramatic change in behaviour. In addition to the advantages accruing from ability to carry objects – food or otherwise – the improvement of the visual range and the freeing of the hands for purposes of defence and offence may equally have played their part as catalysts." Sigmon (1971) demonstrated that chimpanzees exhibit bipedalism in different contexts and argued that no single factor should be used to explain bipedalism; rather, such behaviors served as a preadaptation for human bipedalism. Day (1986) emphasized three major pressures that drove the evolution of bipedalism: food acquisition, predator avoidance, and reproductive success. Ko (2015) stated that there are two main questions regarding bipedalism: 1. Why were the earliest hominins partially bipedal? and 2. Why did hominins become more bipedal over time? He argued that these questions can be answered with a combination of prominent theories such as savanna-based, postural feeding, and provisioning.
According to the Savanna-based theory, hominines came down from the trees and adapted to life on the savanna by walking erect on two feet. The theory suggests that early hominids were forced to adapt to bipedal locomotion on the open savanna after they left the trees. One of the proposed mechanisms was the knuckle-walking hypothesis, which states that human ancestors used quadrupedal locomotion on the savanna, as evidenced by morphological characteristics found in Australopithecus anamensis and Australopithecus afarensis forelimbs, and that it is less parsimonious to assume that knuckle-walking evolved twice, independently in Pan and Gorilla, than to assume it evolved once as a synapomorphy of Pan and Gorilla before being lost in Australopithecus. The evolution of an orthograde posture would have been very helpful on a savanna, as it would allow the ability to look over tall grasses to watch out for predators, or to terrestrially hunt and sneak up on prey. It was also suggested in P. E. Wheeler's "The evolution of bipedality and loss of functional body hair in hominids" that a possible advantage of bipedalism in the savanna was reducing the amount of body surface area exposed to the sun, helping regulate body temperature. In fact, Elizabeth Vrba's turnover pulse hypothesis supports the savanna-based theory by explaining the shrinking of forested areas due to global warming and cooling, which forced animals out into the open grasslands and caused the need for hominids to acquire bipedality.
Others state that hominines had already achieved the bipedal adaptation before they moved onto the savanna. The fossil evidence reveals that early bipedal hominins were still adapted to climbing trees at the time they were also walking upright. It is possible that bipedalism evolved in the trees and was later carried over to the savanna as a retained trait. Humans and orangutans are both unique in showing a bipedal reactive adaptation when climbing on thin branches, in which they have increased hip and knee extension in relation to the diameter of the branch; this can increase an arboreal feeding range and can be attributed to a convergent evolution of bipedalism in arboreal environments. Hominine fossils found in dry grassland environments led anthropologists to believe hominines lived, slept, walked upright, and died only in those environments, because no hominine fossils were found in forested areas. However, fossilization is a rare occurrence: the conditions must be just right for an organism that dies to become fossilized, and finding a fossil is rarer still. The fact that no hominine fossils were found in forests does not ultimately lead to the conclusion that no hominines ever died there. The convenience of the savanna-based theory caused this point to be overlooked for over a hundred years.
Some of the fossils found actually showed that there was still an adaptation to arboreal life. For example, Lucy, the famous Australopithecus afarensis found at Hadar in Ethiopia, which may have been forested at the time of her death, had curved fingers that would still give her the ability to grasp tree branches, yet she walked bipedally. "Little Foot", a nearly complete specimen of Australopithecus africanus, has a divergent big toe as well as the ankle strength to walk upright. "Little Foot" could grasp things with his feet like an ape, perhaps tree branches, and he was bipedal. Ancient pollen found in the soil at the locations where these fossils were found suggests that the area used to be much wetter and covered in thick vegetation, and has only recently become the arid desert it is now.
An alternative explanation is that the mixture of savanna and scattered forests increased terrestrial travel by proto-humans between clusters of trees, and bipedalism offered greater efficiency for long-distance travel between these clusters than quadrupedalism. In an experiment monitoring chimpanzee metabolic rate via oxygen consumption, it was found that quadrupedal and bipedal energy costs were very similar, implying that this transition in early ape-like ancestors would not have been very difficult or energetically costly. This increased travel efficiency is likely to have been selected for, as it assisted foraging across widely dispersed resources.
The postural feeding hypothesis has recently been supported by Dr. Kevin Hunt, a professor at Indiana University. This hypothesis asserts that chimpanzees are bipedal only when they eat: while on the ground, they reach up for fruit hanging from small trees, and while in trees, bipedalism is used to reach an overhead branch. These bipedal movements may have evolved into regular habits because they were so convenient in obtaining food. Hunt's hypothesis also states that these movements coevolved with chimpanzee arm-hanging, as this movement was very effective and efficient in harvesting food. When analyzing fossil anatomy, Australopithecus afarensis has features of the hand and shoulder very similar to the chimpanzee's, which indicates hanging arms. Also, the Australopithecus hip and hind limb very clearly indicate bipedalism, but these fossils also indicate very inefficient locomotion when compared to humans. For this reason, Hunt argues that bipedalism evolved more as a terrestrial feeding posture than as a walking posture.
A related study by Susannah Thorpe, a professor at the University of Birmingham, examined the most arboreal great ape, the orangutan, which holds onto supporting branches in order to navigate branches that would otherwise be too flexible or unstable. In more than 75 percent of observations, the orangutans used their forelimbs to stabilize themselves while navigating thinner branches. Increased fragmentation of the forests where A. afarensis and other ancestors of modern humans and apes resided could have contributed to this increase of bipedalism in order to navigate the diminishing forests. The findings could also shed light on discrepancies observed in the anatomy of A. afarensis, such as the ankle joint, which allowed it to "wobble", and its long, highly flexible forelimbs. If bipedalism started from upright navigation in trees, it could explain both the increased flexibility in the ankle and the long forelimbs which grab hold of branches.
One theory on the origin of bipedalism is the behavioral model presented by C. Owen Lovejoy, known as "male provisioning". Lovejoy theorizes that the evolution of bipedalism was linked to monogamy. In the face of the long inter-birth intervals and low reproductive rates typical of the apes, early hominids engaged in pair-bonding that enabled greater parental effort directed towards rearing offspring. Lovejoy proposes that male provisioning of food would improve offspring survivorship and increase the pair's reproductive rate. Thus the male would leave his mate and offspring to search for food and return carrying the food in his arms, walking on two legs. This model is supported by the reduction ("feminization") of the male canine teeth in early hominids such as Sahelanthropus tchadensis and Ardipithecus ramidus, which, along with low body-size dimorphism in Ardipithecus and Australopithecus, suggests a reduction in inter-male antagonistic behavior in early hominids. In addition, this model is supported by a number of modern human traits associated with concealed ovulation (permanently enlarged breasts, lack of sexual swelling) and low sperm competition (moderately sized testes, low sperm mid-piece volume) that argue against recent adaptation to a polygynous reproductive system.
However, this model has been debated, as others have argued that early bipedal hominids were instead polygynous. Among most monogamous primates, males and females are about the same size; that is, sexual dimorphism is minimal, yet some studies have suggested that Australopithecus afarensis males were nearly twice the weight of females. However, Lovejoy's model posits that the larger range a provisioning male would have to cover (to avoid competing with the female for resources she could attain herself) would select for increased male body size to limit predation risk. Furthermore, as the species became more bipedal, specialized feet would prevent the infant from conveniently clinging to the mother, hampering the mother's freedom and thus making her and her offspring more dependent on resources collected by others. Modern monogamous primates such as gibbons tend also to be territorial, but fossil evidence indicates that Australopithecus afarensis lived in large groups. However, while both gibbons and hominids have reduced canine sexual dimorphism, female gibbons enlarge ("masculinize") their canines so they can actively share in the defense of their home territory. Instead, the reduction of the male hominid canine is consistent with reduced inter-male aggression in a pair-bonded though group-living primate.
Recent studies of the 4.4-million-year-old Ardipithecus ramidus suggest bipedalism. It is thus possible that bipedalism evolved very early in Homininae and was reduced in chimpanzees and gorillas when they became more specialized. Other recent studies of the foot structure of Ardipithecus ramidus suggest that the species was closely related to African-ape ancestors. This possibly provides a species close to the true connection between fully bipedal hominins and quadrupedal apes. According to Richard Dawkins in his book The Ancestor's Tale, chimps and bonobos are descended from Australopithecus gracile-type species while gorillas are descended from Paranthropus. These apes may have once been bipedal, but then lost this ability when they were forced back into an arboreal habitat, presumably by those australopithecines from whom hominins eventually evolved. Early hominines such as Ardipithecus ramidus may have possessed an arboreal type of bipedalism that later independently evolved towards knuckle-walking in chimpanzees and gorillas and towards efficient walking and running in modern humans. It has also been proposed that one cause of Neanderthal extinction was less efficient running.
Joseph Jordania of the University of Melbourne suggested in 2011 that bipedalism was one of the central elements of the general defense strategy of early hominids, based on aposematism, or warning display and intimidation of potential predators and competitors with exaggerated visual and audio signals. According to this model, hominids were trying to stay as visible and as loud as possible at all times. Several morphological and behavioral developments were employed to achieve this goal: upright bipedal posture, longer legs, long tightly coiled hair on the top of the head, body painting, threatening synchronous body movements, loud voice and extremely loud rhythmic singing/stomping/drumming on external objects. Slow locomotion and strong body odor (both characteristic of hominids and humans) are other features often employed by aposematic species to advertise their non-profitability to potential predators.
There are a variety of ideas which promote a specific change in behaviour as the key driver for the evolution of hominid bipedalism. For example, Wescott (1967) and later Jablonski & Chaplin (1993) suggest that bipedal threat displays could have been the transitional behaviour which led to some groups of apes beginning to adopt bipedal postures more often. Others (e.g. Dart 1925) have offered the idea that the need for more vigilance against predators could have provided the initial motivation. Dawkins (e.g. 2004) has argued that it could have begun as a kind of fashion that just caught on and then escalated through sexual selection. And it has even been suggested (e.g. Tanner 1981:165) that male phallic display could have been the initial incentive, as well as increased sexual signaling in upright female posture.
The thermoregulatory model explaining the origin of bipedalism is one of the simplest theories so far advanced, but it is a viable explanation. Dr. Peter Wheeler, a professor of evolutionary biology, proposes that bipedalism raises more of the body's surface area higher above the ground, which results in a reduction in heat gain and helps heat dissipation. Higher above the ground, the organism has access to more favorable wind speeds and temperatures; during hot seasons, greater wind flow results in higher heat loss, which makes the organism more comfortable. Wheeler also explains that a vertical posture minimizes direct exposure to the sun, whereas quadrupedalism exposes more of the body. Analysis and interpretation of Ardipithecus reveal that this hypothesis needs modification to allow that the forest and woodland environmental preadaptation of early-stage hominid bipedalism preceded its further refinement by natural selection. This then allowed for the more efficient exploitation of the ecological niche of hotter, more open conditions, rather than the hotter conditions being bipedalism's initial stimulus. A feedback mechanism from the advantages of bipedality in hot and open habitats would then in turn make the forest preadaptation solidify as a permanent state.
Charles Darwin wrote that "Man could not have attained his present dominant position in the world without the use of his hands, which are so admirably adapted to act in obedience to his will". Darwin (1871:52) and many models of bipedal origins are based on this line of thought. Gordon Hewes (1961) suggested that the carrying of meat "over considerable distances" (Hewes 1961:689) was the key factor. Isaac (1978) and Sinclair et al. (1986) offered modifications of this idea, as indeed did Lovejoy (1981) with his "provisioning model" described above. Others, such as Nancy Tanner (1981), have suggested that infant carrying was key, while others again have suggested that stone tools and weapons drove the change. This stone-tool theory is considered very unlikely: though ancient humans were known to hunt, stone tools did not appear until long after the origin of bipedalism, chronologically precluding them from being a driving force of its evolution. (Wooden tools and spears fossilize poorly, so it is difficult to judge their potential usage.)
The observation that large primates, including especially the great apes, that predominantly move quadrupedally on dry land, tend to switch to bipedal locomotion in waist deep water, has led to the idea that the origin of human bipedalism may have been influenced by waterside environments. This idea, labelled "the wading hypothesis", was originally suggested by the Oxford marine biologist Alister Hardy who said: "It seems to me likely that Man learnt to stand erect first in water and then, as his balance improved, he found he became better equipped for standing up on the shore when he came out, and indeed also for running." It was then promoted by Elaine Morgan, as part of the aquatic ape hypothesis, who cited bipedalism among a cluster of other human traits unique among primates, including voluntary control of breathing, hairlessness and subcutaneous fat. The "aquatic ape hypothesis", as originally formulated, has not been accepted or considered a serious theory within the anthropological scholarly community. Others, however, have sought to promote wading as a factor in the origin of human bipedalism without referring to further ("aquatic ape" related) factors. Since 2000 Carsten Niemitz has published a series of papers and a book on a variant of the wading hypothesis, which he calls the "amphibian generalist theory" (German: Amphibische Generalistentheorie).
Other theories have been proposed that suggest wading and the exploitation of aquatic food sources (providing essential nutrients for human brain evolution or critical fallback foods) may have exerted evolutionary pressures on human ancestors promoting adaptations which later assisted full-time bipedalism. It has also been thought that consistent water-based food sources had developed early hominid dependency and facilitated dispersal along seas and rivers.
Prehistoric fossil records show that early hominins developed bipedalism before the increase in brain size. The consequences of these two changes resulted in painful and difficult labor: the narrow pelvis favored for bipedalism conflicts with the larger heads that must pass through the constricted birth canal. This phenomenon is commonly known as the obstetrical dilemma.
Non-human primates habitually deliver their young on their own, but the same cannot be said for modern humans. Isolated birth appears to be rare and actively avoided cross-culturally, even though birthing methods differ between cultures. This is because the narrowing of the hips and the change in the pelvic angle created a discrepancy between the size of the head and the size of the birth canal. The result is that birth is more difficult for hominins in general, let alone unassisted.
Bipedal movement occurs in a number of ways and requires many mechanical and neurological adaptations. Some of these are described below.
Energy-efficient means of standing bipedally involve constant adjustment of balance, and of course these must avoid overcorrection. The difficulties associated with simple standing in upright humans are highlighted by the greatly increased risk of falling present in the elderly, even with minimal reductions in control system effectiveness.
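As an illustration of the balance problem (not a model of human physiology), standing can be caricatured as stabilising an inverted pendulum with a proportional-derivative controller; all gains and dimensions below are assumed values:

```python
import math

# Inverted-pendulum caricature of quiet standing: gravity amplifies any lean,
# and a PD "ankle torque" continually corrects it. The damping (kd) term is
# what prevents the overcorrection mentioned above. Parameters are illustrative.
g, L, dt = 9.81, 1.0, 0.001   # gravity (m/s^2), body "pendulum" length (m), time step (s)
kp, kd = 15.0, 4.0            # PD gains (assumed); kp must exceed g/L here or gravity wins
theta = math.radians(3.0)     # initial lean (rad)
omega = 0.0                   # initial angular velocity (rad/s)

for step in range(3001):
    if step % 600 == 0:
        print(f"t={step * dt:.1f} s  lean={math.degrees(theta):+5.2f} deg")
    torque = -kp * theta - kd * omega            # corrective response (mass-normalised)
    alpha = (g / L) * math.sin(theta) + torque   # gravity destabilises, control restores
    omega += alpha * dt
    theta += omega * dt
```

With the damping term removed the simulated lean oscillates instead of settling, a rough analogue of the overcorrection and falls discussed above.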
Shoulder stability would have decreased with the evolution of bipedalism, while shoulder mobility would have increased, because the need for a stable shoulder is only present in arboreal habitats. Shoulder mobility would support the suspensory locomotion behaviors that accompany human bipedalism. Because the forelimbs are freed from weight-bearing requirements, the shoulder is a useful source of evidence for the evolution of bipedalism.
Unlike non-human apes capable of bipedality, such as Pan and Gorilla, hominins can move bipedally without using a bent-hip-bent-knee (BHBK) gait, which requires the engagement of both the hip and the knee joints; this human ability to walk is made possible by a spinal curvature that non-human apes lack. Rather, human walking is characterized by an "inverted pendulum" movement in which the center of gravity vaults over a stiff leg with each step. Force plates can be used to quantify whole-body kinetic and potential energy, with walking displaying an out-of-phase relationship between the two, indicating exchange between them. This model applies to all walking organisms regardless of the number of legs, so bipedal locomotion does not differ in terms of whole-body kinetics.
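To make that energy exchange concrete, here is a minimal numerical sketch of the inverted-pendulum model; the leg length, body mass and midstance speed are assumed values for illustration, not measurements from the literature:

```python
import math

# Minimal sketch of the "inverted pendulum" walking model: the centre of mass
# (CoM) vaults over a stiff stance leg of length L, trading kinetic energy (KE)
# for gravitational potential energy (PE) and back. Parameters are illustrative.
g, L, m = 9.81, 0.9, 70.0          # gravity (m/s^2), leg length (m), body mass (kg)
v0 = 1.2                           # CoM speed at midstance (m/s), assumed

for deg in (-20, -10, 0, 10, 20):  # stance-leg angle from vertical through a step
    theta = math.radians(deg)
    h = L * math.cos(theta)        # CoM height above the foot
    v_sq = v0**2 + 2 * g * (L - h) # energy conservation along the vault
    ke, pe = 0.5 * m * v_sq, m * g * h
    print(f"{deg:+3d} deg: KE={ke:6.1f} J  PE={pe:6.1f} J  total={ke + pe:6.1f} J")
```

KE is lowest exactly where PE peaks (midstance, 0 degrees) while the total stays constant, which is the out-of-phase exchange that force-plate measurements reveal.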
In humans, walking is composed of several separate processes:
Early hominins underwent post-cranial changes in order to better adapt to bipedality, especially running. One of these changes is hindlimbs that are longer in proportion to the forelimbs. As previously mentioned, longer hindlimbs assist in thermoregulation by reducing the total surface area exposed to direct sunlight while simultaneously allowing for more space for cooling winds. Additionally, longer limbs are more energy-efficient, since they reduce overall muscle strain. Better energy efficiency, in turn, means higher endurance, particularly when running long distances.
Running is characterized by a spring-mass movement. Kinetic and potential energy are in phase, and the energy is stored and released by a spring-like limb during foot contact, achieved by the plantar arch and the Achilles tendon in the foot and leg, respectively. Again, the whole-body kinetics are similar to those of animals with more limbs.
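A toy one-dimensional spring-mass simulation can illustrate the contrast with walking; the mass, leg stiffness and touchdown speed below are assumptions chosen only to make the pattern visible:

```python
# Toy vertical spring-mass hopper, a 1-D caricature of spring-mass running:
# during stance, kinetic and potential energy fall together (in phase) and the
# deficit is stored elastically in the leg spring, then returned at push-off.
# All parameters are illustrative assumptions.
m, g, k, L0 = 70.0, 9.81, 20_000.0, 1.0   # mass (kg), gravity, leg stiffness (N/m), leg length (m)
y, v, dt = L0, -1.5, 1e-4                 # CoM height (m), velocity at touchdown (m/s), time step (s)

for step in range(1, 20_000):
    c = max(L0 - y, 0.0)                  # spring (leg) compression
    a = k * c / m - g                     # spring pushes up, gravity pulls down
    v += a * dt                           # semi-implicit Euler keeps total energy stable
    y += v * dt
    if y > L0:                            # leg back to natural length: takeoff
        break
    if step % 400 == 0:
        ke, pe = 0.5 * m * v * v, m * g * y
        spring = 0.5 * k * (L0 - y) ** 2
        print(f"t={step * dt:.3f} s  KE={ke:5.1f} J  PE={pe:5.1f} J  spring={spring:5.1f} J")
```

Both KE and PE reach their minima together at maximum compression, with the spring holding the difference: the in-phase signature that distinguishes running from the pendular exchange of walking.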
Bipedalism requires strong leg muscles, particularly in the thighs. In domesticated poultry, contrast the well-muscled legs with the small, bony wings. Likewise in humans, the quadriceps and hamstring muscles of the thigh are both so crucial to bipedal activities that each alone is much larger than the well-developed biceps of the arms. In addition to the leg muscles, the increased size of the gluteus maximus in humans is an important adaptation, as it provides support and stability to the trunk and lessens the stress on the joints when running.
Quadrupeds have more constrained breathing while moving than do bipedal humans. "Quadrupedal species normally synchronize the locomotor and respiratory cycles at a constant ratio of 1:1 (strides per breath) in both the trot and gallop. Human runners differ from quadrupeds in that while running they employ several phase-locked patterns (4:1, 3:1, 2:1, 1:1, 5:2, and 3:2), although a 2:1 coupling ratio appears to be favored. Even though the evolution of bipedal gait has reduced the mechanical constraints on respiration in man, thereby permitting greater flexibility in breathing pattern, it has seemingly not eliminated the need for the synchronization of respiration and body motion during sustained running."
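Those phase-locked patterns are just small-integer ratios of stride frequency to breathing frequency; a few lines of code can recover the ratio from measured frequencies (the frequencies below are made-up example values, not data from the quoted study):

```python
from fractions import Fraction

def coupling_ratio(strides_per_s: float, breaths_per_s: float) -> Fraction:
    """Express strides per breath as the nearest small-integer ratio."""
    return Fraction(strides_per_s / breaths_per_s).limit_denominator(4)

# Hypothetical runners: 2.8 strides/s with 1.4 breaths/s locks at 2:1,
# while 2.1 strides/s with 1.4 breaths/s locks at 3:2.
print(coupling_ratio(2.8, 1.4))  # -> 2
print(coupling_ratio(2.1, 1.4))  # -> 3/2
```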
Breath control is better in bipeds, and this has been associated with brain growth. The modern human brain uses approximately 20% of the energy taken in through breathing and eating, whereas species like chimpanzees use twice as much energy as humans for the same amount of movement. This surplus energy, enabling brain growth, also supports the development of verbal communication, because breath control means that the muscles associated with breathing can be manipulated to create sounds. The onset of bipedality, by making breathing more efficient, may therefore be related to the origin of verbal language.
For nearly the whole of the 20th century, bipedal robots were very difficult to construct and robot locomotion involved only wheels, treads, or multiple legs. Recent cheap and compact computing power has made two-legged robots more feasible. Some notable biped robots are ASIMO, HUBO, MABEL and QRIO. Recently, spurred by the success of creating a fully passive, un-powered bipedal walking robot, those working on such machines have begun using principles gleaned from the study of human and animal locomotion, which often relies on passive mechanisms to minimize power consumption. | [
{
"paragraph_id": 24,
"text": "Napier (1963) argued that it is unlikely that a single factor drove the evolution of bipedalism. He stated \"It seems unlikely that any single factor was responsible for such a dramatic change in behaviour. In addition to the advantages of accruing from ability to carry objects – food or otherwise – the improvement of the visual range and the freeing of the hands for purposes of defence and offence may equally have played their part as catalysts.\" Sigmon (1971) demonstrated that chimpanzees exhibit bipedalism in different contexts, and one single factor should be used to explain bipedalism: preadaptation for human bipedalism. Day (1986) emphasized three major pressures that drove evolution of bipedalism: food acquisition, predator avoidance, and reproductive success. Ko (2015) stated that there are two questions main regarding bipedalism 1. Why were the earliest hominins partially bipedal? and 2. Why did hominins become more bipedal over time? He argued that these questions can be answered with combination of prominent theories such as Savanna-based, Postural feeding, and Provisioning.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 25,
"text": "According to the Savanna-based theory, hominines came down from the tree's branches and adapted to life on the savanna by walking erect on two feet. The theory suggests that early hominids were forced to adapt to bipedal locomotion on the open savanna after they left the trees. One of the proposed mechanisms was the knuckle-walking hypothesis, which states that human ancestors used quadrupedal locomotion on the savanna, as evidenced by morphological characteristics found in Australopithecus anamensis and Australopithecus afarensis forelimbs, and that it is less parsimonious to assume that knuckle walking developed twice in genera Pan and Gorilla instead of evolving it once as synapomorphy for Pan and Gorilla before losing it in Australopithecus. The evolution of an orthograde posture would have been very helpful on a savanna as it would allow the ability to look over tall grasses in order to watch out for predators, or terrestrially hunt and sneak up on prey. It was also suggested in P. E. Wheeler's \"The evolution of bipedality and loss of functional body hair in hominids\", that a possible advantage of bipedalism in the savanna was reducing the amount of surface area of the body exposed to the sun, helping regulate body temperature. In fact, Elizabeth Vrba's turnover pulse hypothesis supports the savanna-based theory by explaining the shrinking of forested areas due to global warming and cooling, which forced animals out into the open grasslands and caused the need for hominids to acquire bipedality.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 26,
"text": "Others state hominines had already achieved the bipedal adaptation that was used in the savanna. The fossil evidence reveals that early bipedal hominins were still adapted to climbing trees at the time they were also walking upright. It is possible that bipedalism evolved in the trees, and was later applied to the savanna as a vestigial trait. Humans and orangutans are both unique to a bipedal reactive adaptation when climbing on thin branches, in which they have increased hip and knee extension in relation to the diameter of the branch, which can increase an arboreal feeding range and can be attributed to a convergent evolution of bipedalism evolving in arboreal environments. Hominine fossils found in dry grassland environments led anthropologists to believe hominines lived, slept, walked upright, and died only in those environments because no hominine fossils were found in forested areas. However, fossilization is a rare occurrence—the conditions must be just right in order for an organism that dies to become fossilized for somebody to find later, which is also a rare occurrence. The fact that no hominine fossils were found in forests does not ultimately lead to the conclusion that no hominines ever died there. The convenience of the savanna-based theory caused this point to be overlooked for over a hundred years.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 27,
"text": "Some of the fossils found actually showed that there was still an adaptation to arboreal life. For example, Lucy, the famous Australopithecus afarensis, found in Hadar in Ethiopia, which may have been forested at the time of Lucy's death, had curved fingers that would still give her the ability to grasp tree branches, but she walked bipedally. \"Little Foot,\" a nearly-complete specimen of Australopithecus africanus, has a divergent big toe as well as the ankle strength to walk upright. \"Little Foot\" could grasp things using his feet like an ape, perhaps tree branches, and he was bipedal. Ancient pollen found in the soil in the locations in which these fossils were found suggest that the area used to be much more wet and covered in thick vegetation and has only recently become the arid desert it is now.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 28,
"text": "An alternative explanation is that the mixture of savanna and scattered forests increased terrestrial travel by proto-humans between clusters of trees, and bipedalism offered greater efficiency for long-distance travel between these clusters than quadrupedalism. In an experiment monitoring chimpanzee metabolic rate via oxygen consumption, it was found that the quadrupedal and bipedal energy costs were very similar, implying that this transition in early ape-like ancestors would not have been very difficult or energetically costing. This increased travel efficiency is likely to have been selected for as it assisted foraging across widely dispersed resources.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 29,
"text": "The postural feeding hypothesis has been recently supported by Dr. Kevin Hunt, a professor at Indiana University. This hypothesis asserts that chimpanzees were only bipedal when they eat. While on the ground, they would reach up for fruit hanging from small trees and while in trees, bipedalism was used to reach up to grab for an overhead branch. These bipedal movements may have evolved into regular habits because they were so convenient in obtaining food. Also, Hunt's hypotheses states that these movements coevolved with chimpanzee arm-hanging, as this movement was very effective and efficient in harvesting food. When analyzing fossil anatomy, Australopithecus afarensis has very similar features of the hand and shoulder to the chimpanzee, which indicates hanging arms. Also, the Australopithecus hip and hind limb very clearly indicate bipedalism, but these fossils also indicate very inefficient locomotive movement when compared to humans. For this reason, Hunt argues that bipedalism evolved more as a terrestrial feeding posture than as a walking posture.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 30,
"text": "A related study conducted by University of Birmingham, Professor Susannah Thorpe examined the most arboreal great ape, the orangutan, holding onto supporting branches in order to navigate branches that were too flexible or unstable otherwise. In more than 75 percent of observations, the orangutans used their forelimbs to stabilize themselves while navigating thinner branches. Increased fragmentation of forests where A. afarensis as well as other ancestors of modern humans and other apes resided could have contributed to this increase of bipedalism in order to navigate the diminishing forests. Findings also could shed light on discrepancies observed in the anatomy of A. afarensis, such as the ankle joint, which allowed it to \"wobble\" and long, highly flexible forelimbs. If bipedalism started from upright navigation in trees, it could explain both increased flexibility in the ankle as well as long forelimbs which grab hold of branches.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 31,
"text": "One theory on the origin of bipedalism is the behavioral model presented by C. Owen Lovejoy, known as \"male provisioning\". Lovejoy theorizes that the evolution of bipedalism was linked to monogamy. In the face of long inter-birth intervals and low reproductive rates typical of the apes, early hominids engaged in pair-bonding that enabled greater parental effort directed towards rearing offspring. Lovejoy proposes that male provisioning of food would improve the offspring survivorship and increase the pair's reproductive rate. Thus the male would leave his mate and offspring to search for food and return carrying the food in his arms walking on his legs. This model is supported by the reduction (\"feminization\") of the male canine teeth in early hominids such as Sahelanthropus tchadensis and Ardipithecus ramidus, which along with low body size dimorphism in Ardipithecus and Australopithecus, suggests a reduction in inter-male antagonistic behavior in early hominids. In addition, this model is supported by a number of modern human traits associated with concealed ovulation (permanently enlarged breasts, lack of sexual swelling) and low sperm competition (moderate sized testes, low sperm mid-piece volume) that argues against recent adaptation to a polygynous reproductive system.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 32,
"text": "However, this model has been debated, as others have argued that early bipedal hominids were instead polygynous. Among most monogamous primates, males and females are about the same size. That is sexual dimorphism is minimal, and other studies have suggested that Australopithecus afarensis males were nearly twice the weight of females. However, Lovejoy's model posits that the larger range a provisioning male would have to cover (to avoid competing with the female for resources she could attain herself) would select for increased male body size to limit predation risk. Furthermore, as the species became more bipedal, specialized feet would prevent the infant from conveniently clinging to the mother - hampering the mother's freedom and thus make her and her offspring more dependent on resources collected by others. Modern monogamous primates such as gibbons tend to be also territorial, but fossil evidence indicates that Australopithecus afarensis lived in large groups. However, while both gibbons and hominids have reduced canine sexual dimorphism, female gibbons enlarge ('masculinize') their canines so they can actively share in the defense of their home territory. Instead, the reduction of the male hominid canine is consistent with reduced inter-male aggression in a pair-bonded though group living primate.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 33,
"text": "Recent studies of 4.4 million years old Ardipithecus ramidus suggest bipedalism. It is thus possible that bipedalism evolved very early in homininae and was reduced in chimpanzee and gorilla when they became more specialized. Other recent studies of the foot structure of Ardipithecus ramidus suggest that the species was closely related to African-ape ancestors. This possibly provides a species close to the true connection between fully bipedal hominins and quadruped apes. According to Richard Dawkins in his book \"The Ancestor's Tale\", chimps and bonobos are descended from Australopithecus gracile type species while gorillas are descended from Paranthropus. These apes may have once been bipedal, but then lost this ability when they were forced back into an arboreal habitat, presumably by those australopithecines from whom eventually evolved hominins. Early hominines such as Ardipithecus ramidus may have possessed an arboreal type of bipedalism that later independently evolved towards knuckle-walking in chimpanzees and gorillas and towards efficient walking and running in modern humans (see figure). It is also proposed that one cause of Neanderthal extinction was a less efficient running.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 34,
"text": "Joseph Jordania from the University of Melbourne recently (2011) suggested that bipedalism was one of the central elements of the general defense strategy of early hominids, based on aposematism, or warning display and intimidation of potential predators and competitors with exaggerated visual and audio signals. According to this model, hominids were trying to stay as visible and as loud as possible all the time. Several morphological and behavioral developments were employed to achieve this goal: upright bipedal posture, longer legs, long tightly coiled hair on the top of the head, body painting, threatening synchronous body movements, loud voice and extremely loud rhythmic singing/stomping/drumming on external subjects. Slow locomotion and strong body odor (both characteristic for hominids and humans) are other features often employed by aposematic species to advertise their non-profitability for potential predators.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 35,
"text": "There are a variety of ideas which promote a specific change in behaviour as the key driver for the evolution of hominid bipedalism. For example, Wescott (1967) and later Jablonski & Chaplin (1993) suggest that bipedal threat displays could have been the transitional behaviour which led to some groups of apes beginning to adopt bipedal postures more often. Others (e.g. Dart 1925) have offered the idea that the need for more vigilance against predators could have provided the initial motivation. Dawkins (e.g. 2004) has argued that it could have begun as a kind of fashion that just caught on and then escalated through sexual selection. And it has even been suggested (e.g. Tanner 1981:165) that male phallic display could have been the initial incentive, as well as increased sexual signaling in upright female posture.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 36,
"text": "The thermoregulatory model explaining the origin of bipedalism is one of the simplest theories so far advanced, but it is a viable explanation. Dr. Peter Wheeler, a professor of evolutionary biology, proposes that bipedalism raises the amount of body surface area higher above the ground which results in a reduction in heat gain and helps heat dissipation. When a hominid is higher above the ground, the organism accesses more favorable wind speeds and temperatures. During heat seasons, greater wind flow results in a higher heat loss, which makes the organism more comfortable. Also, Wheeler explains that a vertical posture minimizes the direct exposure to the sun whereas quadrupedalism exposes more of the body to direct exposure. Analysis and interpretations of Ardipithecus reveal that this hypothesis needs modification to consider that the forest and woodland environmental preadaptation of early-stage hominid bipedalism preceded further refinement of bipedalism by the pressure of natural selection. This then allowed for the more efficient exploitation of the hotter conditions ecological niche, rather than the hotter conditions being hypothetically bipedalism's initial stimulus. A feedback mechanism from the advantages of bipedality in hot and open habitats would then in turn make a forest preadaptation solidify as a permanent state.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 37,
"text": "Charles Darwin wrote that \"Man could not have attained his present dominant position in the world without the use of his hands, which are so admirably adapted to the act of obedience of his will\". Darwin (1871:52) and many models on bipedal origins are based on this line of thought. Gordon Hewes (1961) suggested that the carrying of meat \"over considerable distances\" (Hewes 1961:689) was the key factor. Isaac (1978) and Sinclair et al. (1986) offered modifications of this idea, as indeed did Lovejoy (1981) with his \"provisioning model\" described above. Others, such as Nancy Tanner (1981), have suggested that infant carrying was key, while others again have suggested stone tools and weapons drove the change. This stone-tools theory is very unlikely, as though ancient humans were known to hunt, the discovery of tools was not discovered for thousands of years after the origin of bipedalism, chronologically precluding it from being a driving force of evolution. (Wooden tools and spears fossilize poorly and therefore it is difficult to make a judgment about their potential usage.)",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 38,
"text": "The observation that large primates, including especially the great apes, that predominantly move quadrupedally on dry land, tend to switch to bipedal locomotion in waist deep water, has led to the idea that the origin of human bipedalism may have been influenced by waterside environments. This idea, labelled \"the wading hypothesis\", was originally suggested by the Oxford marine biologist Alister Hardy who said: \"It seems to me likely that Man learnt to stand erect first in water and then, as his balance improved, he found he became better equipped for standing up on the shore when he came out, and indeed also for running.\" It was then promoted by Elaine Morgan, as part of the aquatic ape hypothesis, who cited bipedalism among a cluster of other human traits unique among primates, including voluntary control of breathing, hairlessness and subcutaneous fat. The \"aquatic ape hypothesis\", as originally formulated, has not been accepted or considered a serious theory within the anthropological scholarly community. Others, however, have sought to promote wading as a factor in the origin of human bipedalism without referring to further (\"aquatic ape\" related) factors. Since 2000 Carsten Niemitz has published a series of papers and a book on a variant of the wading hypothesis, which he calls the \"amphibian generalist theory\" (German: Amphibische Generalistentheorie).",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 39,
"text": "Other theories have been proposed that suggest wading and the exploitation of aquatic food sources (providing essential nutrients for human brain evolution or critical fallback foods) may have exerted evolutionary pressures on human ancestors promoting adaptations which later assisted full-time bipedalism. It has also been thought that consistent water-based food sources had developed early hominid dependency and facilitated dispersal along seas and rivers.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 40,
"text": "Prehistoric fossil records show that early hominins first developed bipedalism before being followed by an increase in brain size. The consequences of these two changes in particular resulted in painful and difficult labor due to the increased favor of a narrow pelvis for bipedalism being countered by larger heads passing through the constricted birth canal. This phenomenon is commonly known as the obstetrical dilemma.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 41,
"text": "Non-human primates habitually deliver their young on their own, but the same cannot be said for modern-day humans. Isolated birth appears to be rare and actively avoided cross-culturally, even if birthing methods may differ between said cultures. This is due to the fact that the narrowing of the hips and the change in the pelvic angle caused a discrepancy in the ratio of the size of the head to the birth canal. The result of this is that there is greater difficulty in birthing for hominins in general, let alone to be doing it by oneself.",
"title": "Evolution of human bipedalism"
},
{
"paragraph_id": 42,
"text": "Bipedal movement occurs in a number of ways and requires many mechanical and neurological adaptations. Some of these are described below.",
"title": "Physiology"
},
{
"paragraph_id": 43,
"text": "Energy-efficient means of standing bipedally involve constant adjustment of balance, and of course these must avoid overcorrection. The difficulties associated with simple standing in upright humans are highlighted by the greatly increased risk of falling present in the elderly, even with minimal reductions in control system effectiveness.",
"title": "Physiology"
},
{
"paragraph_id": 44,
"text": "Shoulder stability would decrease with the evolution of bipedalism. Shoulder mobility would increase because the need for a stable shoulder is only present in arboreal habitats. Shoulder mobility would support suspensory locomotion behaviors which are present in human bipedalism. The forelimbs are freed from weight-bearing requirements, which makes the shoulder a place of evidence for the evolution of bipedalism.",
"title": "Physiology"
},
{
"paragraph_id": 45,
"text": "Unlike non-human apes that are able to practice bipedality such as Pan and Gorilla, hominins have the ability to move bipedally without the utilization of a bent-hip-bent-knee (BHBK) gait, which requires the engagement of both the hip and the knee joints. This human ability to walk is made possible by the spinal curvature humans have that non-human apes do not. Rather, walking is characterized by an \"inverted pendulum\" movement in which the center of gravity vaults over a stiff leg with each step. Force plates can be used to quantify the whole-body kinetic & potential energy, with walking displaying an out-of-phase relationship indicating exchange between the two. This model applies to all walking organisms regardless of the number of legs, and thus bipedal locomotion does not differ in terms of whole-body kinetics.",
"title": "Physiology"
},
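To illustrate the inverted-pendulum model quantitatively, here is a minimal sketch in Python. The mass, leg length, and walking speed are arbitrary illustrative values, not measurements from the literature; the point is only that, with total mechanical energy held roughly constant, potential energy peaks exactly where kinetic energy dips (midstance), i.e. the two are out of phase.

import math

m, g, L = 70.0, 9.81, 0.9      # body mass (kg), gravity (m/s^2), leg length (m); illustrative
v_mid = 1.3                    # assumed walking speed at midstance (m/s)

# Total mechanical energy (KE + PE) at midstance, treated as conserved over the step.
E_total = 0.5 * m * v_mid**2 + m * g * L

for deg in range(-20, 21, 10):             # stance-leg angle through the step
    theta = math.radians(deg)
    pe = m * g * L * math.cos(theta)       # highest at midstance (theta = 0)
    ke = E_total - pe                      # lowest at midstance: out of phase with PE
    print(f"theta={deg:+3d} deg  PE={pe:7.1f} J  KE={ke:5.1f} J")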
{
"paragraph_id": 46,
"text": "In humans, walking is composed of several separate processes:",
"title": "Physiology"
},
{
"paragraph_id": 47,
"text": "Early hominins underwent post-cranial changes in order to better adapt to bipedality, especially running. One of these changes is having longer hindlimbs proportional to the forelimbs and their effects. As previously mentioned, longer hindlimbs assist in thermoregulation by reducing the total surface area exposed to direct sunlight while simultaneously allowing for more space for cooling winds. Additionally, having longer limbs is more energy-efficient, since longer limbs mean that overall muscle strain is lessened. Better energy efficiency, in turn, means higher endurance, particularly when running long distances.",
"title": "Physiology"
},
{
"paragraph_id": 48,
"text": "Running is characterized by a spring-mass movement. Kinetic and potential energy are in phase, and the energy is stored & released from a spring-like limb during foot contact, achieved by the plantar arch and the Achilles tendon in the foot and leg, respectively. Again, the whole-body kinetics are similar to animals with more limbs.",
"title": "Physiology"
},
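By contrast with the walking model above, here is a toy sketch of the spring-mass model of running, with invented stiffness and compression values: elastic energy stored in the spring-like leg peaks at midstance, exactly when the body's kinetic and gravitational potential energy dip together (in phase), and is returned at push-off.

k = 20000.0                                 # leg "spring" stiffness (N/m), illustrative
for x in [0.00, 0.02, 0.04, 0.06, 0.04, 0.02, 0.00]:   # leg compression over one contact (m)
    elastic = 0.5 * k * x**2                # energy held by the plantar arch and Achilles tendon
    print(f"compression={x:.2f} m  stored={elastic:5.1f} J")
# The stored energy peaks at midstance, where body KE and PE are lowest,
# so KE and PE rise and fall together, in phase, over each bounce.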
{
"paragraph_id": 49,
"text": "Bipedalism requires strong leg muscles, particularly in the thighs. Contrast in domesticated poultry the well muscled legs, against the small and bony wings. Likewise in humans, the quadriceps and hamstring muscles of the thigh are both so crucial to bipedal activities that each alone is much larger than the well-developed biceps of the arms. In addition to the leg muscles, the increased size of the gluteus maximus in humans is an important adaptation as it provides support and stability to the trunk and lessens the amount of stress on the joints when running.",
"title": "Physiology"
},
{
"paragraph_id": 50,
"text": "Quadrupeds, have more restrictive breathing respire while moving than do bipedal humans. \"Quadrupedal species normally synchronize the locomotor and respiratory cycles at a constant ratio of 1:1 (strides per breath) in both the trot and gallop. Human runners differ from quadrupeds in that while running they employ several phase-locked patterns (4:1, 3:1, 2:1, 1:1, 5:2, and 3:2), although a 2:1 coupling ratio appears to be favored. Even though the evolution of bipedal gait has reduced the mechanical constraints on respiration in man, thereby permitting greater flexibility in breathing pattern, it has seemingly not eliminated the need for the synchronization of respiration and body motion during sustained running.\"",
"title": "Physiology"
},
{
"paragraph_id": 51,
"text": "Respiration through bipedality means that there is better breath control in bipeds, which can be associated with brain growth. The modern brain utilizes approximately 20% of energy input gained through breathing and eating, as opposed to species like chimpanzees who use up twice as much energy as humans for the same amount of movement. This excess energy, leading to brain growth, also leads to the development of verbal communication. This is because breath control means that the muscles associated with breathing can be manipulated into creating sounds. This means that the onset of bipedality, leading to more efficient breathing, may be related to the origin of verbal language.",
"title": "Physiology"
},
{
"paragraph_id": 52,
"text": "For nearly the whole of the 20th century, bipedal robots were very difficult to construct and robot locomotion involved only wheels, treads, or multiple legs. Recent cheap and compact computing power has made two-legged robots more feasible. Some notable biped robots are ASIMO, HUBO, MABEL and QRIO. Recently, spurred by the success of creating a fully passive, un-powered bipedal walking robot, those working on such machines have begun using principles gleaned from the study of human and animal locomotion, which often relies on passive mechanisms to minimize power consumption.",
"title": "Bipedal robots"
}
] | Bipedalism is a form of terrestrial locomotion where a tetrapod moves by means of its two rear limbs or legs. An animal or machine that usually moves in a bipedal manner is known as a biped, meaning 'two feet'. Types of bipedal movement include walking or running and hopping. Several groups of modern species are habitual bipeds whose normal method of locomotion is two-legged. In the Triassic period some groups of archosaurs developed bipedalism; among the dinosaurs, all the early forms and many later groups were habitual or exclusive bipeds; the birds are members of a clade of exclusively bipedal dinosaurs, the theropods. Within mammals, habitual bipedalism has evolved multiple times, with the macropods, kangaroo rats and mice, springhare, hopping mice, pangolins and hominin apes as well as various other extinct groups evolving the trait independently.
A larger number of modern species intermittently or briefly use a bipedal gait. Several lizard species move bipedally when running, usually to escape from threats. Many primate and bear species will adopt a bipedal gait in order to reach food or explore their environment, though there are a few cases where they walk on their hind limbs only. Several arboreal primate species, such as gibbons and indriids, exclusively walk on two legs during the brief periods they spend on the ground. Many animals rear up on their hind legs while fighting or copulating. Some animals commonly stand on their hind legs to reach food, keep watch, threaten a competitor or predator, or pose in courtship, but do not move bipedally. | 2001-09-19T10:46:30Z | 2023-12-24T03:48:39Z | [
"Template:Convert",
"Template:NoteTag",
"Template:NoteFoot",
"Template:Reflist",
"Template:Cite news",
"Template:Cite magazine",
"Template:Cite book",
"Template:Page needed",
"Template:Clear",
"Template:Clarify",
"Template:Cite journal",
"Template:Locomotion",
"Template:Human Evolution",
"Template:Portal bar",
"Template:Redirect",
"Template:Cite web",
"Template:Cite encyclopedia",
"Template:Short description",
"Template:Anchor",
"Template:IPAc-en",
"Template:Main",
"Template:Lang-de"
] | https://en.wikipedia.org/wiki/Bipedalism |
4,211 | Bootstrapping | In general, bootstrapping usually refers to a self-starting process that is supposed to continue or grow without external input.
Tall boots may have a tab, loop or handle at the top known as a bootstrap, allowing one to use fingers or a boot hook tool to help pull the boots on. The saying "to pull oneself up by one's bootstraps" was already in use during the 19th century as an example of an impossible task. The idiom dates at least to 1834, when it appeared in the Workingman's Advocate: "It is conjectured that Mr. Murphee will now be enabled to hand himself over the Cumberland river or a barn yard fence by the straps of his boots." In 1860 it appeared in a comment on philosophy of mind: "The attempt of the mind to analyze itself [is] an effort analogous to one who would lift himself by his own bootstraps." Bootstrap as a metaphor, meaning to better oneself by one's own unaided efforts, was in use in 1922. This metaphor spawned additional metaphors for a series of self-sustaining processes that proceed without external help.
The term is sometimes attributed to a story in Rudolf Erich Raspe's The Surprising Adventures of Baron Munchausen, but in that story Baron Munchausen pulls himself (and his horse) out of a swamp by his hair (specifically, his pigtail), not by his bootstraps – and no explicit reference to bootstraps has been found elsewhere in the various versions of the Munchausen tales.
Originally describing an attempt at something ludicrously far-fetched or even impossible, the phrase "Pull yourself up by your bootstraps!" has since been used as a narrative of economic mobility or a cure for depression. That idea is believed to have been popularized by the American writer Horatio Alger in the 19th century. To ask that someone "bootstrap" is to suggest that they might overcome great difficulty by sheer force of will.
Critics have observed that the phrase is used to portray unfair situations as far more meritocratic than they really are. A 2009 study found that 77% of Americans believe that wealth is often the result of hard work. Various studies have found that the main predictor of future wealth is not IQ or hard work, but initial wealth.
In computer technology, the term bootstrapping refers to language compilers that are able to be coded in the same language. (For example, a C compiler is now written in the C language. Once the basic compiler is written, improvements can be made iteratively, thus pulling the language up by its bootstraps.) Also, booting usually refers to the process of loading the basic software into the memory of a computer after power-on or general reset; the loaded kernel then brings up the rest of the operating system, which in turn takes care of loading other device drivers and software as needed.
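To make the compiler case concrete, here is a minimal, self-contained sketch in Python. It is an illustrative toy, not any real toolchain: the "language" is Python itself, and the function name compile_toy is invented. A pre-existing compiler (stage 0) builds the new compiler from source; the new compiler then rebuilds itself, and a fixed-point check confirms the bootstrap, much as real compiler bootstraps compare their stage-2 and stage-3 outputs.

def run(bytecode):
    # Execute compiled code and pull out the compiler function it defines.
    ns = {}
    exec(bytecode, ns)
    return ns["compile_toy"]

# Source of the new compiler, written in the very language it compiles.
COMPILER_SOURCE = '''
def compile_toy(source):
    return compile(source, "<toy>", "exec")
'''

# Stage 0: a pre-existing compiler (here, Python itself) builds the new compiler.
stage1 = run(compile(COMPILER_SOURCE, "<toy>", "exec"))

# Stage 1: the new compiler compiles its own source.
stage2_code = stage1(COMPILER_SOURCE)
stage2 = run(stage2_code)

# Stage 2: rebuild once more and check for a fixed point.
stage3_code = stage2(COMPILER_SOURCE)
assert stage2_code.co_code == stage3_code.co_code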
Booting is the process of starting a computer, specifically with regard to starting its software. The process involves a chain of stages, in which at each stage, a relatively small and simple program loads and then executes the larger, more complicated program of the next stage. It is in this sense that the computer "pulls itself up by its bootstraps"; i.e., it improves itself by its own efforts. Booting is a chain of events that starts with execution of hardware-based procedures and may then hand-off to firmware and software which is loaded into main memory. Booting often involves processes such as performing self-tests, loading configuration settings, loading a BIOS, resident monitors, a hypervisor, an operating system, or utility software.
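A rough sketch of that chain of stages, with each Python function standing in for a progressively larger program (the stage names are invented for illustration; real systems differ in detail):

def rom_stage():
    print("ROM: power-on self-test, locating boot device")
    return firmware_stage

def firmware_stage():
    print("Firmware: loading bootloader into memory")
    return bootloader_stage

def bootloader_stage():
    print("Bootloader: loading kernel and initial drivers")
    return kernel_stage

def kernel_stage():
    print("Kernel: starting services; boot complete")
    return None

# Each stage executes, then hands control to the next, larger stage.
stage = rom_stage
while stage is not None:
    stage = stage()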
The computer term bootstrap began as a metaphor in the 1950s. In computers, pressing a bootstrap button caused a hardwired program to read a bootstrap program from an input unit. The computer would then execute the bootstrap program, which caused it to read more program instructions. It became a self-sustaining process that proceeded without external help from manually entered instructions. As a computing term, bootstrap has been used since at least 1953.
Bootstrapping can also refer to the development of successively more complex, faster programming environments. The simplest environment will be, perhaps, a very basic text editor (e.g., ed) and an assembler program. Using these tools, one can write a more complex text editor, and a simple compiler for a higher-level language and so on, until one can have a graphical IDE and an extremely high-level programming language.
Historically, bootstrapping also refers to an early technique for computer program development on new hardware. The technique described in this paragraph has been replaced by the use of a cross compiler executed by a pre-existing computer. Bootstrapping in program development began during the 1950s when each program was constructed on paper in decimal code or in binary code, bit by bit (1s and 0s), because there was no high-level computer language, no compiler, no assembler, and no linker. A tiny assembler program was hand-coded for a new computer (for example the IBM 650) which converted a few instructions into binary or decimal code: A1. This simple assembler program was then rewritten in its just-defined assembly language but with extensions that would enable the use of some additional mnemonics for more complex operation codes. The enhanced assembler's source program was then assembled by its predecessor's executable (A1) into binary or decimal code to give A2, and the cycle repeated (now with those enhancements available), until the entire instruction set was coded, branch addresses were automatically calculated, and other conveniences (such as conditional assembly, macros, optimisations, etc.) established. This was how the early Symbolic Optimal Assembly Program (SOAP) was developed. Compilers, linkers, loaders, and utilities were then coded in assembly language, further continuing the bootstrapping process of developing complex software systems by using simpler software.
The term was also championed by Doug Engelbart to refer to his belief that organizations could better evolve by improving the process they use for improvement (thus obtaining a compounding effect over time). His SRI team that developed the NLS hypertext system applied this strategy by using the tool they had developed to improve the tool.
The development of compilers for new programming languages, first written in an existing language and then rewritten in the new language and compiled by themselves, is another example of the bootstrapping notion.
During the installation of computer programs, it is sometimes necessary to update the installer or package manager itself. The common pattern for this is to use a small executable bootstrapper file (e.g., setup.exe) which updates the installer and starts the real installation after the update. Sometimes the bootstrapper also installs other prerequisites for the software during the bootstrapping process.
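A minimal, self-contained sketch of that pattern follows; the version numbers and step names are invented, and a real bootstrapper would download files and spawn processes rather than print messages.

LATEST_INSTALLER = (2, 1)

def update_installer(version):
    print(f"bootstrapper: updating installer {version} -> {LATEST_INSTALLER}")
    return LATEST_INSTALLER

def run_bootstrapper(installed_version=(1, 0)):
    # Step 1: bring the installer itself up to date.
    if installed_version < LATEST_INSTALLER:
        installed_version = update_installer(installed_version)
    # Step 2: install prerequisites before the main payload.
    for prereq in ["runtime", "database driver"]:
        print(f"bootstrapper: installing prerequisite: {prereq}")
    # Step 3: hand off to the real installation.
    print("bootstrapper: starting main installation")

run_bootstrapper()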
A bootstrapping node, also known as a rendezvous host, is a node in an overlay network that provides initial configuration information to newly joining nodes so that they may successfully join the overlay network.
A type of computer simulation called discrete-event simulation represents the operation of a system as a chronological sequence of events. A technique called bootstrapping the simulation model is used, which bootstraps initial data points using a pseudorandom number generator to schedule an initial set of pending events, which schedule additional events, and with time, the distribution of event times approaches its steady state—the bootstrapping behavior is overwhelmed by steady-state behavior.
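A compact sketch of bootstrapping such a simulation in Python, assuming arbitrary exponentially distributed inter-event times; only the seeding of the initial pending events is the "bootstrap" step, after which each event schedules its successor.

import heapq
import random

random.seed(42)
event_queue = []                     # min-heap of (time, event label)

# Bootstrap: use a pseudorandom number generator to schedule initial events.
for i in range(5):
    heapq.heappush(event_queue, (random.expovariate(1.0), f"init-{i}"))

processed = 0
while event_queue and processed < 50:
    t, event = heapq.heappop(event_queue)
    processed += 1
    # Each event schedules a follow-up; over time the distribution of
    # event times approaches its steady state.
    heapq.heappush(event_queue, (t + random.expovariate(1.0), f"next-{processed}"))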
Bootstrapping is a technique used to iteratively improve a classifier's performance. Typically, multiple classifiers will be trained on different sets of the input data, and on prediction tasks the output of the different classifiers will be combined.
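An illustrative toy version of this idea, with deliberately trivial data and a one-parameter threshold "classifier": each classifier is trained on a different bootstrap resample of the input data (drawn with replacement), and predictions are combined by majority vote.

import random

random.seed(0)
data = [(x / 10.0, 1 if x >= 5 else 0) for x in range(10)]   # (feature, label)

def train_threshold(sample):
    # Pick the decision threshold that best separates the resampled labels.
    best_t, best_acc = 0.0, -1.0
    for t in [x / 10.0 for x in range(11)]:
        acc = sum((x >= t) == bool(y) for x, y in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Train each classifier on its own bootstrap sample of the data.
thresholds = [train_threshold(random.choices(data, k=len(data))) for _ in range(9)]

def predict(x):
    votes = sum(x >= t for t in thresholds)   # combine by majority vote
    return 1 if votes > len(thresholds) / 2 else 0

print(predict(0.8), predict(0.2))   # typically prints: 1 0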
Seed AI is a hypothesized type of artificial intelligence capable of recursive self-improvement. Having improved itself, it would become better at improving itself, potentially leading to an exponential increase in intelligence. No such AI is known to exist, but it remains an active field of research. Seed AI is a significant part of some theories about the technological singularity: proponents believe that the development of seed AI will rapidly yield ever-smarter intelligence (via bootstrapping) and thus a new era.
Bootstrapping is a resampling technique used to obtain estimates of summary statistics.
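For example, here is a minimal sketch of bootstrapping the standard error of a sample mean; the data values are made up, and 1000 resamples is an arbitrary choice.

import random
import statistics

random.seed(1)
sample = [2.1, 2.5, 2.9, 3.2, 3.8, 4.1, 4.4, 5.0]

boot_means = []
for _ in range(1000):
    resample = random.choices(sample, k=len(sample))   # draw with replacement
    boot_means.append(statistics.mean(resample))

print("bootstrap estimate of the mean:", statistics.mean(boot_means))
print("bootstrap standard error:", statistics.stdev(boot_means))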
Bootstrapping in business means starting a business without external help or working capital. Entrepreneurs in the startup development phase of their company survive through internal cash flow and are very cautious with their expenses. Generally at the start of a venture, a small amount of money will be set aside for the bootstrap process. Bootstrapping can also be a supplement for econometric models. Bootstrapping was also expanded upon in the book Bootstrap Business by Richard Christiansen, the Harvard Business Review article The Art of Bootstrapping and the follow-up book The Origin and Evolution of New Businesses by Amar Bhide. Seth Godin has also written an entire guide, The Bootstrapper's Bible, on how to bootstrap properly.
Experts have noted that several common stages exist for bootstrapping a business venture:
There are many types of companies that are eligible for bootstrapping. Early-stage companies that do not necessarily require large influxes of capital (particularly from outside sources) qualify; bootstrapping specifically allows such a business flexibility and time to grow. Companies run by serial entrepreneurs could also reap the benefits of bootstrapping. These are organizations in which the founder has money from the sale of a previous company that they can use to invest.
There are different methods of bootstrapping. Future business owners aspiring to use bootstrapping as a way of launching their product or service often use the following methods:
Bootstrapping is often considered successful. According to statistics provided by Fundera, approximately 77% of small businesses rely on some sort of personal investment and/or savings to fund their startup ventures. The average small business venture requires approximately $10,000 in startup capital, with a third of small businesses launching with less than $5,000 bootstrapped.
Based on startup data presented by Entrepreneur.com, bootstrapping is more commonly used than other methods of funding: “0.91% of startups are funded by angel investors, while 0.05% are funded by VCs. In contrast, 57 percent of startups are funded by personal loans and credit, while 38 percent receive funding from family and friends.”
Some examples of successful entrepreneurs that have used bootstrapping in order to finance their businesses include serial entrepreneur Mark Cuban. He has publicly endorsed bootstrapping claiming that “If you can start on your own … do it by [yourself] without having to go out and raise money.” When asked why he believed this approach was most necessary, he replied, “I think the biggest mistake people make is once they have an idea and the goal of starting a business, they think they have to raise money. And once you raise money, that’s not an accomplishment, that’s an obligation” because “now, you’re reporting to whoever you raised money from.”
Bootstrapped companies such as Apple Inc. (AAPL), eBay Inc. (EBAY) and Coca-Cola Co. have also claimed that they attribute some of their success to the fact that this method of funding enabled them to remain highly focused on a specific array of profitable products.
There are advantages to bootstrapping. Entrepreneurs are in full control of the finances of the business and can maintain control over the organization's inflows and outflows of cash. Equity is retained by the owner and can be redistributed at their discretion. There is less liability or opportunity to accumulate debt from other financial sources. Bootstrapping often leads to entrepreneurs operating their businesses with the freedom to do as they see fit, in a similar fashion to sole proprietors. This is an effective method if the business owner's goal is to be able to fund future investments back into the business. Beyond the direct stakeholders of the business, entrepreneurs do not have to answer to a board of investors, which could pressure them into making decisions that benefit the investors.
There are also drawbacks to bootstrapping. Personal liability is one: credit lines usually must be established in the owner's name, which is the downfall of some companies due to debt accumulated from various credit cards and similar sources. All financial risks pertaining to the business fall on the owner's shoulders. The owner is forced to put either their own or their family's or friends' investments in jeopardy in the event of the business failing. Possible legal issues are another drawback. There have been some cases in which entrepreneurs have been sued by family or even close friends for the improper use of their bootstrapped money. Because financing is limited to what the owner or company makes, this can create a ceiling that leaves little room for growth. Without the aid of occasional external sources of funding, entrepreneurs can find themselves unable to promote employees or even expand their businesses. A lack of money can also lead to a reduction in the quality of the service or product meant to be provided. Certain investors tend to be well respected within specific industries, and running a company without their backing or support can cause pivotal opportunities to be lost. Personal stress on the entrepreneur or business owner in question is also common: tackling funding alone has often led to stressful times for such individuals.
Startups can grow by reinvesting profits in their own growth if bootstrapping costs are low and the return on investment is high. This financing approach allows owners to maintain control of their business and forces them to spend with discipline. In addition, bootstrapping allows startups to focus on customers rather than investors, thereby increasing the likelihood of creating a profitable business. This leaves startups with a better exit strategy with greater returns.
Leveraged buyouts, or highly leveraged or "bootstrap" transactions, occur when an investor acquires a controlling interest in a company's equity and where a significant percentage of the purchase price is financed through leverage, i.e. borrowing by the acquired company.
Bootstrapping in finance refers to the method of constructing the spot rate (zero-coupon yield) curve from the market prices of coupon-bearing instruments, solving for one maturity at a time. Operation Bootstrap (Operación Manos a la Obra) refers to the ambitious projects that industrialized Puerto Rico in the mid-20th century.
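As an illustration of the finance usage, the following sketch bootstraps spot rates from the par yields of annual-pay bonds; the yields are invented, not market data. Each maturity's spot rate is solved using the spot rates already derived for shorter maturities, since a par bond prices to 1.

par_yields = {1: 0.02, 2: 0.025, 3: 0.03}   # maturity (years) -> par yield, illustrative

spot = {}
for n in sorted(par_yields):
    c = par_yields[n]
    # Discount the known coupons with already-solved spot rates, then solve
    # for the single unknown rate at maturity n from 1 = PV(coupons) + PV(final).
    pv_coupons = sum(c / (1 + spot[t]) ** t for t in range(1, n))
    spot[n] = ((1 + c) / (1 - pv_coupons)) ** (1.0 / n) - 1

print(spot)   # spot rates sit slightly above par yields on an upward-sloping curve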
Richard Dawkins in his book River Out of Eden used the computer bootstrapping concept to explain how biological cells differentiate: "Different cells receive different combinations of chemicals, which switch on different combinations of genes, and some genes work to switch other genes on or off. And so the bootstrapping continues, until we have the full repertoire of different kinds of cells."
Bootstrapping analysis gives a way to judge the strength of support for clades on phylogenetic trees. A number is written next to each node, reflecting the percentage of bootstrap trees which also resolve the clade at the endpoints of that branch.
Bootstrapping is a rule preventing the admission of hearsay evidence in conspiracy cases.
Bootstrapping is a theory of language acquisition.
Bootstrapping is using very general consistency criteria to determine the form of a quantum theory from some assumptions on the spectrum of particles or operators.
In tokamak fusion devices, bootstrapping refers to the process in which a bootstrap current is self-generated by the plasma, which reduces or eliminates the need for an external current driver. Maximising the bootstrap current is a major goal of advanced tokamak designs.
Bootstrapping in inertial confinement fusion refers to the alpha particles produced in the fusion reaction providing further heating to the plasma. This heating leads to ignition and an overall energy gain.
Bootstrapping is a form of positive feedback in analog circuit design.
An electric power grid is almost never brought down intentionally. Generators and power stations are started and shut down as necessary. A typical power station requires power to start up before it is able to generate power. This power is obtained from the grid, so if the entire grid is down, these stations cannot be started.
Therefore, to get a grid started, there must be at least a small number of power stations that can start entirely on their own. A black start is the process of restoring a power station to operation without relying on external power. In the absence of grid power, one or more black starts are used to bootstrap the grid.
A Bootstrapping Server Function (BSF) is an intermediary element in cellular networks which provides application-independent functions for mutual authentication of user equipment and servers unknown to each other, and for 'bootstrapping' the exchange of secret session keys afterwards. Here the term 'bootstrapping' refers to first building a security relation with a previously unknown device, and then allowing security elements (keys) to be installed in the device and the BSF afterwards.
A nuclear power plant always needs a way to remove decay heat, which is usually done with electrical cooling pumps. But in the rare case of a complete loss of electrical power, this can still be achieved by bootstrapping a turbine generator. As steam builds up in the steam generator, it can be used to power the turbine generator (initially with no oil pumps, circulating-water pumps, or condensate pumps). Once the turbine generator is producing electricity, the auxiliary pumps can be powered on, and the reactor cooling pumps can be run momentarily. Eventually the steam pressure becomes insufficient to power the turbine generator, and the process can be shut down in reverse order. The process can be repeated until no longer needed. This can cause great damage to the turbine generator, but more importantly, it saves the nuclear reactor.
{
"paragraph_id": 0,
"text": "In general, bootstrapping usually refers to a self-starting process that is supposed to continue or grow without external input.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Tall boots may have a tab, loop or handle at the top known as a bootstrap, allowing one to use fingers or a boot hook tool to help pull the boots on. The saying \"to pull oneself up by one's bootstraps\" was already in use during the 19th century as an example of an impossible task. The idiom dates at least to 1834, when it appeared in the Workingman's Advocate: \"It is conjectured that Mr. Murphee will now be enabled to hand himself over the Cumberland river or a barn yard fence by the straps of his boots.\" In 1860 it appeared in a comment on philosophy of mind: \"The attempt of the mind to analyze itself [is] an effort analogous to one who would lift himself by his own bootstraps.\" Bootstrap as a metaphor, meaning to better oneself by one's own unaided efforts, was in use in 1922. This metaphor spawned additional metaphors for a series of self-sustaining processes that proceed without external help.",
"title": "Etymology"
},
{
"paragraph_id": 2,
"text": "The term is sometimes attributed to a story in Rudolf Erich Raspe's The Surprising Adventures of Baron Munchausen, but in that story Baron Munchausen pulls himself (and his horse) out of a swamp by his hair (specifically, his pigtail), not by his bootstraps – and no explicit reference to bootstraps has been found elsewhere in the various versions of the Munchausen tales.",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "Originally meant to attempt something ludicrously far-fetched or even impossible, the phrase \"Pull yourself up by your bootstraps!\" has since been utilized as a narrative for economic mobility or a cure for depression. That idea is believed to have been popularized by American writer Horatio Alger in the 19th century. To request that someone \"bootstrap\" is to suggest that they might overcome great difficulty by sheer force of will.",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "Critics have observed that the phrase is used to portray unfair situations as far more meritocratic than they really are. A 2009 study found that 77% of Americans believe that wealth is often the result of hard work. Various studies have found that the main predictor of future wealth is not IQ or hard work, but initial wealth.",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "In computer technology, the term bootstrapping refers to language compilers that are able to be coded in the same language. (For example, a C compiler is now written in the C language. Once the basic compiler is written, improvements can be iteratively made, thus pulling the language up by its bootstraps). Also, booting usually refers to the process of loading the basic software into the memory of a computer after power-on or general reset, the kernel will load the operating system which will then take care of loading other device drivers and software as needed.",
"title": "Applications"
},
{
"paragraph_id": 6,
"text": "Booting is the process of starting a computer, specifically with regard to starting its software. The process involves a chain of stages, in which at each stage, a relatively small and simple program loads and then executes the larger, more complicated program of the next stage. It is in this sense that the computer \"pulls itself up by its bootstraps\"; i.e., it improves itself by its own efforts. Booting is a chain of events that starts with execution of hardware-based procedures and may then hand-off to firmware and software which is loaded into main memory. Booting often involves processes such as performing self-tests, loading configuration settings, loading a BIOS, resident monitors, a hypervisor, an operating system, or utility software.",
"title": "Applications"
},
{
"paragraph_id": 7,
"text": "The computer term bootstrap began as a metaphor in the 1950s. In computers, pressing a bootstrap button caused a hardwired program to read a bootstrap program from an input unit. The computer would then execute the bootstrap program, which caused it to read more program instructions. It became a self-sustaining process that proceeded without external help from manually entered instructions. As a computing term, bootstrap has been used since at least 1953.",
"title": "Applications"
},
{
"paragraph_id": 8,
"text": "Bootstrapping can also refer to the development of successively more complex, faster programming environments. The simplest environment will be, perhaps, a very basic text editor (e.g., ed) and an assembler program. Using these tools, one can write a more complex text editor, and a simple compiler for a higher-level language and so on, until one can have a graphical IDE and an extremely high-level programming language.",
"title": "Applications"
},
{
"paragraph_id": 9,
"text": "Historically, bootstrapping also refers to an early technique for computer program development on new hardware. The technique described in this paragraph has been replaced by the use of a cross compiler executed by a pre-existing computer. Bootstrapping in program development began during the 1950s when each program was constructed on paper in decimal code or in binary code, bit by bit (1s and 0s), because there was no high-level computer language, no compiler, no assembler, and no linker. A tiny assembler program was hand-coded for a new computer (for example the IBM 650) which converted a few instructions into binary or decimal code: A1. This simple assembler program was then rewritten in its just-defined assembly language but with extensions that would enable the use of some additional mnemonics for more complex operation codes. The enhanced assembler's source program was then assembled by its predecessor's executable (A1) into binary or decimal code to give A2, and the cycle repeated (now with those enhancements available), until the entire instruction set was coded, branch addresses were automatically calculated, and other conveniences (such as conditional assembly, macros, optimisations, etc.) established. This was how the early Symbolic Optimal Assembly Program (SOAP) was developed. Compilers, linkers, loaders, and utilities were then coded in assembly language, further continuing the bootstrapping process of developing complex software systems by using simpler software.",
"title": "Applications"
},
{
"paragraph_id": 10,
"text": "The term was also championed by Doug Engelbart to refer to his belief that organizations could better evolve by improving the process they use for improvement (thus obtaining a compounding effect over time). His SRI team that developed the NLS hypertext system applied this strategy by using the tool they had developed to improve the tool.",
"title": "Applications"
},
{
"paragraph_id": 11,
"text": "The development of compilers for new programming languages first developed in an existing language but then rewritten in the new language and compiled by itself, is another example of the bootstrapping notion.",
"title": "Applications"
},
{
"paragraph_id": 12,
"text": "During the installation of computer programs, it is sometimes necessary to update the installer or package manager itself. The common pattern for this is to use a small executable bootstrapper file (e.g., setup.exe) which updates the installer and starts the real installation after the update. Sometimes the bootstrapper also installs other prerequisites for the software during the bootstrapping process.",
"title": "Applications"
},
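A minimal sketch of such a bootstrapper in Python; the download URL and file names are hypothetical placeholders, not a real product's endpoints.

    import subprocess
    import urllib.request

    INSTALLER_URL = "https://example.com/installer-latest"  # hypothetical
    INSTALLER_PATH = "./installer_latest"

    def bootstrap():
        # Step 1: fetch the up-to-date installer (and, in a fuller version,
        # any other prerequisites the software needs).
        urllib.request.urlretrieve(INSTALLER_URL, INSTALLER_PATH)
        # Step 2: hand control to the real installer and let it finish.
        subprocess.run([INSTALLER_PATH, "--install"], check=True)

    if __name__ == "__main__":
        bootstrap()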
{
"paragraph_id": 13,
"text": "A bootstrapping node, also known as a rendezvous host, is a node in an overlay network that provides initial configuration information to newly joining nodes so that they may successfully join the overlay network.",
"title": "Applications"
},
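A minimal sketch of a bootstrapping node: newcomers contact it, receive a sample of existing peers as their initial configuration, and are then remembered for future joiners. The addresses and the in-process "protocol" are invented for illustration.

    import random

    class BootstrapNode:
        def __init__(self):
            self.known_peers = []

        def register(self, new_peer):
            # Hand the newcomer a sample of existing peers, then remember it.
            sample = random.sample(self.known_peers,
                                   k=min(3, len(self.known_peers)))
            self.known_peers.append(new_peer)
            return sample

    bootstrap = BootstrapNode()
    for peer in ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]:
        neighbours = bootstrap.register(peer)
        print(peer, "joins with initial neighbours", neighbours)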
{
"paragraph_id": 14,
"text": "A type of computer simulation called discrete-event simulation represents the operation of a system as a chronological sequence of events. A technique called bootstrapping the simulation model is used, which bootstraps initial data points using a pseudorandom number generator to schedule an initial set of pending events, which schedule additional events, and with time, the distribution of event times approaches its steady state—the bootstrapping behavior is overwhelmed by steady-state behavior.",
"title": "Applications"
},
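A minimal sketch of bootstrapping a discrete-event simulation in Python: a pseudorandom number generator seeds an initial batch of pending events, and each handled event schedules a successor, so the run becomes self-sustaining.

    import heapq
    import random

    random.seed(42)
    clock, events = 0.0, []

    # Bootstrap: schedule an initial set of pending arrival events.
    for _ in range(5):
        heapq.heappush(events, clock + random.expovariate(1.0))

    for step in range(20):
        clock = heapq.heappop(events)  # advance to the next event
        # Each handled event schedules a successor event.
        heapq.heappush(events, clock + random.expovariate(1.0))
        print(f"event {step:2d} at t = {clock:.3f}")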
{
"paragraph_id": 15,
"text": "Bootstrapping is a technique used to iteratively improve a classifier's performance. Typically, multiple classifiers will be trained on different sets of the input data, and on prediction tasks the output of the different classifiers will be combined.",
"title": "Applications"
},
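A minimal sketch of the idea, training one-dimensional threshold "stumps" on bootstrap samples of the data and combining their predictions by majority vote; the tiny dataset and the stump model are chosen only to keep the example short.

    import random

    random.seed(0)
    data = [(x / 10, int(x / 10 > 0.45)) for x in range(10)]  # (feature, label)

    def train_stump(sample):
        # Pick the threshold that best separates the bootstrap sample.
        best = min((sum((x > t) != y for x, y in sample), t)
                   for t in [p[0] for p in sample])
        return best[1]

    stumps = []
    for _ in range(25):
        sample = [random.choice(data) for _ in data]  # bootstrap sample
        stumps.append(train_stump(sample))

    def predict(x):
        votes = sum(x > t for t in stumps)  # combine the classifiers
        return int(votes > len(stumps) / 2)

    print([predict(x / 10) for x in range(10)])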
{
"paragraph_id": 16,
"text": "Seed AI is a hypothesized type of artificial intelligence capable of recursive self-improvement. Having improved itself, it would become better at improving itself, potentially leading to an exponential increase in intelligence. No such AI is known to exist, but it remains an active field of research. Seed AI is a significant part of some theories about the technological singularity: proponents believe that the development of seed AI will rapidly yield ever-smarter intelligence (via bootstrapping) and thus a new era.",
"title": "Applications"
},
{
"paragraph_id": 17,
"text": "Bootstrapping is a resampling technique used to obtain estimates of summary statistics.",
"title": "Applications"
},
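For instance, here is a classic bootstrap estimate of a 95% confidence interval for a sample mean, using made-up data:

    import random
    import statistics

    random.seed(1)
    data = [2.1, 2.4, 2.8, 3.0, 3.1, 3.5, 3.9, 4.2]

    means = []
    for _ in range(10_000):
        resample = [random.choice(data) for _ in data]  # draw with replacement
        means.append(statistics.mean(resample))

    means.sort()
    low, high = means[249], means[9_749]  # 2.5th and 97.5th percentiles
    print(f"mean = {statistics.mean(data):.2f}, 95% CI ~ ({low:.2f}, {high:.2f})")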
{
"paragraph_id": 18,
"text": "Bootstrapping in business means starting a business without external help or working capital. Entrepreneurs in the startup development phase of their company survive through internal cash flow and are very cautious with their expenses. Generally at the start of a venture, a small amount of money will be set aside for the bootstrap process. Bootstrapping can also be a supplement for econometric models. Bootstrapping was also expanded upon in the book Bootstrap Business by Richard Christiansen, the Harvard Business Review article The Art of Bootstrapping and the follow-up book The Origin and Evolution of New Businesses by Amar Bhide. There is also an entire bible written on how to properly bootstrap by Seth Godin.",
"title": "Applications"
},
{
"paragraph_id": 19,
"text": "Experts have noted that several common stages exist for bootstrapping a business venture:",
"title": "Applications"
},
{
"paragraph_id": 20,
"text": "There are many types of companies that are eligible for bootstrapping. Early-stage companies that do not necessarily require large influxes of capital (particularly from outside sources) qualify. This would specifically allow for flexibility for the business and time to grow. Serial entrepreneur companies could also possibly reap the benefits of bootstrapping. These are organizations whereby the founder has money from the sale of a previous companies they can use to invest.",
"title": "Applications"
},
{
"paragraph_id": 21,
"text": "There are different methods of bootstrapping. Future business owners aspiring to use bootstrapping as way of launching their product or service often use the following methods:",
"title": "Applications"
},
{
"paragraph_id": 22,
"text": "Bootstrapping is often considered successful. When taking into account statistics provided by Fundera, approximately 77% of small business rely on some sort of personal investment and or savings in order to fund their startup ventures. The average small business venture requires approximately $10,000 in startup capital with a third of small business launching with less than $5,000 bootstrapped.",
"title": "Applications"
},
{
"paragraph_id": 23,
"text": "Based on startup data presented by Entrepreneur.com, in comparison other methods of funding, bootstrapping is more commonly used than others. “0.91% of startups are funded by angel investors, while 0.05% are funded by VCs. In contrast, 57 percent of startups are funded by personal loans and credit, while 38 percent receive funding from family and friends.”",
"title": "Applications"
},
{
"paragraph_id": 24,
"text": "Some examples of successful entrepreneurs that have used bootstrapping in order to finance their businesses include serial entrepreneur Mark Cuban. He has publicly endorsed bootstrapping claiming that “If you can start on your own … do it by [yourself] without having to go out and raise money.” When asked why he believed this approach was most necessary, he replied, “I think the biggest mistake people make is once they have an idea and the goal of starting a business, they think they have to raise money. And once you raise money, that’s not an accomplishment, that’s an obligation” because “now, you’re reporting to whoever you raised money from.”",
"title": "Applications"
},
{
"paragraph_id": 25,
"text": "Bootstrapped companies such as Apple Inc. (APPL), eBay Inc. (EBAY) and Coca-Cola Co. have also claimed that they attribute some of their success to the fact that this method of funding enables them to remain highly focused on a specific array of profitable product.",
"title": "Applications"
},
{
"paragraph_id": 26,
"text": "There are advantages to bootstrapping. Entrepreneurs are in full control over the finances of the business and can maintain control over the organization's inflows and outflows of cash. Equity is retained by the owner and can be redistributed at their discretion. There is less liability or opportunity to accumulate debt from other financial sources. Bootstrapping often leads to entrepreneurs operating their businesses with freedom to do as they see fit; in a similar fashion to sole proprietors. This is an effective method if the business owner's goal is to be able to fund future investments back into the business. Besides the direct stakeholders of the business, entrepreneurs do not have to answer to a board of investors which could possibly pressure them into making certain decisions beneficial to them.",
"title": "Applications"
},
{
"paragraph_id": 27,
"text": "There are also drawbacks of bootstrapping. Personal liability is one. Credit lines usually must be established in owner's name which is the downfall of some companies due to debt being accumulated from various credit cards, etc. All financial risks pertaining to the business in question all fall on the owner's shoulders. The owner is forced to put either their own or their family/friend's investments in jeopardy in the event of the business failing. Possible legal issues are another drawback. There have been some cases in which entrepreneurs have been sued by family or even close friends for the improper use of their bootstrapped money. Because financing is limited to what the owner or company makes, this can create a ceiling which prohibits room for growth. Without the aid of occasional external sources of funding, entrepreneurs can find themselves unable to promote employees or even expand their businesses. A lack of money could possibly lead to a reduction of the quality of the service or product meant to be provided. Certain investors tend to be well-respected within specific industries and running a company without their backing or support could cause pivotal opportunities to be lost. Personal stress to entrepreneur or business owner in question is common. Tackling funding by themselves has often led to stressful times for certain individuals.",
"title": "Applications"
},
{
"paragraph_id": 28,
"text": "Startups can grow by reinvesting profits in its own growth if bootstrapping costs are low and return on investment is high. This financing approach allows owners to maintain control of their business and forces them to spend with discipline. In addition, bootstrapping allows startups to focus on customers rather than investors, thereby increasing the likelihood of creating a profitable business. This leaves startups with a better exit strategy with greater returns.",
"title": "Applications"
},
{
"paragraph_id": 29,
"text": "Leveraged buyouts, or highly leveraged or \"bootstrap\" transactions, occur when an investor acquires a controlling interest in a company's equity and where a significant percentage of the purchase price is financed through leverage, i.e. borrowing by the acquired company.",
"title": "Applications"
},
{
"paragraph_id": 30,
"text": "Bootstrapping in finance refers to the method to create the spot rate curve. Operation Bootstrap (Operación Manos a la Obra) refers to the ambitious projects that industrialized Puerto Rico in the mid-20th century.",
"title": "Applications"
},
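A worked sketch of the finance sense: given made-up par yields for bonds with annual coupons, each spot (zero) rate is solved in maturity order from the rates already found.

    # Bootstrapping a spot rate curve from hypothetical par bond yields.
    par_yields = {1: 0.020, 2: 0.025, 3: 0.030}  # maturity (years) -> par yield

    spot = {}
    for n, c in sorted(par_yields.items()):
        # A par bond prices at 100: discount the known coupons with the spot
        # rates already solved, then solve for the new spot rate at maturity n.
        pv_coupons = sum(100 * c / (1 + spot[t]) ** t for t in range(1, n))
        spot[n] = ((100 * c + 100) / (100 - pv_coupons)) ** (1 / n) - 1

    for n, r in spot.items():
        print(f"{n}-year spot rate: {r:.4%}")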
{
"paragraph_id": 31,
"text": "Richard Dawkins in his book River Out of Eden used the computer bootstrapping concept to explain how biological cells differentiate: \"Different cells receive different combinations of chemicals, which switch on different combinations of genes, and some genes work to switch other genes on or off. And so the bootstrapping continues, until we have the full repertoire of different kinds of cells.\"",
"title": "Applications"
},
{
"paragraph_id": 32,
"text": "Bootstrapping analysis gives a way to judge the strength of support for clades on phylogenetic trees. A number is written by a node, which reflects the percentage of bootstrap trees which also resolve the clade at the endpoints of that branch.",
"title": "Applications"
},
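A minimal sketch of the procedure: resample alignment columns with replacement, rebuild a toy "tree" (here, just the closest pair of sequences) from each replicate, and report how often a grouping of interest recurs. The sequences and the pairing rule are invented stand-ins for a real tree-building method.

    import random

    random.seed(3)
    alignment = {"A": "ACGTACGT", "B": "ACGTACGA", "C": "TTGTACCA"}

    def closest_pair(columns):
        # Toy tree builder: group the two sequences with the fewest
        # mismatches over the resampled columns.
        names = list(alignment)
        def dist(x, y):
            return sum(alignment[x][i] != alignment[y][i] for i in columns)
        pairs = [(dist(x, y), (x, y)) for i, x in enumerate(names)
                 for y in names[i + 1:]]
        return min(pairs)[1]

    n_sites = len(alignment["A"])
    hits = 0
    for _ in range(1000):
        cols = [random.randrange(n_sites) for _ in range(n_sites)]  # resample
        hits += closest_pair(cols) == ("A", "B")

    print(f"bootstrap support for clade (A,B): {hits / 10:.1f}%")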
{
"paragraph_id": 33,
"text": "Bootstrapping is a rule preventing the admission of hearsay evidence in conspiracy cases.",
"title": "Applications"
},
{
"paragraph_id": 34,
"text": "Bootstrapping is a theory of language acquisition.",
"title": "Applications"
},
{
"paragraph_id": 35,
"text": "Bootstrapping is using very general consistency criteria to determine the form of a quantum theory from some assumptions on the spectrum of particles or operators.",
"title": "Applications"
},
{
"paragraph_id": 36,
"text": "In tokamak fusion devices, bootstrapping refers to the process in which a bootstrap current is self-generated by the plasma, which reduces or eliminates the need for an external current driver. Maximising the bootstrap current is a major goal of advanced tokamak designs.",
"title": "Applications"
},
{
"paragraph_id": 37,
"text": "Bootstrapping in inertial confinement fusion refers to the alpha particles produced in the fusion reaction providing further heating to the plasma. This heating leads to ignition and an overall energy gain.",
"title": "Applications"
},
{
"paragraph_id": 38,
"text": "Bootstrapping is a form of positive feedback in analog circuit design.",
"title": "Applications"
},
{
"paragraph_id": 39,
"text": "An electric power grid is almost never brought down intentionally. Generators and power stations are started and shut down as necessary. A typical power station requires power for start up prior to being able to generate power. This power is obtained from the grid, so if the entire grid is down these stations cannot be started.",
"title": "Applications"
},
{
"paragraph_id": 40,
"text": "Therefore, to get a grid started, there must be at least a small number of power stations that can start entirely on their own. A black start is the process of restoring a power station to operation without relying on external power. In the absence of grid power, one or more black starts are used to bootstrap the grid.",
"title": "Applications"
},
{
"paragraph_id": 41,
"text": "A Bootstrapping Server Function (BSF) is an intermediary element in cellular networks which provides application independent functions for mutual authentication of user equipment and servers unknown to each other and for 'bootstrapping' the exchange of secret session keys afterwards. The term 'bootstrapping' is related to building a security relation with a previously unknown device first and to allow installing security elements (keys) in the device and the BSF afterwards.",
"title": "Applications"
},
{
"paragraph_id": 42,
"text": "A nuclear power plant always needs to have a way to remove decay heat, which is usually done with electrical cooling pumps. But in the rare case of a complete loss of electrical power, this can still be achieved by booting a turbine generator. As steam builds up in the steam generator, it can be used to power the turbine generator (initially with no oil pumps, circ water pumps, or condensation pumps). Once the turbine generator is producing electricity, the auxiliary pumps can be powered on, and the reactor cooling pumps can be run momentarily. Eventually the steam pressure will become insufficient to power the turbine generator, and the process can be shut down in reverse order. The process can be repeated until no longer needed. This can cause great damage to the turbine generator, but more importantly, it saves the nuclear reactor.",
"title": "Applications"
}
] | In general, bootstrapping usually refers to a self-starting process that is supposed to continue or grow without external input. | 2001-09-19T14:13:16Z | 2023-12-02T16:56:06Z | [
"Template:Cite mailing list",
"Template:Cite web",
"Template:YouTube",
"Template:Snd",
"Template:Main",
"Template:Annotated link",
"Template:Cite news",
"Template:ISBN",
"Template:Wiktionary",
"Template:Cite journal",
"Template:Short description",
"Template:Other uses",
"Template:Linktext",
"Template:Confusing",
"Template:Reflist",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Bootstrapping |
4,213 | Baltic languages | The Baltic languages are a branch of the Indo-European language family spoken natively or as a second language by a population of about 6.5–7.0 million people mainly in areas extending east and southeast of the Baltic Sea in Europe. Together with the Slavic languages, they form the Balto-Slavic branch of the Indo-European family.
Scholars usually regard them as a single subgroup divided into two branches: West Baltic (containing only extinct languages) and East Baltic (containing at least two living languages, Lithuanian, Latvian, and by some counts including Latgalian and Samogitian as separate languages rather than dialects of those two). The range of the East Baltic linguistic influence once possibly reached as far as the Ural Mountains, but this hypothesis has been questioned.
Old Prussian, a Western Baltic language that became extinct in the 18th century, has possibly retained the greatest number of properties from Proto-Baltic.
Although related, Lithuanian, Latvian, and particularly Old Prussian have lexicons that differ substantially from one another, and so the languages are not mutually intelligible. Historically, relatively low interaction between these neighbouring languages led to a gradual erosion of mutual intelligibility; the development of their respective linguistic innovations absent from shared Proto-Baltic, the substantial number of false friends, and the varied uses and sources of loanwords from the surrounding languages are considered the major reasons for poor mutual intelligibility today.
Within Indo-European, the Baltic languages are generally classified as forming a single family with two branches: Eastern and Western Baltic. But these two branches are sometimes classified as independent branches of Balto-Slavic itself.
It is believed that the Baltic languages are among the most conservative of the currently remaining Indo-European languages, despite their late attestation.
Although the Baltic Aesti tribe was mentioned by ancient historians such as Tacitus as early as 98 CE, the first attestation of a Baltic language was c. 1369, in a Basel epigram of two lines written in Old Prussian. Lithuanian was first attested in a printed book, a Catechism by Martynas Mažvydas published in 1547. Latvian appeared in a printed Catechism in 1585.
One reason for the late attestation is that the Baltic peoples resisted Christianization longer than any other Europeans, which delayed the introduction of writing and isolated their languages from outside influence.
With the establishment of a German state in Prussia, and the mass influx of Germanic (and to a lesser degree Slavic-speaking) settlers, the Prussians began to be assimilated, and by the end of the 17th century, the Prussian language had become extinct.
After the Partitions of Poland, most of the Baltic lands were under the rule of the Russian Empire, where the native languages or alphabets were sometimes prohibited from being written down or used publicly in a Russification effort (see Lithuanian press ban for the ban in force from 1864 to 1904).
Speakers of modern Baltic languages are generally concentrated within the borders of Lithuania and Latvia, and in emigrant communities in the United States, Canada, Australia and the countries within the former borders of the Soviet Union.
Historically the languages were spoken over a larger area: west to the mouth of the Vistula river in present-day Poland, at least as far east as the Dniepr river in present-day Belarus, perhaps even to Moscow, and perhaps as far south as Kyiv. Key evidence of Baltic language presence in these regions is found in hydronyms (names of bodies of water) that are characteristically Baltic. The use of hydronyms is generally accepted to determine the extent of a culture's influence, but not the date of such influence.
The eventual expansion of the use of Slavic languages in the south and east, and Germanic languages in the west, reduced the geographic distribution of Baltic languages to a fraction of the area that they formerly covered. The Russian geneticist Oleg Balanovsky speculated that there is a predominance of the assimilated pre-Slavic substrate in the genetics of East and West Slavic populations; according to him, the common genetic structure which distinguishes East Slavs and Balts from other populations may suggest that the pre-Slavic substrate of the East Slavs consisted most significantly of Baltic speakers, who, according to archaeological references he cites, predated the Slavs in the cultures of the Eurasian steppe.
Though Estonia is geopolitically included among the Baltic states due to its location, Estonian is a Finnic language and is not related to the Baltic languages, which are Indo-European.
The Mordvinic languages, spoken mainly along western tributaries of the Volga, show several dozen loanwords from one or more Baltic languages. These may have been mediated by contacts with the Eastern Balts along the river Oka. Regarding the same geographical area, Asko Parpola, in a 2013 article, suggested that the Baltic presence there, dated to c. 200–600 CE, is due to an "elite superstratum". However, the linguist Petri Kallio argued that the Volga-Oka region is a secondary Baltic-speaking area, settled by expansion from East Baltic, citing the large number of Baltic loanwords in Finnic and Saami.
Finnish scholars also indicate that Latvian had extensive contacts with Livonian and, to a lesser extent, with Estonian and South Estonian. This contact accounts for the number of Finnic hydronyms in Lithuania and Latvia, which increases in a northward direction.
Parpola, in the same article, supposed the existence of a Baltic substratum for Finnic, in Estonia and coastal Finland. In the same vein, Kallio argues for the existence of a lost "North Baltic language" that would account for loanwords during the evolution of the Finnic branch.
The Baltic languages are of particular interest to linguists because they retain many archaic features, which are thought to have been present in the early stages of the Proto-Indo-European language. However, linguists have had a hard time establishing the precise relationship of the Baltic languages to other languages in the Indo-European family. Several of the extinct Baltic languages have a limited or nonexistent written record, their existence being known only from the records of ancient historians and personal or place names. All of the languages in the Baltic group (including the living ones) were first written down relatively late in their probable existence as distinct languages. These two factors combined with others have obscured the history of the Baltic languages, leading to a number of theories regarding their position in the Indo-European family.
The Baltic languages show a close relationship with the Slavic languages, and are grouped with them in a Balto-Slavic family by most scholars. This family is considered to have developed from a common ancestor, Proto-Balto-Slavic. Later on, several lexical, phonological and morphological dialectisms developed, separating the various Balto-Slavic languages from each other. Although it is generally agreed that the Slavic languages developed from a single more-or-less unified dialect (Proto-Slavic) that split off from common Balto-Slavic, there is more disagreement about the relationship between the Baltic languages.
The traditional view is that the Balto-Slavic languages split into two branches, Baltic and Slavic, with each branch developing as a single common language (Proto-Baltic and Proto-Slavic) for some time afterwards. Proto-Baltic is then thought to have split into East Baltic and West Baltic branches. However, more recent scholarship has suggested that there was no unified Proto-Baltic stage, but that Proto-Balto-Slavic split directly into three groups: Slavic, East Baltic and West Baltic. Under this view, the Baltic family is paraphyletic, and consists of all Balto-Slavic languages that are not Slavic. In the 1960s Vladimir Toporov and Vyacheslav Ivanov made the following conclusions about the relationship between the Baltic and Slavic languages:
These scholars' theses do not contradict the close relationship between the Baltic and Slavic languages and, from a historical perspective, specify the Balto-Slavic languages' evolution.
Finally, a minority of scholars argue that Baltic descended directly from Proto-Indo-European, without an intermediate common Balto-Slavic stage. They argue that the many similarities and shared innovations between Baltic and Slavic are caused by several millennia of contact between the groups, rather than a shared heritage.
The Baltic-speaking peoples likely encompassed an area in eastern Europe much larger than their modern range. As in the case of the Celtic languages of Western Europe, they were reduced by invasion, extermination and assimilation. Studies in comparative linguistics point to a genetic relationship between the languages of the Baltic family and the following extinct languages:
A Baltic classification of Dacian and Thracian was proposed by the Lithuanian scientist Jonas Basanavičius, who insisted that this was the most important work of his life and listed 600 words shared by Balts and Thracians. His theory included Phrygian in the related group, but this found no support and was rejected by other authors, such as Ivan Duridanov, whose own analysis found Phrygian completely lacking parallels in either Thracian or the Baltic languages.
The Bulgarian linguist Ivan Duridanov, who compiled the most extensive list of toponyms, claimed in his first publication that Thracian is genetically linked to the Baltic languages, and in the next one he made the following classification:
"The Thracian language formed a close group with the Baltic, the Dacian and the "Pelasgian" languages. More distant were its relations with the other Indo-European languages, and especially with Greek, the Italic and Celtic languages, which exhibit only isolated phonetic similarities with Thracian; the Tokharian and the Hittite were also distant. "
Of the approximately 200 Thracian words reconstructed by Duridanov, most cognates (138) appear in the Baltic languages, mostly in Lithuanian, followed by Germanic (61), Indo-Aryan (41), Greek (36), Bulgarian (23), Latin (10) and Albanian (8). The cognates of the reconstructed Dacian words in his publication are found mostly in the Baltic languages, followed by Albanian. These parallels have enabled linguists, using the techniques of comparative linguistics, to decipher the meanings of several Dacian and Thracian placenames with, they claim, a high degree of probability. Of 74 Dacian placenames attested in primary sources and considered by Duridanov, a total of 62 have Baltic cognates, most of which were rated "certain" by Duridanov. Of some 300 Thracian geographic names, most parallels were found between Thracian and Baltic geographic names in Duridanov's study. According to him, the most important impression is made by the geographic cognates of Baltic and Thracian:
"the similarity of these parallels stretching frequently on the main element and the suffix simultaneously, which makes a strong impression".
Romanian linguist Sorin Paliga, analysing and criticizing Harvey Mayer's study, did admit "great likeness" between Thracian, the substrate of Romanian, and "some Baltic forms". | [
{
"paragraph_id": 0,
"text": "The Baltic languages are a branch of the Indo-European language family spoken natively or as a second language by a population of about 6.5–7.0 million people mainly in areas extending east and southeast of the Baltic Sea in Europe. Together with the Slavic languages, they form the Balto-Slavic branch of the Indo-European family.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Scholars usually regard them as a single subgroup divided into two branches: West Baltic (containing only extinct languages) and East Baltic (containing at least two living languages, Lithuanian, Latvian, and by some counts including Latgalian and Samogitian as separate languages rather than dialects of those two). The range of the East Baltic linguistic influence once possibly reached as far as the Ural Mountains, but this hypothesis has been questioned.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Old Prussian, a Western Baltic language that became extinct in the 18th century, has possibly retained the greatest number of properties from Proto-Baltic.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Although related, Lithuanian, Latvian, and particularly Old Prussian have lexicons that differ substantially from one another and so the languages are not mutually intelligible. Relatively low mutual interaction for neighbouring languages historically led to gradual erosion of mutual intelligibility; development of their respective linguistic innovations that did not exist in shared Proto-Baltic, the substantial number of false friends and various uses and sources of loanwords from their surrounding languages are considered to be the major reasons for poor mutual intelligibility today.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Within Indo-European, the Baltic languages are generally classified as forming a single family with two branches: Eastern and Western Baltic. But these two branches are sometimes classified as independent branches of Balto-Slavic itself.",
"title": "Branches"
},
{
"paragraph_id": 5,
"text": "It is believed that the Baltic languages are among the most conservative of the currently remaining Indo-European languages, despite their late attestation.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Although the Baltic Aesti tribe was mentioned by ancient historians such as Tacitus as early as 98 CE, the first attestation of a Baltic language was c. 1369, in a Basel epigram of two lines written in Old Prussian. Lithuanian was first attested in a printed book, which is a Catechism by Martynas Mažvydas published in 1547. Latvian appeared in a printed Catechism in 1585.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "One reason for the late attestation is that the Baltic peoples resisted Christianization longer than any other Europeans, which delayed the introduction of writing and isolated their languages from outside influence.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "With the establishment of a German state in Prussia, and the mass influx of Germanic (and to a lesser degree Slavic-speaking) settlers, the Prussians began to be assimilated, and by the end of the 17th century, the Prussian language had become extinct.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "After the Partitions of Poland, most of the Baltic lands were under the rule of the Russian Empire, where the native languages or alphabets were sometimes prohibited from being written down or used publicly in a Russification effort (see Lithuanian press ban for the ban in force from 1864 to 1904).",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Speakers of modern Baltic languages are generally concentrated within the borders of Lithuania and Latvia, and in emigrant communities in the United States, Canada, Australia and the countries within the former borders of the Soviet Union.",
"title": "Geographic distribution"
},
{
"paragraph_id": 11,
"text": "Historically the languages were spoken over a larger area: west to the mouth of the Vistula river in present-day Poland, at least as far east as the Dniepr river in present-day Belarus, perhaps even to Moscow, and perhaps as far south as Kyiv. Key evidence of Baltic language presence in these regions is found in hydronyms (names of bodies of water) that are characteristically Baltic. The use of hydronyms is generally accepted to determine the extent of a culture's influence, but not the date of such influence.",
"title": "Geographic distribution"
},
{
"paragraph_id": 12,
"text": "The eventual expansion of the use of Slavic languages in the south and east, and Germanic languages in the west, reduced the geographic distribution of Baltic languages to a fraction of the area that they formerly covered. The Russian geneticist Oleg Balanovsky speculated that there is a predominance of the assimilated pre-Slavic substrate in the genetics of East and West Slavic populations, according to him the common genetic structure which contrasts East Slavs and Balts from other populations may suggest that the pre-Slavic substrate of the East Slavs consists most significantly of Baltic-speakers, which predated the Slavs in the cultures of the Eurasian steppe according to archaeological references he cites.",
"title": "Geographic distribution"
},
{
"paragraph_id": 13,
"text": "Though Estonia is geopolitically included among the Baltic states due to its location, Estonian is a Finnic language and is not related to the Baltic languages, which are Indo-European.",
"title": "Geographic distribution"
},
{
"paragraph_id": 14,
"text": "The Mordvinic languages, spoken mainly along western tributaries of the Volga, show several dozen loanwords from one or more Baltic languages. These may have been mediated by contacts with the Eastern Balts along the river Oka. In regards to the same geographical location, Asko Parpola, in a 2013 article, suggested that the Baltic presence in this area, dated to c. 200–600 CE, is due to an \"elite superstratum\". However, linguist Petri Kallio [nn] argued that the Volga-Oka is a secondary Baltic-speaking area, expanding from East Baltic, due to a large number of Baltic loanwords in Finnic and Saami.",
"title": "Geographic distribution"
},
{
"paragraph_id": 15,
"text": "Finnish scholars also indicate that Latvian had extensive contacts with Livonian, and, to a lesser extent, to Estonian and South Estonian. Therefore, this contact accounts for the number of Finnic hydronyms in Lithuania and Latvia that increase in a northwards direction.",
"title": "Geographic distribution"
},
{
"paragraph_id": 16,
"text": "Parpola, in the same article, supposed the existence of a Baltic substratum for Finnic, in Estonia and coastal Finland. In the same vein, Kallio argues for the existence of a lost \"North Baltic language\" that would account for loanwords during the evolution of the Finnic branch.",
"title": "Geographic distribution"
},
{
"paragraph_id": 17,
"text": "The Baltic languages are of particular interest to linguists because they retain many archaic features, which are thought to have been present in the early stages of the Proto-Indo-European language. However, linguists have had a hard time establishing the precise relationship of the Baltic languages to other languages in the Indo-European family. Several of the extinct Baltic languages have a limited or nonexistent written record, their existence being known only from the records of ancient historians and personal or place names. All of the languages in the Baltic group (including the living ones) were first written down relatively late in their probable existence as distinct languages. These two factors combined with others have obscured the history of the Baltic languages, leading to a number of theories regarding their position in the Indo-European family.",
"title": "Comparative linguistics"
},
{
"paragraph_id": 18,
"text": "The Baltic languages show a close relationship with the Slavic languages, and are grouped with them in a Balto-Slavic family by most scholars. This family is considered to have developed from a common ancestor, Proto-Balto-Slavic. Later on, several lexical, phonological and morphological dialectisms developed, separating the various Balto-Slavic languages from each other. Although it is generally agreed that the Slavic languages developed from a single more-or-less unified dialect (Proto-Slavic) that split off from common Balto-Slavic, there is more disagreement about the relationship between the Baltic languages.",
"title": "Comparative linguistics"
},
{
"paragraph_id": 19,
"text": "The traditional view is that the Balto-Slavic languages split into two branches, Baltic and Slavic, with each branch developing as a single common language (Proto-Baltic and Proto-Slavic) for some time afterwards. Proto-Baltic is then thought to have split into East Baltic and West Baltic branches. However, more recent scholarship has suggested that there was no unified Proto-Baltic stage, but that Proto-Balto-Slavic split directly into three groups: Slavic, East Baltic and West Baltic. Under this view, the Baltic family is paraphyletic, and consists of all Balto-Slavic languages that are not Slavic. In the 1960s Vladimir Toporov and Vyacheslav Ivanov made the following conclusions about the relationship between the Baltic and Slavic languages:",
"title": "Comparative linguistics"
},
{
"paragraph_id": 20,
"text": "These scholars' theses do not contradict the close relationship between Baltic and Slavic languages and, from a historical perspective, specify the Baltic-Slavic languages' evolution.",
"title": "Comparative linguistics"
},
{
"paragraph_id": 21,
"text": "Finally, a minority of scholars argue that Baltic descended directly from Proto-Indo-European, without an intermediate common Balto-Slavic stage. They argue that the many similarities and shared innovations between Baltic and Slavic are caused by several millennia of contact between the groups, rather than a shared heritage.",
"title": "Comparative linguistics"
},
{
"paragraph_id": 22,
"text": "The Baltic-speaking peoples likely encompassed an area in eastern Europe much larger than their modern range. As in the case of the Celtic languages of Western Europe, they were reduced by invasion, extermination and assimilation. Studies in comparative linguistics point to genetic relationship between the languages of the Baltic family and the following extinct languages:",
"title": "Comparative linguistics"
},
{
"paragraph_id": 23,
"text": "The Baltic classification of Dacian and Thracian has been proposed by the Lithuanian scientist Jonas Basanavičius, who insisted this is the most important work of his life and listed 600 identical words of Balts and Thracians. His theory included Phrygian in the related group, but this did not find support and was disapproved among other authors, such as Ivan Duridanov, whose own analysis found Phrygian completely lacking parallels in either Thracian or Baltic languages.",
"title": "Comparative linguistics"
},
{
"paragraph_id": 24,
"text": "The Bulgarian linguist Ivan Duridanov, who improved the most extensive list of toponyms, in his first publication claimed that Thracian is genetically linked to the Baltic languages and in the next one he made the following classification:",
"title": "Comparative linguistics"
},
{
"paragraph_id": 25,
"text": "\"The Thracian language formed a close group with the Baltic, the Dacian and the \"Pelasgian\" languages. More distant were its relations with the other Indo-European languages, and especially with Greek, the Italic and Celtic languages, which exhibit only isolated phonetic similarities with Thracian; the Tokharian and the Hittite were also distant. \"",
"title": "Comparative linguistics"
},
{
"paragraph_id": 26,
"text": "Of about 200 reconstructed Thracian words by Duridanov most cognates (138) appear in the Baltic languages, mostly in Lithuanian, followed by Germanic (61), Indo-Aryan (41), Greek (36), Bulgarian (23), Latin (10) and Albanian (8). The cognates of the reconstructed Dacian words in his publication are found mostly in the Baltic languages, followed by Albanian. Parallels have enabled linguists, using the techniques of comparative linguistics, to decipher the meanings of several Dacian and Thracian placenames with, they claim, a high degree of probability. Of 74 Dacian placenames attested in primary sources and considered by Duridanov, a total of 62 have Baltic cognates, most of which were rated \"certain\" by Duridanov. For a big number of 300 Thracian geographic names most parallels were found between Thracian and Baltic geographic names in the study of Duridanov. According to him the most important impression make the geographic cognates of Baltic and Thracian",
"title": "Comparative linguistics"
},
{
"paragraph_id": 27,
"text": "\"the similarity of these parallels stretching frequently on the main element and the suffix simultaneously, which makes a strong impression\".",
"title": "Comparative linguistics"
},
{
"paragraph_id": 28,
"text": "Romanian linguist Sorin Paliga, analysing and criticizing Harvey Mayer's study, did admit \"great likeness\" between Thracian, the substrate of Romanian, and \"some Baltic forms\".",
"title": "Comparative linguistics"
}
] | The Baltic languages are a branch of the Indo-European language family spoken natively or as a second language by a population of about 6.5–7.0 million people mainly in areas extending east and southeast of the Baltic Sea in Europe. Together with the Slavic languages, they form the Balto-Slavic branch of the Indo-European family. Scholars usually regard them as a single subgroup divided into two branches: West Baltic and East Baltic. The range of the East Baltic linguistic influence once possibly reached as far as the Ural Mountains, but this hypothesis has been questioned. Old Prussian, a Western Baltic language that became extinct in the 18th century, has possibly retained the greatest number of properties from Proto-Baltic. Although related, Lithuanian, Latvian, and particularly Old Prussian have lexicons that differ substantially from one another, and so the languages are not mutually intelligible. Historically, relatively low interaction between these neighbouring languages led to a gradual erosion of mutual intelligibility; the development of their respective linguistic innovations absent from shared Proto-Baltic, the substantial number of false friends, and the varied uses and sources of loanwords from the surrounding languages are considered the major reasons for poor mutual intelligibility today. | 2001-10-16T19:18:09Z | 2023-12-31T01:28:05Z | [
"Template:Dagger",
"Template:Clarify",
"Template:Cite encyclopedia",
"Template:Authority control",
"Template:Distinguish",
"Template:Use dmy dates",
"Template:See also",
"Template:Indo-European languages",
"Template:Ill",
"Template:Cite book",
"Template:Cite web",
"Template:ISBN",
"Template:Infobox language family",
"Template:Circa",
"Template:Sfn",
"Template:ISSN",
"Template:Refbegin",
"Template:Short description",
"Template:More citations needed",
"Template:Attribution needed",
"Template:Citation needed",
"Template:Baltic languages",
"Template:Pn",
"Template:Asterisk",
"Template:Reflist",
"Template:Cite journal",
"Template:Refend",
"Template:By whom?",
"Template:-",
"Template:Citation"
] | https://en.wikipedia.org/wiki/Baltic_languages |